# AI Coding Rules for Go Projects
Go has strong opinions about how code should be written. The standard library sets clear patterns, `gofmt` enforces a single formatting style, and the community has over a decade of established idioms around error handling, package design, and concurrency.
AI coding assistants know all of this, in theory. In practice, they get the details wrong. They wrap errors without adding context. They define interfaces in the wrong package. They reach for channels when a mutex would be simpler. They generate `init()` functions and package-level variables when explicit initialization would be cleaner.
The fix is the same as with any language: write rules that encode your project's specific conventions. Go just happens to have a set of idioms that are unusually well-defined, which makes writing good rules straightforward once you know what to specify.
This guide covers the rules that matter most for Go projects, with copy-paste examples you can drop into your Cursor rules, CLAUDE.md, or Windsurf rules file.
## Error handling
This is where AI tools struggle the most with Go. The language's explicit error handling is one of its best features, but AI-generated code frequently handles errors poorly: returning a bare `err` without context, using `fmt.Errorf` inconsistently, or worse, silently ignoring errors with `_`.
```
## Error handling

- Always wrap errors with context using fmt.Errorf and %w:
  return fmt.Errorf("fetching user %s: %w", userID, err)
- Never return bare errors without added context
- Never use _ to discard errors unless the function truly cannot fail in practice
- Use errors.Is() and errors.As() for error checking, never == comparison on error strings
- Define sentinel errors as package-level variables:
  var ErrNotFound = errors.New("not found")
  var ErrPermissionDenied = errors.New("permission denied")
- For functions that return (T, error), always check the error before using T
- Do not use panic() for expected error conditions. Panic is reserved for programmer bugs.
```
The wrapping rule is the most important one here. Without it, you get error messages like "connection refused" bubbling up through six layers of the call stack with no indication of what operation actually failed. With wrapping, you get "creating order: charging payment: calling stripe API: connection refused", which tells you exactly where to look.
A secondary rule worth adding if your project uses custom error types:
```
## Custom errors

- Use typed errors for domain-specific failure modes:
  type ValidationError struct {
      Field   string
      Message string
  }

  func (e *ValidationError) Error() string {
      return fmt.Sprintf("validation failed on %s: %s", e.Field, e.Message)
  }
- Callers should use errors.As() to check for typed errors:
  var valErr *ValidationError
  if errors.As(err, &valErr) {
      // handle validation failure
  }
```
## Interface design
Go interfaces are small by convention. The standard library's `io.Reader` and `io.Writer` are one method each. But AI tools, especially those trained on Java and C# codebases, tend to generate large interfaces with five or six methods that couple the caller to a specific implementation.
```
## Interfaces

- Define interfaces in the package that USES them, not the package that implements them
- Keep interfaces small: 1-3 methods is ideal
- Name single-method interfaces with the -er suffix: Reader, Writer, Closer, Validator
- Do not define interfaces preemptively. Write concrete types first, extract interfaces only when you need polymorphism or testability
- Accept interfaces, return structs:
  // Good: function accepts an interface
  func ProcessRecords(r RecordReader) error { ... }

  // Good: function returns a concrete type
  func NewUserService(db *sql.DB) *UserService { ... }

  // Bad: function returns an interface
  func NewUserService(db *sql.DB) UserServiceInterface { ... }
```
The "define interfaces where they're used" rule is critical and frequently violated by AI-generated code. If package `storage` defines a `StorageInterface` and package `handler` imports it, the handler is now coupled to the storage package's definition. If instead the handler defines its own `type DataStore interface { Get(id string) ([]byte, error) }`, it only depends on the behavior it actually needs.
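To make that concrete, here is a minimal single-file sketch (in a real project the interface would live in the consuming package; `DataStore`, `memStore`, and `handle` are hypothetical names):

```go
package main

import "fmt"

// DataStore is declared by the consumer and names only the
// behavior the consumer needs.
type DataStore interface {
	Get(id string) ([]byte, error)
}

// memStore stands in for a concrete type from a storage package.
// It satisfies DataStore implicitly, with no import of the consumer.
type memStore struct {
	data map[string][]byte
}

func (m *memStore) Get(id string) ([]byte, error) {
	b, ok := m.data[id]
	if !ok {
		return nil, fmt.Errorf("get %s: not found", id)
	}
	return b, nil
}

// handle depends only on the one-method interface it defined.
func handle(ds DataStore, id string) string {
	b, err := ds.Get(id)
	if err != nil {
		return "error: " + err.Error()
	}
	return string(b)
}

func main() {
	store := &memStore{data: map[string][]byte{"42": []byte("alice")}}
	fmt.Println(handle(store, "42")) // alice
}
```

Because satisfaction is implicit, swapping `memStore` for a database-backed implementation, or a test stub, requires no changes on the consumer side.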
## Package structure
AI tools will generate reasonable package names, but they often get the boundaries wrong. They create too many small packages, or they lump unrelated functionality together. Explicit rules about your project layout prevent both problems.
```
## Package structure

This project follows standard Go project layout:

cmd/
  server/      # main package, application entry point
  worker/      # background job runner entry point
internal/
  handler/     # HTTP handlers
  service/     # Business logic
  repository/  # Database access
  model/       # Domain types and structs
  middleware/  # HTTP middleware
pkg/
  client/      # Public client library (if applicable)

Rules:
- The cmd/ directory contains main packages only. No business logic here.
- internal/ is for application-private code. Use it for everything not meant to be imported by other modules.
- Do not create packages named "util", "helpers", or "common". Put functions in the package that uses them, or name the package after what it does (e.g., "validation", "httputil").
- Package names are singular, lowercase, one word when possible: "user" not "users", "auth" not "authentication".
- Avoid circular imports. If two packages need each other, extract shared types into a third package (usually model/).
```
The ban on `util` packages deserves emphasis. Every Go project that allows a `util` package eventually ends up with a 2,000-line file full of unrelated functions. AI tools accelerate this problem because they default to dumping helper functions into whatever catch-all package already exists.
## Concurrency patterns
Go makes concurrency easy to start and hard to get right. AI tools will happily generate goroutines without thinking about lifecycle management, synchronization, or graceful shutdown. Rules here prevent the most common problems.
```
## Concurrency

- Always pass context.Context as the first parameter to functions that do I/O or may block
- Every goroutine must have a clear shutdown path. Use context cancellation or done channels.
- Use errgroup (golang.org/x/sync/errgroup) for coordinating groups of goroutines that return errors:
  g, ctx := errgroup.WithContext(ctx)
  g.Go(func() error { ... })
  if err := g.Wait(); err != nil { ... }
- Prefer sync.Mutex over channels for protecting shared state
- Prefer channels for communicating between goroutines that produce/consume values
- Never start a goroutine in a library function without giving the caller control over its lifetime
- Always use sync.WaitGroup or errgroup to wait for goroutines before returning from a function
```
The rule about goroutine lifetime is one that AI tools violate constantly. A generated function might spawn a goroutine to "do something in the background" without any mechanism to wait for it, cancel it, or know when it's done. This leads to goroutine leaks in tests and data races in production.
## Struct and method patterns
```
## Structs and methods

- Use constructor functions named New[Type] or New:
  func NewOrderService(repo OrderRepository, logger *slog.Logger) *OrderService { ... }
- Use pointer receivers for methods that modify state or are on large structs
- Use value receivers for small, immutable types (coordinates, timestamps, IDs)
- Be consistent: if any method on a type uses a pointer receiver, all methods should
- Prefer struct embedding for reusing behavior, not for "inheritance":
  type Server struct {
      http.Handler // embedding for implementing the interface
      logger *slog.Logger
  }
- Use functional options for constructors with many optional parameters:
  func NewClient(baseURL string, opts ...Option) *Client { ... }
```
## HTTP handlers
If your project is an HTTP API, you need rules about how handlers are structured. Without them, AI tools will mix handler patterns: sometimes using the standard library, sometimes reaching for whatever framework they think you're using.
```
## HTTP handlers

This project uses the standard library net/http (Go 1.22+ routing).

- Handler functions take (w http.ResponseWriter, r *http.Request)
- Use r.PathValue("name") for path parameters (Go 1.22+ ServeMux patterns)
- Parse request bodies with json.NewDecoder(r.Body).Decode(&input)
- Always validate input before processing
- Return JSON responses with a helper:
  func writeJSON(w http.ResponseWriter, status int, data any) {
      w.Header().Set("Content-Type", "application/json")
      w.WriteHeader(status)
      json.NewEncoder(w).Encode(data)
  }
- Use consistent response shapes:
  Success: { "data": ... }
  Error:   { "error": "message here" }
- Group routes by domain in the router setup:
  mux.HandleFunc("GET /api/users/{id}", h.GetUser)
  mux.HandleFunc("POST /api/users", h.CreateUser)
```
If you're using a framework like Chi, Echo, or Gin instead, swap in the corresponding patterns. The important thing is that the AI knows which one to use and how your project structures its handlers.
## Testing
Go's testing package is simple by design, but AI tools will still generate tests that don't match your conventions. Table-driven tests, the standard Go pattern, are worth enforcing explicitly.
```
## Testing

- Use table-driven tests for functions with multiple input/output cases:
  func TestParseAmount(t *testing.T) {
      tests := []struct {
          name    string
          input   string
          want    int64
          wantErr bool
      }{
          {name: "valid dollars", input: "42.00", want: 4200},
          {name: "no cents", input: "42", want: 4200},
          {name: "invalid", input: "abc", wantErr: true},
      }
      for _, tt := range tests {
          t.Run(tt.name, func(t *testing.T) {
              got, err := ParseAmount(tt.input)
              if (err != nil) != tt.wantErr {
                  t.Fatalf("ParseAmount(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
              }
              if got != tt.want {
                  t.Errorf("ParseAmount(%q) = %v, want %v", tt.input, got, tt.want)
              }
          })
      }
  }
- Use t.Helper() in test helper functions so error locations report correctly
- Use testify/require for assertions that should stop the test, testify/assert for soft checks
- Test files live alongside source: user.go and user_test.go in the same package
- Use the _test package suffix for black-box tests (e.g., package user_test) to test the public API
- Use the same package (no suffix) for white-box tests when you need access to internals
- Mock interfaces with hand-written mocks or a tool like mockgen. Do not use reflection-based mocking.
```
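To illustrate the hand-written mock rule, here is a minimal sketch (`UserStore`, `stubUserStore`, and `Greet` are hypothetical names): the mock is just a struct satisfying the interface, with behavior the test controls directly.

```go
package main

import (
	"errors"
	"fmt"
)

// UserStore is the interface the code under test depends on.
type UserStore interface {
	GetName(id string) (string, error)
}

// stubUserStore is a hand-written mock: no reflection, no codegen,
// just a map the test populates with the cases it needs.
type stubUserStore struct {
	names map[string]string
}

func (s *stubUserStore) GetName(id string) (string, error) {
	name, ok := s.names[id]
	if !ok {
		return "", errors.New("not found")
	}
	return name, nil
}

// Greet is the function under test.
func Greet(store UserStore, id string) string {
	name, err := store.GetName(id)
	if err != nil {
		return "hello, stranger"
	}
	return "hello, " + name
}

func main() {
	store := &stubUserStore{names: map[string]string{"1": "alice"}}
	fmt.Println(Greet(store, "1")) // hello, alice
	fmt.Println(Greet(store, "2")) // hello, stranger
}
```

Because the interface is small (per the interface rules above), the mock stays a few lines long; large interfaces are what push teams toward reflection-based mocking frameworks.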
## Logging and observability
```
## Logging

- Use slog (log/slog) for structured logging. Do not use fmt.Println or log.Printf.
- Pass the logger as an explicit dependency, not as a global:
  type Server struct {
      logger *slog.Logger
  }
- Use structured fields, not string interpolation:
  s.logger.Info("order created", "order_id", order.ID, "user_id", userID) // good
  s.logger.Info(fmt.Sprintf("order %s created by %s", order.ID, userID))  // bad
- Log at appropriate levels:
  Debug: internal state useful during development
  Info:  normal operations (request handled, job completed)
  Warn:  recoverable issues (retry succeeded, deprecated feature used)
  Error: failures that need attention (request failed, connection lost)
```
## A complete Go rules file
Here's a starter rules file that pulls together the patterns above. Adapt it to your specific project:
```
# Project: [Your Go Service]

## Stack
- Go 1.22+
- Standard library net/http with Go 1.22 ServeMux patterns
- PostgreSQL with pgx
- slog for structured logging
- Docker for deployment

## Error handling
- Wrap all errors with fmt.Errorf and %w
- Define sentinel errors as package-level vars
- Use errors.Is/errors.As, never string comparison
- No panic for expected error conditions

## Interfaces
- Define where used, not where implemented
- 1-3 methods maximum
- Accept interfaces, return structs
- Extract only when needed for testing or polymorphism

## Package design
- cmd/ for entry points, internal/ for app code
- No util/helpers/common packages
- Singular, lowercase package names
- No circular imports

## Concurrency
- context.Context as first parameter for I/O functions
- errgroup for goroutine coordination
- Every goroutine has a shutdown path
- Mutex for shared state, channels for communication

## HTTP handlers
- Standard net/http handler signature
- r.PathValue() for path params
- json.NewDecoder for request parsing
- Consistent JSON response shapes

## Testing
- Table-driven tests with t.Run subtests
- testify for assertions
- Hand-written or mockgen mocks
- Test files alongside source files

## Logging
- slog only, no fmt.Println
- Structured fields, not string formatting
- Logger as explicit dependency
```
## Rules only work if they stay current
Writing rules once and forgetting about them is a recipe for drift. Your project evolves: you upgrade to a new Go version, swap out your HTTP framework, or change your error handling strategy. If the rules don't update with the code, the AI starts generating patterns that conflict with what your team actually does.
For a solo project, keeping one file updated is manageable. For a team, it breaks down fast. Someone updates the error handling rules in their local copy, someone else doesn't pull the change, and now the AI is generating two different error wrapping patterns depending on who's asking.
The practical solution is to treat rules like a shared dependency. Publish them once, install them everywhere, and update them in one place when conventions change. This is what localskills.sh does: your Go rules become a versioned skill that every team member installs with one command. When you update the rules, everyone pulls the new version.
You can learn more about keeping rules consistent across different AI tools in AI coding rules across tools, and see practical examples of how other teams structure their rules in real cursorrules examples. For the general principles behind writing effective rules regardless of language, check out AI coding rules best practices.
```bash
localskills install your-team/go-rules --target cursor claude windsurf
```
One command, every tool, same conventions. No more drift.
Ready to standardize your Go project's AI rules across your entire team? Create your free account and publish your first skill today.
```bash
npm install -g @localskills/cli
localskills login
localskills publish
```