
AI Coding Rules for Rust Projects

Ready-to-use AI coding rules for Rust projects covering ownership, error handling with Result and Option, trait patterns, async code, and Cargo conventions.


AI coding tools are getting better at Rust, but they still get tripped up by the things that make Rust different from every other language. Ownership moves catch them off guard. They reach for .unwrap() instead of proper error propagation. They generate trait implementations that compile but don't follow community conventions. And they almost never get your project's module structure right on the first try.

The fix is the same as it is for any language: give the AI explicit rules. But Rust rules look different from React rules or Python rules. The compiler enforces so much already that your rules can focus on the design decisions the compiler can't make for you -- idiomatic patterns, error handling strategy, crate selection, and project organization.

This guide covers the rules you should set for a Rust project, with copy-paste examples you can drop into your Cursor rules, CLAUDE.md, or any other AI tool's config.


Start with the project context

Every rules file should open with your stack. In Rust projects, this means your edition, key crates, and what kind of binary or library you're building:

## Tech stack
- Language: Rust (2024 edition)
- Build: Cargo workspaces
- Async runtime: Tokio
- Web framework: Axum
- Database: SQLx with PostgreSQL
- Serialization: serde + serde_json
- Error handling: thiserror for libraries, anyhow for binaries
- Testing: built-in #[test] + cargo-nextest
- Linting: clippy (pedantic)

This immediately steers the AI away from wrong defaults. Without it, you'll get code that uses reqwest when you use ureq, or actix-web when you're on axum, or the 2021 edition when you've moved to 2024. State your stack, and the AI will follow it.


Ownership and borrowing rules

This is where AI tools struggle the most with Rust. They'll clone everything to avoid borrow checker errors, or they'll take ownership when a reference would be fine. Neither is what you want.

## Ownership and borrowing

- Prefer borrowing (&T, &mut T) over cloning. Only clone when:
  - The data needs to outlive the current scope
  - You're sending data to another thread
  - The type is cheap to clone (small Copy types, Arc, etc.)
- Never use .clone() just to satisfy the borrow checker. Fix the lifetime issue instead.
- Use &str for function parameters that only read string data, not String
- Return String when the function creates a new string, &str when returning borrowed data
- Use Cow<str> when a function sometimes allocates and sometimes borrows
- For struct fields that own their data, use String, Vec<T>, etc.
- For struct fields that borrow, add explicit lifetime annotations

A concrete example helps even more:

## Function signature conventions

Prefer this:
  fn process_name(name: &str) -> Result<String>

Not this:
  fn process_name(name: String) -> Result<String>

Take ownership only when you actually need to store or move the value:
  struct User {
      name: String,  // owns the data
  }

  impl User {
      fn name(&self) -> &str {  // borrows for reading
          &self.name
      }
  }

Without these rules, AI tools default to the path of least resistance: taking String everywhere and cloning when the borrow checker complains. That compiles, but it's wasteful and un-idiomatic.
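The Cow<str> rule deserves its own sketch, since it's the pattern AI tools know least. Here's a minimal, hypothetical example of a function that borrows when no change is needed and allocates only when it must:

```rust
use std::borrow::Cow;

// Borrows when the input is already lowercase; allocates only when
// a transformation is actually required.
fn to_lowercase_lazy(input: &str) -> Cow<'_, str> {
    if input.chars().all(|c| c.is_lowercase() || !c.is_alphabetic()) {
        Cow::Borrowed(input)
    } else {
        Cow::Owned(input.to_lowercase())
    }
}

fn main() {
    let unchanged = to_lowercase_lazy("hello world");
    assert!(matches!(unchanged, Cow::Borrowed(_))); // no allocation
    let changed = to_lowercase_lazy("Hello World");
    assert!(matches!(changed, Cow::Owned(_))); // allocated once
    assert_eq!(changed, "hello world");
}
```

Callers that only read the result never pay for an allocation on the already-normalized path.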


Error handling with Result and Option

Error handling is the second biggest source of bad AI-generated Rust. The default AI behavior is .unwrap() on everything, which is fine for a quick prototype but unacceptable in production code.

## Error handling

- Never use .unwrap() or .expect() in library code or production paths
- Use .unwrap() only in tests and examples where panicking is acceptable
- Use the ? operator for error propagation
- For libraries: define error types with thiserror
- For application/binary code: use anyhow::Result for convenience
- For Option types, prefer pattern matching or combinators over .unwrap()
- Map errors at module boundaries -- don't leak implementation details

### Error type pattern (libraries)

use thiserror::Error;

#[derive(Debug, Error)]
pub enum AppError {
    #[error("database error: {0}")]
    Database(#[from] sqlx::Error),

    #[error("not found: {entity} with id {id}")]
    NotFound { entity: &'static str, id: String },

    #[error("validation failed: {0}")]
    Validation(String),
}

### Result alias

pub type Result<T> = std::result::Result<T, AppError>;

Also set rules for how to handle Option types, since AI tools tend to reach for .unwrap() on those too:

## Working with Option

Prefer combinators and pattern matching:
  // Good: use if let
  if let Some(user) = find_user(id) {
      process(user);
  }

  // Good: use combinators like as_deref/unwrap_or for defaults
  let name = user.middle_name.as_deref().unwrap_or("N/A");

  // Good: use ? with ok_or for converting to Result
  let user = find_user(id).ok_or(AppError::NotFound {
      entity: "user",
      id: id.to_string(),
  })?;

  // Bad: panic on None
  let user = find_user(id).unwrap();

Struct and type design

AI tools generate structs that work but don't follow Rust conventions. Setting explicit rules here saves a lot of cleanup:

## Struct design

- Derive Debug on all types
- Derive Clone, PartialEq, Eq where appropriate
- Derive serde::Serialize and serde::Deserialize on types that cross API boundaries
- Use #[serde(rename_all = "camelCase")] for JSON-facing types
- Order derives alphabetically: Clone, Debug, Deserialize, Eq, Hash, PartialEq, Serialize
- Make struct fields private by default; expose via methods
- Use the builder pattern or constructor functions, not public fields, for types with invariants

### Example

#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct CreateUserRequest {
    pub email: String,
    pub display_name: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub avatar_url: Option<String>,
}
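The "private fields for types with invariants" rule is worth showing too. A sketch with a hypothetical Email type: the field is private, so the checked constructor is the only way to build a valid value:

```rust
// Invariant: the inner string always contains '@'. Because the field
// is private, code outside this module can't construct an invalid Email.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct Email(String);

impl Email {
    pub fn new(raw: &str) -> Result<Self, String> {
        if raw.contains('@') {
            Ok(Email(raw.to_string()))
        } else {
            Err(format!("invalid email: {raw}"))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let ok = Email::new("user@example.com").unwrap();
    assert_eq!(ok.as_str(), "user@example.com");
    assert!(Email::new("not-an-email").is_err());
}
```

Once a value exists, every consumer can rely on the invariant without re-validating.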

Trait patterns

Traits are where Rust's type system shines, but AI tools tend to either avoid them or over-use them. Set clear guidance:

## Traits

- Use traits to define behavior boundaries, not just for polymorphism
- Prefer impl Trait in argument position for simple cases:
    fn process(reader: impl Read) -> Result<()>
- Use dyn Trait (trait objects) when you need dynamic dispatch or heterogeneous collections
- Use generic bounds (where clauses) when the function body needs multiple trait constraints
- Always implement Display for error types
- Implement From<T> for error conversions instead of manual mapping
- Keep trait definitions small and focused -- prefer multiple small traits over one large trait

### Async traits

- Use async fn in traits directly (stabilized in Rust 1.75)
- If you need the trait to be object-safe, use the async_trait crate
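The From-for-error-conversions rule is what makes ? ergonomic. A minimal sketch, written with manual impls rather than thiserror so it stands alone (thiserror's #[from] attribute generates equivalent code):

```rust
use std::num::ParseIntError;

#[derive(Debug)]
enum AppError {
    Parse(ParseIntError),
}

impl std::fmt::Display for AppError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            AppError::Parse(e) => write!(f, "invalid number: {e}"),
        }
    }
}

impl std::error::Error for AppError {}

// With From in place, ? converts the underlying error automatically --
// no manual .map_err() at each call site.
impl From<ParseIntError> for AppError {
    fn from(e: ParseIntError) -> Self {
        AppError::Parse(e)
    }
}

fn parse_port(raw: &str) -> Result<u16, AppError> {
    let port: u16 = raw.parse()?; // ParseIntError -> AppError via From
    Ok(port)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not-a-port").is_err());
}
```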

Module and project structure

Rust's module system is different enough from other languages that AI tools get it wrong often. Be explicit about your project layout:

## Project structure

src/
  main.rs          # Entry point, minimal logic
  lib.rs           # Public API surface (re-exports)
  config.rs        # Configuration loading
  error.rs         # Error types
  db/
    mod.rs         # Database module
    queries.rs     # Query functions
    models.rs      # Database row types
  api/
    mod.rs         # Route definitions
    handlers.rs    # Request handlers
    middleware.rs  # Middleware
  domain/
    mod.rs         # Business logic
    user.rs        # User domain types and operations

## Module conventions
- Use mod.rs files for module directories
- Re-export public types from mod.rs
- Keep main.rs thin -- just setup and startup
- Separate database models from domain types
- Put integration tests in tests/ directory, unit tests in the same file as the code

Without this rule, AI tools will dump everything into main.rs or create a random file structure that doesn't match your project's conventions. The module structure rule is especially important for larger projects.


Cargo and dependency conventions

AI tools will add dependencies without thinking. Set boundaries:

## Cargo conventions

- Do not add new crate dependencies without asking first
- Use workspace dependencies in Cargo.toml for shared crates
- Specify full versions in Cargo.toml (use "1.2.3", not "1") and let Cargo.lock pin exact versions
- Prefer well-maintained crates from the Rust ecosystem:
  - HTTP client: reqwest
  - JSON: serde_json
  - CLI parsing: clap (derive)
  - Logging: tracing + tracing-subscriber
  - Async: tokio
  - Database: sqlx
- Run cargo clippy --all-targets before committing
- Run cargo fmt before committing
- All public items must have doc comments (///)

The "do not add dependencies without asking" rule is critical. Without it, AI tools will pull in crates for things you can do with the standard library, or pick an obscure crate instead of the one your project already uses.
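The workspace-dependency rule might look like this in practice. A sketch with illustrative crate versions and member names:

```toml
# Root Cargo.toml: declare shared dependency versions once.
[workspace]
members = ["api", "domain"]

[workspace.dependencies]
tokio = { version = "1.38", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }

# In a member crate's Cargo.toml: inherit the workspace version.
[dependencies]
tokio = { workspace = true }
serde = { workspace = true }
```

This keeps every crate in the workspace on the same version of each shared dependency, so the AI can't silently introduce a second version.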


Async Rust rules

If your project uses async Rust (and most non-trivial projects do), you need specific rules for it:

## Async patterns

- Use Tokio as the async runtime -- do not mix runtimes
- Mark functions as async only when they perform I/O or call other async functions
- Do not use block_on inside async code
- Use tokio::spawn for concurrent tasks, not std::thread::spawn
- Use tokio::select! for racing multiple futures
- Prefer structured concurrency: spawn tasks and join them, don't fire and forget
- For CPU-heavy work inside async code, use tokio::task::spawn_blocking
- Always set timeouts on network operations:
    tokio::time::timeout(Duration::from_secs(30), async_operation()).await?

AI tools frequently mix sync and async code in ways that either deadlock or block the executor. The spawn_blocking rule is especially important -- without it, the AI will put CPU-intensive operations directly in async functions, starving other tasks.


Testing rules

Rust has a strong built-in testing story, but AI tools still need guidance on your conventions:

## Testing

- Unit tests go in a #[cfg(test)] mod tests block at the bottom of each file
- Integration tests go in the tests/ directory
- Use #[tokio::test] for async tests
- Test error cases, not just happy paths
- Use assert_eq! and assert_ne! for comparisons (better error messages)
- For test fixtures, use a helper function that returns test data -- not global state
- Mock external services with traits, not with conditional compilation

### Test structure

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_valid_email() {
        let result = parse_email("user@example.com");
        assert!(result.is_ok());
    }

    #[test]
    fn rejects_empty_email() {
        let result = parse_email("");
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn creates_user_in_database() {
        let pool = setup_test_db().await;
        let user = create_user(&pool, "test@example.com").await.unwrap();
        assert_eq!(user.email, "test@example.com");
    }
}

A complete rules file for Rust

Here's a starter rules file you can adapt for your Rust project:

# Project: [Your App Name]

## Stack
- Rust 2024 edition
- Async runtime: Tokio
- Web framework: Axum
- Database: SQLx + PostgreSQL
- Serialization: serde
- Error handling: thiserror (lib), anyhow (bin)
- Linting: clippy (pedantic), rustfmt

## Ownership
- Prefer &str over String in function parameters
- Never .clone() just to fix borrow checker errors
- Use Cow<str> when a function may or may not allocate

## Error handling
- No .unwrap() in production code
- Use ? for propagation
- Define typed errors with thiserror
- Map errors at module boundaries

## Types
- Derive Debug on everything
- #[serde(rename_all = "camelCase")] on API-facing types
- Private fields with accessor methods for types with invariants

## Async
- Tokio only, no mixing runtimes
- spawn_blocking for CPU-heavy work
- Timeouts on all network operations

## Dependencies
- Do not add crates without asking
- Use workspace dependencies for shared crates

## Testing
- Unit tests in #[cfg(test)] mod tests at end of file
- Integration tests in tests/
- #[tokio::test] for async tests
- Test both success and error paths

## Formatting
- cargo fmt before every commit
- cargo clippy --all-targets must pass with no warnings

Why language-specific rules matter

Generic rules like "write clean code" don't work in Rust any more than they work anywhere else. But Rust-specific rules carry extra weight because the language has strong opinions. The borrow checker enforces memory safety, but it can't enforce idiomatic usage. It can't tell the AI to use &str instead of String, to propagate errors with ? instead of .unwrap(), or to put tests in the same file instead of a separate directory.

Your AI tool doesn't know your crate choices, your module layout, or whether you prefer thiserror or anyhow. It doesn't know you run clippy --pedantic in CI or that you organize your code into domain, api, and db layers. Rules fill that gap.

For general principles that apply across all languages, read 10 Best Practices for Writing AI Coding Rules. And if your Rust project is part of a larger monorepo with multiple languages, check out how to manage AI rules in a monorepo for strategies on layering language-specific rules on top of shared conventions.


Keeping rules in sync

Writing rules is the first step. Keeping them consistent across your team is the ongoing challenge. When one developer updates the error handling pattern and another doesn't pull the change, you end up with two different conventions in the same codebase.

localskills.sh solves this by letting you publish your Rust rules as a versioned skill. Everyone installs from the same source. When you update the rules, everyone gets the new version:

localskills install your-team/rust-conventions --target cursor claude windsurf

The format differences between tools are handled automatically. One skill works in Cursor's .mdc files, Claude Code's CLAUDE.md, and Windsurf's .windsurfrules.


Ready to standardize your Rust conventions across your team and every AI tool you use? Create your free account and publish your first skill today.

npm install -g @localskills/cli
localskills login
localskills publish