# How to Enforce Coding Standards with AI Assistants
Move beyond linters. Use AI coding rules to enforce architectural patterns, naming conventions, and design decisions that static analysis can't catch.
## The limits of linters and formatters
Prettier formats your code. ESLint catches common mistakes. TypeScript's compiler prevents type errors. These tools are indispensable, but they only cover a narrow slice of what "coding standards" actually means on a real team.
Ask any engineering lead what they actually care about and you'll hear things like:
- "We don't use Redux anymore. We use Zustand. But the AI keeps generating Redux code."
- "Every API route needs to log to our observability platform before returning."
- "New engineers keep writing class components even though we're fully on hooks."
- "The AI suggested a third-party library we deliberately decided not to use."
None of these can be caught by a linter. They require context: knowledge of your team's architecture, history, and deliberate decisions. That's exactly what AI coding rules are designed to carry.
## What AI rules can enforce that static analysis can't
Static analysis tools work on syntax and types. AI coding rules work on intent and patterns. Here's the difference in practice:
| What you want to enforce | ESLint / Prettier | AI coding rules |
|---|---|---|
| Consistent indentation | Yes | Yes |
| No `var`, use `const`/`let` | Yes | Yes |
| Use our logging library, not `console.log` | Partial | Yes |
| Our API route structure and error format | No | Yes |
| Preferred state management approach | No | Yes |
| Architecture layer boundaries | No | Yes |
| When to split a component vs. keep it inline | No | Yes |
| Our naming convention for event handlers | Partial | Yes |
| Don't add dependencies without team approval | No | Yes |
| Use our custom hooks before reaching for a library | No | Yes |
The bottom rows are where AI rules shine. These are the standards that only exist in senior engineers' heads: the institutional knowledge that gets lost when people leave, and violated when someone new joins.
## Three categories of standards worth encoding
### 1. Architectural patterns
Architectural decisions are the hardest to enforce because they're invisible to static tools. But they're also the most expensive to violate: a wrongly chosen pattern creates debt that can take weeks to unwind.
Example rules that enforce architecture:
```markdown
## State management

Use Zustand for all global state. Do NOT use Redux, Context API for shared state,
or React Query's cache as a state store.

Store files live in `src/stores/`. Name them `use[Feature]Store.ts`.
Each store should be a single `create()` call with typed state and actions.

## Data fetching

Use React Query for all server state. Do not use `useEffect` + `useState` for data fetching.
Queries belong in `src/hooks/queries/`. Mutations belong in `src/hooks/mutations/`.
```
### 2. Naming conventions
Teams develop naming conventions for good reasons: clarity, searchability, consistency. But AI assistants don't know your conventions unless you tell them.
```markdown
## Naming conventions

- Components: PascalCase (`UserProfileCard`, not `userProfileCard` or `user-profile-card`)
- Event handlers: prefix with `handle` (`handleSubmit`, `handleUserClick`)
- Boolean variables: prefix with `is`, `has`, `can`, `should` (`isLoading`, `hasPermission`)
- Custom hooks: prefix with `use` (`useCurrentUser`, `useFeatureFlag`)
- Constants: SCREAMING_SNAKE_CASE only for true constants, not config objects
- Types and interfaces: PascalCase without `I` prefix (`User`, not `IUser`)
```
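Applied to actual code, those conventions look something like this. The names below are hypothetical, invented purely to illustrate the rules:

```typescript
// Hypothetical sketch applying the naming conventions above.
const MAX_RETRY_COUNT = 3; // SCREAMING_SNAKE_CASE: a true constant

// Boolean helper: `has` prefix
const hasRetriesLeft = (attempt: number): boolean => attempt < MAX_RETRY_COUNT;

// PascalCase type name, no `I` prefix
interface User {
  id: string;
  isActive: boolean; // boolean field: `is` prefix
}

// Event handler: `handle` prefix
function handleUserClick(user: User): string {
  return user.isActive ? `selected ${user.id}` : "user is inactive";
}
```

Because each rule is mechanical ("prefix with `handle`"), the AI can apply it without judgment calls, and reviewers can spot violations at a glance.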
### 3. Design decisions and off-limits choices
Every codebase has things that were tried and rejected. Document them explicitly so the AI doesn't resurrect them.
```markdown
## Do NOT use

- `moment.js` — use `date-fns` instead (tree-shakeable, modern)
- CSS Modules — we use Tailwind CSS v4 throughout
- `axios` — use the native `fetch` API with our `apiFetch` wrapper in `src/lib/api.ts`
- Class components — all components are functional with hooks
- `any` type in TypeScript — use `unknown` and narrow, or define the correct type
- Direct DOM manipulation — always work through React's virtual DOM
```
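For context, here is a minimal sketch of what an `apiFetch`-style wrapper over native `fetch` might look like. The real wrapper in `src/lib/api.ts` is the team's own and may differ; everything below is an assumption for illustration:

```typescript
// Hypothetical sketch of an apiFetch-style wrapper around native fetch.
// Centralizes the base path, JSON headers, and error shape so callers
// never touch axios or raw fetch directly.
async function apiFetch<T>(path: string, init?: RequestInit): Promise<T> {
  const res = await fetch(`/api${path}`, {
    ...init,
    headers: { "Content-Type": "application/json", ...(init?.headers ?? {}) },
  });
  if (!res.ok) {
    // One consistent error format instead of per-caller handling
    throw new Error(`API error ${res.status} for ${path}`);
  }
  return (await res.json()) as T;
}
```

A rule that names the wrapper and its location gives the AI a concrete replacement, which is far more effective than merely banning `axios`.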
When an AI assistant has this context, it stops suggesting the patterns you've deliberately moved away from.
## Setting up team-wide rules
The mechanics differ by tool, but the workflow is the same: write rules once, share them with everyone.
### For Cursor teams
Create a `.cursor/rules/` folder at the project root and split rules by domain:
```text
.cursor/rules/
  general.mdc       # Stack overview, coding style
  architecture.mdc  # Layer boundaries, patterns to follow
  naming.mdc        # Naming conventions for everything
  api.mdc           # API route structure and error handling
  testing.mdc       # Test patterns, what to test, mocking conventions
  off-limits.mdc    # Things we don't do and why
```
Commit these to git. Every developer who opens the project gets the same context automatically.
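Each file in that layout is markdown with a short frontmatter block; Cursor's `.mdc` format supports fields such as `description`, `globs`, and `alwaysApply` that control when the rule is loaded. A sketch of what `naming.mdc` might contain (the values are illustrative, not prescriptive):

```markdown
---
description: Naming conventions for components, hooks, and handlers
globs: ["src/**/*.{ts,tsx}"]
alwaysApply: false
---

- Components: PascalCase (`UserProfileCard`)
- Event handlers: prefix with `handle` (`handleSubmit`)
- Custom hooks: prefix with `use` (`useFeatureFlag`)
```

Scoping rules with `globs` keeps frontend conventions from leaking into, say, infrastructure scripts.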
### For Claude Code teams
Claude Code reads from `CLAUDE.md` at the project root and can load additional context from `.claude/skills/`. A well-structured `CLAUDE.md` covers the same ground as Cursor's rules folder.

See our guide to writing effective `CLAUDE.md` files for patterns that work well across both tools.
## The multi-tool problem
The real challenge for most teams isn't writing the rules. It's keeping them synchronized across tools and repositories. If you maintain separate rule files for Cursor, Claude Code, and Windsurf, you'll inevitably end up with drift. One tool gets an important update while the others lag behind.
This is the core problem localskills.sh was built to solve: publish your standards once, install them everywhere.
## Versioning your standards
Coding standards aren't static. Your team makes new decisions, adopts new patterns, discovers better approaches. Your rules need to evolve with your codebase, and that evolution needs to be managed.
Treating standards as versioned packages gives you three important capabilities:
**Rollback.** If a standards update causes problems (the AI starts generating code that breaks something), you can pin to the previous version while you investigate.

**Staged rollouts.** New standards can be introduced on a branch or in a single repository before being pushed team-wide. Evaluate the impact before committing.

**Changelog.** When someone asks "why does the AI generate code this way?", you can point to the version where that rule was introduced and the rationale behind it.
This is exactly how software dependency management works, and coding standards deserve the same discipline.
## Measuring compliance
How do you know if your AI rules are actually working? A few signals to track:
**Code review friction.** If reviewers are frequently commenting on the same patterns ("we don't use Redux," "this should use our logging library"), your rules may not be specific enough or the AI isn't applying them consistently.
**Rule specificity audit.** Review your rules quarterly. Vague rules ("write clean code") have no effect. Specific rules with examples ("use our `logger.info()` function from `src/lib/logger.ts`, not `console.log`") do.
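As a hypothetical before/after, here is the same intent written vaguely and then specifically (the `logger` path is an illustrative placeholder):

```markdown
<!-- Vague — has little effect: -->
Use good logging practices.

<!-- Specific — actionable by the AI: -->
Use `logger.info()` / `logger.error()` from `src/lib/logger.ts`.
Never use `console.log` in application code.
Every API route logs one `logger.info()` with the route name before returning.
```

The specific version names the function, the file, and the banned alternative, so there is nothing left for the AI to guess.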
**Onboarding time.** New engineers who use AI assistants with your rules configured should ramp up faster. They should generate code that passes review on the first try more often. Track this informally by asking new engineers what slowed them down.
**Drift detection.** If you maintain rules across multiple repositories, check periodically that they're in sync. A rule that was updated in one repo but not others is a gap in your coverage.
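One lightweight way to spot drift — a sketch, not a prescribed tool — is to fingerprint each repository's rule content and compare the hashes:

```typescript
import { createHash } from "node:crypto";

// Hypothetical drift check: hash normalized rule content so rule files
// from two repos can be compared with a single string equality test.
function rulesFingerprint(content: string): string {
  const normalized = content.replace(/\r\n/g, "\n").trim();
  return createHash("sha256").update(normalized).digest("hex").slice(0, 12);
}
```

If `rulesFingerprint(repoA)` differs from `rulesFingerprint(repoB)`, the rules have drifted and someone needs to reconcile them. Normalizing line endings first avoids false positives from checkout settings.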
## A practical setup for engineering teams
Here's a workflow that scales from a 5-person startup to a 200-person engineering org:
**Step 1: Audit your code review comments.** Collect the last 30 days of review feedback. Group the comments by category. The most common patterns are what your rules should encode first.
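The audit can be as simple as a tally. A hypothetical sketch with made-up categories and comments:

```typescript
// Hypothetical sketch: rank review-comment categories by frequency
// to decide which rules to write first. The data is illustrative.
const comments = [
  { category: "state-management", text: "we don't use Redux" },
  { category: "logging", text: "use the logging library, not console.log" },
  { category: "state-management", text: "global state goes in Zustand" },
];

// Count occurrences per category
const counts = new Map<string, number>();
for (const c of comments) {
  counts.set(c.category, (counts.get(c.category) ?? 0) + 1);
}

// Most frequent category first — that's the rule to encode first
const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
```

In this toy data, state management tops the list, so that rule gets written first.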
**Step 2: Write rules, not aspirations.** "Write testable code" is an aspiration. "Every function that makes a network call must have a corresponding test that mocks the HTTP client using our `createMockFetch` helper" is a rule.
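What that rule demands in practice, sketched with an inlined minimal stand-in for the `createMockFetch` helper (the team's real helper is assumed, not shown here):

```typescript
// Minimal hypothetical stand-in for a createMockFetch-style helper:
// returns a fetch-compatible function that always resolves to the given body.
function createMockFetch(body: unknown, status = 200): typeof fetch {
  return (async () =>
    new Response(JSON.stringify(body), {
      status,
      headers: { "Content-Type": "application/json" },
    })) as typeof fetch;
}

// A function that makes a network call accepts fetch as a parameter,
// so tests can inject the mock instead of hitting the network.
async function fetchUserName(
  fetchImpl: typeof fetch,
  id: string,
): Promise<string> {
  const res = await fetchImpl(`/api/users/${id}`);
  const user = (await res.json()) as { name: string };
  return user.name;
}
```

The rule is enforceable precisely because it names the helper and the pattern; "write testable code" gives the AI nothing concrete to follow.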
**Step 3: Publish to a shared registry.** Use localskills.sh to publish your standards as a versioned skill. This gives every repository a single install command to pull in team standards.
**Step 4: Install on onboarding.** Add rule installation to your onboarding checklist. New engineers run `localskills install your-team/standards` on day one.
**Step 5: Maintain as a living document.** Assign ownership. When the team makes a new architectural decision, someone updates the skill and bumps the version.
For more on this workflow, see building team-wide AI coding standards and what agent skills are and how they work.
## From per-project rules to organization-wide standards
As your team grows, you may want to layer your rules:
- **Organization-wide:** Security patterns, observability requirements, approved libraries
- **Platform-specific:** Frontend conventions vs. backend conventions vs. data pipeline conventions
- **Project-specific:** This repository's unique architecture and constraints
With localskills.sh, you can install multiple skills and they compose together. A repository might install `your-org/base-standards`, `your-org/frontend-standards`, and `your-team/checkout-service-rules`, with each layer adding specificity without duplicating the base.
```shell
localskills install your-org/base-standards --target cursor claude
localskills install your-org/frontend-standards --target cursor claude
localskills install your-team/checkout-service-rules --target cursor claude
```
Each skill is versioned independently. You can update base standards without touching team-specific rules.
If you're starting from scratch, publishing your first skill walks through the process end to end.
Linters enforce syntax. AI coding rules enforce intent. The teams that win on code quality are the ones that treat their AI context as a first-class engineering artifact: versioned, maintained, and shared like any other critical dependency.
Create a free account on localskills.sh and publish your team's first coding standards skill today.
```shell
npm install -g @localskills/cli
localskills login
localskills publish
```