How to Reduce AI Hallucinations in Code
Practical strategies to prevent AI coding tools from generating incorrect APIs, phantom libraries, and broken logic. Tips on context, rules, and verification.
The hallucination problem in AI-generated code
AI coding tools make up function signatures that don't exist. They import libraries that were never installed. They call APIs with the wrong parameters, invent configuration options, and confidently produce code that looks correct but fails at runtime.
These are hallucinations, and they are the single biggest source of wasted time when working with AI code generation tools. Unlike a syntax error that your editor catches immediately, a hallucinated API call can compile cleanly, pass a superficial review, and only break when it hits production or when someone actually reads the code carefully.
The good news: hallucinations are not random. They follow predictable patterns, and the techniques to prevent them are straightforward. This guide covers why they happen and what you can do about it in tools like Cursor, Claude Code, Windsurf, and GitHub Copilot.
Why AI coding tools hallucinate
Understanding the root causes helps you target your prevention efforts.
The model is filling gaps with plausible patterns
Large language models generate code by predicting the most likely next token based on patterns they learned during training. When the model doesn't have enough context about your specific codebase, framework version, or API, it fills the gap with something that looks statistically plausible.
This is why hallucinations often look correct at first glance. The model isn't producing random gibberish. It's producing code that would be correct in some project, just not yours.
Training data is stale
Models are trained on data with a cutoff date. If you're using a library that released a breaking change after that cutoff, the model will generate code for the old API. It has no way of knowing the API changed. It will use the old function signatures with full confidence.
This is especially common with fast-moving frameworks like Next.js, where the difference between version 14 and 15 includes breaking changes to caching defaults, async request APIs, and data fetching behavior.
Context windows have hard limits
Every AI coding tool has a finite context window. When that window fills up, older context gets dropped. If your project instructions, the current file, and the conversation history exceed the limit, the model starts losing information it needs to generate accurate code.
The result: early instructions get forgotten, reference files disappear from context, and the model falls back to generic patterns instead of your specific conventions.
Ambiguous prompts invite guessing
When your prompt says "add authentication," the model has to decide what kind of authentication, what library, what session strategy, what token format, and dozens of other details. Every decision it makes without your input is a potential hallucination.
The most common hallucinations in code generation
Knowing what to look for makes hallucinations easier to catch during review.
Phantom imports and non-existent packages
The model imports a package that sounds right but doesn't exist, or imports from a path that isn't in your project:
// The model invented this package
import { validateSchema } from '@utils/schema-validator';
// This function doesn't exist in your version of the library
import { useServerAction } from 'next/server';
This happens because the model has seen similar import patterns across thousands of projects. It pattern-matches a plausible import path rather than checking what actually exists in your node_modules or src directory.
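One mechanical defense is to cross-check import specifiers against the dependencies your project actually declares. The sketch below is illustrative, not part of any library: the function name and shape are invented here, and a real setup would parse specifiers out of source files rather than take them as a list.

```typescript
// Given import specifiers pulled from a file and the dependencies
// declared in package.json, report any package that was never installed.
// Scoped packages (@scope/name) and subpath imports (pkg/subpath) are
// reduced to their package name before the lookup; relative and node:
// builtin imports are skipped.
function findPhantomImports(
  specifiers: string[],
  declaredDeps: Record<string, string>,
): string[] {
  const packageName = (spec: string): string => {
    const parts = spec.split('/');
    return spec.startsWith('@') ? parts.slice(0, 2).join('/') : parts[0];
  };
  return specifiers
    .filter((spec) => !spec.startsWith('.') && !spec.startsWith('node:'))
    .map(packageName)
    .filter((name) => !(name in declaredDeps));
}

// '@utils/schema-validator' was hallucinated and is not declared.
const missing = findPhantomImports(
  ['react', '@utils/schema-validator', './local-helper', 'next/server'],
  { react: '^18.3.0', next: '^15.1.0' },
);
// missing is ['@utils/schema-validator']
```

In practice, TypeScript's module resolution and ESLint's import rules catch most of this automatically; the point is that phantom imports are mechanically detectable, not something you need to catch by eye.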
Wrong API signatures
The model calls a real function but with incorrect parameters, wrong return types, or deprecated options:
// Real function, wrong signature for your version
const result = await db.query.users.findFirst({
where: { email: input.email }, // Wrong: your ORM uses eq() syntax
});
// Deprecated option that was removed two versions ago
const response = await fetch(url, {
mode: 'navigate', // Not valid in this context
});
Invented configuration options
The model adds configuration keys that don't exist in the library you're using:
// These options don't exist in this version
export default defineConfig({
experimental: {
serverComponents: true,
streamingSSR: true,
optimizeCss: 'auto',
},
});
This is particularly dangerous because invalid config options are often silently ignored, so the code appears to work while doing nothing.
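To surface silently ignored options, validate config objects against an explicit allowlist of documented keys. The sketch below uses invented key names purely for illustration; schema libraries with a strict mode (for example, Zod's `.strict()`) do the same thing more thoroughly.

```typescript
// Reject configuration keys outside the documented set instead of
// silently ignoring them the way many build tools do.
function findUnknownKeys(
  config: Record<string, unknown>,
  knownKeys: string[],
): string[] {
  return Object.keys(config).filter((key) => !knownKeys.includes(key));
}

// 'streamingSSR' and 'optimizeCss' are not documented keys.
const unknown = findUnknownKeys(
  { outDir: 'dist', streamingSSR: true, optimizeCss: 'auto' },
  ['outDir', 'minify', 'sourcemap'],
);
// unknown is ['streamingSSR', 'optimizeCss']
```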
Fabricated utility functions
The model calls helper functions that don't exist in your codebase, assuming they must be there because the pattern is common:
// Your project doesn't have a formatCurrency utility
const price = formatCurrency(amount, 'USD');
// This helper doesn't exist in your auth library
const user = await getCurrentUser(request);
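When the model assumes a helper like this, the fix is either to remove the call or to actually create the utility so the assumption becomes true. As a minimal sketch, a formatCurrency built on the standard Intl.NumberFormat API (the function name mirrors the hypothetical example above; the locale is an assumption fixed here for deterministic output):

```typescript
// A real formatCurrency utility, so the call above stops being a phantom.
// Uses the built-in Intl.NumberFormat API with a fixed locale.
function formatCurrency(amount: number, currency: string): string {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency,
  }).format(amount);
}

// formatCurrency(1234.5, 'USD') → '$1,234.50'
```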
Strategy 1: Give the model your exact stack and versions
The single most effective hallucination reducer is telling the model exactly what you're working with. In your project instructions file (.cursorrules, CLAUDE.md, or equivalent), specify:
## Tech stack
- Next.js 15.1 (App Router only, no Pages Router)
- TypeScript 5.7 strict mode
- Drizzle ORM 0.34 with PostgreSQL
- Tailwind CSS v4
- Auth: NextAuth v5 beta (not v4)
When the model knows you're on Next.js 15 with the App Router, it stops generating Pages Router patterns. When it knows you're on Drizzle ORM 0.34, it uses the correct query syntax instead of guessing.
Version numbers matter. "Next.js" alone still leaves room for the model to generate code for any version from 12 to 15. "Next.js 15.1, App Router only" narrows the output to the patterns that actually work in your project.
Read our full guide on AI code generation best practices for more on structuring project context.
Strategy 2: Use rules files as guardrails
Beyond listing your stack, rules files can explicitly prevent specific hallucinations your team encounters repeatedly. This is where AI coding rules best practices become a hallucination prevention tool.
Document the patterns the AI should follow and the mistakes it should avoid:
## Patterns
- Database queries: always use Drizzle `eq()`, `and()`, `or()` helpers. Never use raw object syntax.
- API routes: always return `NextResponse.json()`, never `Response.json()`.
- Auth: use `auth()` from `@/lib/auth`, never import from `next-auth` directly.
## Do NOT
- Do not import from `@/utils`. That directory does not exist.
- Do not use `getServerSession()`. We use the `auth()` helper instead.
- Do not use CSS modules. We use Tailwind exclusively.
Every "do not" rule here addresses a specific hallucination that has happened before. Treat your rules file as a living record of the model's past mistakes.
The explicit negatives are critical. Models respond well to being told what not to do. When you write "do not use getServerSession()," the model reliably avoids it. Without that rule, it has no way of knowing your project uses a different pattern.
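Rules files depend on the model actually following them, so a cheap mechanical backstop is a lint-style scan for banned identifiers. The sketch below is a crude substring check whose banned list mirrors the example rules above; in a real project you would encode these as ESLint rules such as no-restricted-imports instead.

```typescript
// Flag occurrences of patterns the rules file forbids. A crude substring
// scan; a real project would express these as ESLint rules.
const bannedPatterns = ['getServerSession(', "from '@/utils'"];

function findRuleViolations(source: string): string[] {
  return bannedPatterns.filter((pattern) => source.includes(pattern));
}

const generated = `
import { helper } from '@/utils';
const session = await getServerSession(authOptions);
`;
// Both banned patterns appear in the generated snippet.
const violations = findRuleViolations(generated);
```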
Strategy 3: Reference actual files, not descriptions
When you describe patterns in prose ("our API routes use try-catch with Zod validation"), the model interprets your description and generates what it thinks you mean. When you point it to an actual file, it copies the real pattern.
In your project instructions:
## Reference implementations
- API routes: follow the exact pattern in `src/app/api/users/route.ts`
- React components: follow `src/components/Button.tsx` for structure and exports
- Database queries: follow `src/db/queries/users.ts` for query builder usage
In your prompts, reference specific files rather than describing them:
Bad: "Create an API route similar to our other routes"
Better: "Create a new API route following the exact pattern in src/app/api/users/route.ts"
This anchors the model to real code that actually works in your project, rather than a plausible interpretation of your verbal description.
Strategy 4: Break work into small, verifiable chunks
Hallucination frequency increases with output length. A 10-line function is less likely to contain a hallucinated API call than a 200-line module. This is partly because longer outputs push earlier context out of the window, and partly because the model has more opportunities to drift.
Instead of "build the entire user management system," try:
- "Create the database schema for users in `src/db/schema/users.ts`"
- "Write the create-user query in `src/db/queries/users.ts`, following the pattern in the existing queries"
- "Create the POST handler in `src/app/api/users/route.ts` using the schema and query from steps 1-2"
Each step is small enough to review carefully. You catch hallucinations at step 1 before they cascade into steps 2 and 3.
This approach also pairs well with writing effective prompts for AI coding. Focused prompts produce focused output that's easier to verify.
Strategy 5: Verify with type checking and tests
Hallucinated code often fails type checking. If the model invents a function that doesn't exist or uses wrong parameter types, TypeScript will catch it immediately.
Make type checking part of your verification workflow:
# After accepting AI-generated code
npx tsc --noEmit
Similarly, running existing tests after accepting generated code catches hallucinations that break existing functionality:
# Run tests related to the files you changed
npx vitest run src/lib/auth.test.ts
For critical code paths, ask the model to generate tests alongside the implementation. If the model hallucinated an API, the test will often hallucinate the same wrong API, but the test will fail when you run it because the function doesn't actually exist.
One Claude Code tip for this: add a rule like "always run tsc --noEmit after making changes" to your CLAUDE.md so type checking happens automatically.
Strategy 6: Use agent skills for shared guardrails
When multiple developers on a team encounter the same hallucinations, the fix shouldn't live in one person's local rules file. It should be shared.
Agent skills let you publish hallucination prevention rules once and distribute them to every developer on your team:
localskills install your-team/project-guardrails --target cursor claude windsurf
The skill installs in the right format for each tool. When someone discovers a new hallucination pattern, they add it to the skill and publish an update. The entire team gets the fix.
This is especially valuable for:
- Framework-specific traps: patterns the model consistently gets wrong for your specific framework version
- Internal API documentation: your custom utilities and their actual signatures
- Deprecated pattern blocklists: things the model suggests that your team has moved away from
Strategy 7: Audit hallucination-prone areas more carefully
Some types of code are more hallucination-prone than others. Focus your review attention on:
Third-party API integrations. The model frequently invents API endpoints, request formats, and response shapes. Always verify against the actual API documentation.
Configuration files. Build tool configs, CI/CD pipelines, and infrastructure-as-code templates are full of hallucinated options that look plausible but do nothing or cause subtle bugs.
Security-critical code. Authentication, authorization, encryption, and input validation are areas where a hallucinated function call can create real vulnerabilities. Never accept AI-generated security code without careful review.
Database migrations. A hallucinated column type or constraint can corrupt data. Always review migrations line by line.
Package.json scripts and dependencies. The model will add dependencies that don't exist or use the wrong package name (e.g., confusing @next/font with next/font).
Building a hallucination-resistant workflow
The techniques above work best when combined into a consistent workflow:
- Start with strong project context. Your rules file should specify your stack, versions, reference files, and explicit "do not" rules for known hallucination patterns.
- Prompt with precision. Reference specific files, provide input/output examples, and state constraints explicitly. Every ambiguity is an invitation for the model to guess.
- Work in small increments. Generate code in focused chunks. Review each chunk before moving to the next.
- Verify mechanically. Run type checking, linters, and tests after every generation. Let the toolchain catch what your eyes miss.
- Record and share. When you find a new hallucination pattern, add it to your rules file. If you're on a team, publish it as a skill so everyone benefits.
- Review with skepticism. Treat every import, every function call, and every config option as potentially fabricated until you've confirmed it exists. The code that "looks right" is exactly the code that slips through review.
Hallucinations are not going away. Models will continue to fill gaps with plausible patterns when they lack specific context. But the developers who invest in providing that context, through rules files, precise prompts, and shared skills, encounter hallucinations far less often and catch them far faster when they do appear.
Ready to share hallucination prevention rules across your team? Sign up at localskills.sh and publish your project guardrails as reusable skills.
npm install -g @localskills/cli
localskills login
localskills publish