How to Write Better Prompts for AI Code Generation
Practical prompt engineering techniques for developers who want consistently better output from AI coding tools. Before-and-after examples included.
The gap between developers who get great output from AI coding tools and developers who spend half their day correcting generated code comes down to one thing: how they write prompts.
Not "prompt engineering" in the abstract, hand-wavy sense. Concrete, mechanical differences in how requests are structured. Differences you can learn in an afternoon and benefit from permanently.
This guide covers the specific techniques that produce better AI-generated code, with before-and-after examples so you can see exactly what changes and why it matters.
The problem with vague prompts
Most developers prompt AI the way they'd describe a task to a senior colleague who already knows the codebase. Something like:
"Fix the login bug"
or:
"Add pagination to the API"
A senior colleague would know which login bug, where the API lives, what pagination style you use, and a dozen other implicit details. The AI knows none of that. It fills the gaps with assumptions -- and those assumptions are wrong often enough to cost you time.
The fix is specificity. Not verbosity. Specificity. You don't need to write a paragraph. You need to include the right details.
Before and after: five prompt rewrites
These examples show what specificity looks like in practice. Each pair starts with a prompt most developers would write naturally, followed by a rewrite that consistently produces better output.
1. Bug fix
Before:
Fix the authentication error.
After:
In `src/app/api/auth/callback/route.ts`, the Google OAuth callback throws "invalid_grant" when the user's token has expired. The `exchangeCode()` function on line 42 doesn't handle expired authorization codes. Add a try/catch that returns a redirect to `/login?error=expired` instead of a 500.
Why it works: The rewrite names the exact file, the exact function, the exact error, and the exact desired behavior. The AI doesn't need to search the codebase, guess which auth system you use, or decide on an error handling strategy. It has everything it needs to write the fix on the first try.
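For reference, the fix that prompt describes might look something like the sketch below. The `exchangeCode` stub is hypothetical -- in a real project it would be the existing import -- and the plain `Response`-based shape is an assumption standing in for a Next.js route handler:

```typescript
// Stub simulating the project's real exchangeCode(), which throws
// "invalid_grant" when the authorization code has expired.
async function exchangeCode(code: string): Promise<{ accessToken: string }> {
  if (code === "expired") throw new Error("invalid_grant");
  return { accessToken: "token-for-" + code };
}

// Wrap the exchange so an expired code redirects to the login page
// instead of surfacing as a 500.
async function handleCallback(code: string): Promise<Response> {
  try {
    const tokens = await exchangeCode(code);
    return Response.json(tokens);
  } catch {
    return Response.redirect("https://example.com/login?error=expired", 302);
  }
}
```

Because the prompt pinned down the error, the function, and the desired redirect, there is essentially one correct shape for this change.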
2. New feature
Before:
Add search to the dashboard.
After:
Add a search bar to the top of `src/app/dashboard/page.tsx` that filters the skills list by name and description. Use the existing `Input` component from `src/components/ui/input.tsx`. Filter client-side for now -- the dataset is small enough. Debounce the input by 300ms using a `useEffect` with a timeout. Keep the search term in a URL search param so it survives page refreshes.
Why it works: The rewrite specifies where the UI goes, which components to use, the filtering strategy (client-side, not API-based), the debounce behavior, and the persistence mechanism. Without these details, the AI would make its own choices for each -- and you'd spend time correcting at least two of them.
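The debounce behavior that prompt specifies can be sketched framework-free. This is an illustrative helper, not the actual `useEffect` wiring the prompt asks for:

```typescript
// Illustrative debounce: only the last call within the window fires.
// In the dashboard page, this logic would sit inside a useEffect.
function debounce<Args extends unknown[]>(fn: (...args: Args) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer);                       // cancel the pending call
    timer = setTimeout(() => fn(...args), ms); // schedule the newest one
  };
}

// Usage sketch: rapid keystrokes collapse into a single search call.
const runSearch = debounce((term: string) => console.log("searching:", term), 300);
runSearch("r");
runSearch("re");
runSearch("rea"); // only "rea" triggers a search
```

Specifying the debounce window in the prompt matters: without it, the AI might pick 500ms, skip debouncing entirely, or reach for a utility library you don't use.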
3. Refactoring
Before:
Clean up the user service.
After:
Refactor `src/services/user-service.ts` to split into three files: `src/services/user/queries.ts` for database reads, `src/services/user/mutations.ts` for writes, and `src/services/user/types.ts` for shared TypeScript interfaces. Keep the same function signatures. Update all imports across the codebase to point to the new paths.
Why it works: "Clean up" is subjective. The AI might add comments, rename variables, extract utilities, or restructure the file. The rewrite describes a specific structural change with a clear output: three files with defined purposes and preserved function signatures.
4. Test writing
Before:
Write tests for the billing module.
After:
Write Vitest tests for `src/lib/billing/calculate-usage.ts`. Test these cases: (1) a user with zero usage returns $0, (2) usage under the free tier limit of 1000 returns $0, (3) usage of 1500 charges $0.01 per unit over 1000, so $5.00, (4) usage with a 20% team discount applies after the overage calculation. Mock the `getTeamDiscount()` function from `src/lib/billing/discounts.ts`.
Why it works: The rewrite specifies the test framework, the exact file, specific test cases with expected values, the business logic rules that determine correctness, and which dependencies to mock. The AI can write all four tests without asking a single clarifying question.
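For context, the implementation those tests pin down would look roughly like this. The function body is inferred from the test cases in the prompt, not taken from a real codebase:

```typescript
// Hypothetical calculate-usage logic implied by the prompted test cases:
// the first 1000 units are free, overage costs $0.01 per unit, and any
// team discount is applied after the overage charge is computed.
const FREE_TIER_UNITS = 1000;
const PRICE_PER_UNIT = 0.01;

function calculateUsage(units: number, teamDiscount = 0): number {
  const overage = Math.max(0, units - FREE_TIER_UNITS); // billable units
  const charge = overage * PRICE_PER_UNIT;
  return charge * (1 - teamDiscount); // e.g. 0.2 for a 20% team discount
}
```

Here `calculateUsage(1500)` yields $5.00 and `calculateUsage(1500, 0.2)` yields $4.00, matching cases (3) and (4). Because the prompt stated the business rules, the tests and the implementation can't silently disagree about them.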
5. API endpoint
Before:
Create an endpoint for user settings.
After:
Create a PATCH endpoint at `src/app/api/users/[id]/settings/route.ts`. Accept a JSON body with optional fields: `{ displayName?: string, emailNotifications?: boolean, timezone?: string }`. Validate with Zod. Return the updated settings object. Only allow users to update their own settings -- compare `params.id` with the session user ID and return 403 if they don't match.
Why it works: The rewrite covers the HTTP method, route path, request shape, validation approach, response shape, and authorization rule. Each of these would otherwise be a coin flip by the AI.
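As a rough illustration of the request-shape half of that prompt, here is a hand-rolled validator standing in for the Zod schema (Zod itself is omitted to keep the sketch dependency-free):

```typescript
type SettingsPatch = {
  displayName?: string;
  emailNotifications?: boolean;
  timezone?: string;
};

// Hand-rolled stand-in for the prompted Zod schema: every field is
// optional, types are enforced, and unknown keys are rejected.
function parseSettingsPatch(body: unknown): SettingsPatch | null {
  if (typeof body !== "object" || body === null) return null;
  const out: SettingsPatch = {};
  for (const [key, value] of Object.entries(body)) {
    if (key === "displayName" && typeof value === "string") out.displayName = value;
    else if (key === "emailNotifications" && typeof value === "boolean") out.emailNotifications = value;
    else if (key === "timezone" && typeof value === "string") out.timezone = value;
    else return null; // unknown key or wrong type: reject the whole body
  }
  return out;
}
```

A real implementation would express roughly the same rules as a Zod object schema with all-optional fields; the point is that the prompt made the accepted shape explicit enough to write either version without guessing.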
The anatomy of a good prompt
Looking across those examples, every good prompt answers the same set of questions:
- Where? Which file(s) to modify or create
- What? The specific change or feature
- How? Which patterns, libraries, or components to use
- Constraints? What not to do, what to preserve
- Expected result? What "done" looks like
You don't need all five for every prompt. A simple rename only needs "where" and "what." But for anything beyond trivial changes, covering three or four of these eliminates the most common failure modes.
Give the AI reference code, not just instructions
Prose descriptions of patterns are helpful. Actual code examples are better.
When you want the AI to follow a specific pattern, point it to an existing implementation:
Create a new API route at `src/app/api/teams/route.ts`. Follow the same pattern as `src/app/api/users/route.ts` -- same error handling, same response shape, same Zod validation approach.
The AI reads the reference file first, then replicates its structure. This works better than describing the pattern in words because code captures nuances that prose misses: import order, error message formatting, which edge cases are handled, how responses are structured.
This approach pairs well with .cursorrules examples that codify your patterns as reusable instructions. Once a pattern is documented in your rules file, every prompt benefits from it automatically.
Use constraints to prevent over-engineering
AI tools tend to generate more code than necessary. They add error handling for impossible cases, create abstractions for one-time operations, and "improve" things you didn't ask them to touch.
Prevent this with explicit constraints:
Refactor the notification service. Do NOT:
- Change any function signatures
- Add new dependencies
- Modify the test files
- Refactor anything outside `src/services/notifications/`
The "do not" list is as important as the "do" instruction. Without it, you'll get a diff three times larger than necessary, with changes scattered across files you weren't working on.
This principle applies at the project level too. Your project instructions file (CLAUDE.md, .cursorrules, etc.) should include a permanent "do not" section. See our AI pair programming guide for more on structuring these persistent instructions.
Break complex tasks into steps
The quality of AI output degrades as request complexity increases. A prompt that asks for five things at once is worse than five sequential prompts that each ask for one thing.
Compare:
Single complex prompt:
Build a user settings page with a form for display name, email preferences, timezone selector, password change, and two-factor auth setup. Include validation, error states, success toasts, and loading indicators for each section.
Sequential prompts:
Prompt 1: Create the settings page layout at `src/app/settings/page.tsx` with three sections: Profile, Notifications, Security, and a sidebar navigation between them. Use the existing `Card` component for each section. No form logic yet -- just the layout.

Prompt 2: Add the Profile section form. Fields: displayName (text, required, max 50 chars) and timezone (select dropdown, use the list from `src/lib/timezones.ts`). Use react-hook-form with Zod validation. Show inline errors.

Prompt 3: ...
Each sequential prompt produces a focused, reviewable change. You can catch problems early instead of unpacking a monolithic output.
Layer your context: project instructions + prompt
The best prompts work with your project instructions, not instead of them. If your project-level instructions already specify "use Zod for validation" and "no default exports," your prompt doesn't need to repeat those constraints. It just needs the task-specific details.
Think of it as two layers:
- Project instructions handle what's always true: your stack, conventions, patterns, prohibitions
- Your prompt handles what's true right now: the specific feature, bug, or refactor
This layering keeps prompts short. Instead of a ten-line prompt that includes three lines of project conventions, you write a seven-line prompt that's pure task context. The conventions are already loaded.
If you don't have project instructions set up yet, start there. A good instructions file does more for output quality than any individual prompting technique. Our guide to Claude Code tips covers how to structure these files effectively.
Share what works across your team
Once you've dialed in your prompting patterns, the next step is making sure your whole team benefits. When five developers use five different prompting styles with the same AI tool, output quality varies wildly across the team.
Two approaches work:
1. Codify patterns in project instructions. Add your prompting discoveries to your rules file. If you learned that "always specify the HTTP method and response shape" produces better API routes, add that as a convention: "When generating API routes, always include the HTTP method, Zod schema for the body, and the response type."
2. Publish shared skills. For conventions that apply across multiple projects, publish them as skills on localskills.sh. Your team installs them once and gets consistent AI behavior everywhere:
localskills install your-team/prompting-conventions --target cursor claude windsurf
When someone on the team discovers a new technique, update the skill and everyone benefits on their next pull.
Quick reference: prompt checklist
Before sending your next prompt, scan this list:
- Named the specific file(s)? Not "the auth module" but `src/lib/auth.ts`
- Specified the approach? Which libraries, components, or patterns to use
- Included constraints? What not to change, what to preserve
- Provided examples? For non-trivial transformations, at least one input/output pair
- Scoped the request? One focused task, not five things bundled together
- Checked project instructions? Don't repeat what's already in your rules file
The first three items catch 80% of prompt quality issues. If you only improve one habit, make it this: name the file before describing the change.
It compounds over time
Better prompts aren't just about getting one good answer. They're about building a feedback loop. Every time you notice what made a prompt succeed or fail, you're calibrating your instincts for what the AI needs to hear.
Over time, this calibration becomes automatic. You stop writing vague prompts because you've internalized what details the AI needs. You start scoping requests naturally because you've experienced the quality difference between focused and unfocused prompts.
And when you capture those instincts as project instructions and shared skills, your whole team benefits from what you've learned -- not just in one project, but across everything you work on.
Ready to turn your prompting expertise into reusable, shared conventions? Sign up at localskills.sh to publish your first skill and level up your entire team's AI workflow.
npm install -g @localskills/cli
localskills login
localskills publish