
Security Best Practices for AI-Generated Code

AI coding assistants introduce real security risks. Learn how to prevent SQL injection, XSS, secrets leaking, and other vulnerabilities in AI-generated code.

AI assistants write insecure code by default

AI coding tools are optimized for helpfulness. They generate code that works, compiles, and looks reasonable. What they don't optimize for is security. Without explicit guidance, AI assistants will:

  • Concatenate user input directly into SQL queries
  • Render unsanitized HTML from user-submitted data
  • Hardcode API keys and secrets inline
  • Pull in outdated dependencies with known CVEs
  • Skip input validation entirely to keep examples short

These aren't hypothetical risks. They're the default output of every major AI coding tool when given a generic prompt like "build a login form" or "add a search endpoint." The code works, but it's vulnerable.

The fix isn't to stop using AI assistants. It's to give them explicit security constraints that match how your team handles these concerns. This guide covers the most common vulnerability categories, how AI tools introduce them, and the rules you can write to prevent them.

OWASP Top 10 through the lens of AI code generation

The OWASP Top 10 lists the most critical web application security risks. AI coding tools are particularly prone to introducing several of them.

A01: Broken access control

AI assistants frequently generate API routes that skip authorization checks. Ask for a "delete user endpoint" and you'll get a handler that deletes the user, with no check that the caller has permission.

Write rules that force authorization into every route:

## Access control

Every API route that modifies data MUST verify authorization before proceeding.

Pattern:
1. Authenticate the request (verify session/token)
2. Check the caller's role and permissions against the resource
3. Return 403 if unauthorized
4. Only then execute the operation

Never assume a valid session implies permission. A logged-in user cannot
delete another user's data just because they have a session token.
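The four steps above can be sketched as a small handler. This is an illustration only, with hypothetical `Session` and `handleDeleteUser` shapes; the point is that authorization is a separate check from authentication, and it happens before the operation:

```typescript
// Hypothetical session shape for illustration.
type Session = { userId: string; role: "admin" | "member" };

function canDeleteUser(session: Session, targetUserId: string): boolean {
  // A session alone is not permission: only admins, or the user
  // themselves, may delete an account.
  return session.role === "admin" || session.userId === targetUserId;
}

// Returns the HTTP status the route would respond with.
function handleDeleteUser(session: Session | null, targetUserId: string): number {
  if (!session) return 401;                    // 1. authenticate the request
  if (!canDeleteUser(session, targetUserId)) { // 2-3. check permissions, 403 if not
    return 403;
  }
  // 4. only now execute the operation (database call omitted in this sketch)
  return 204;
}
```

Note that the permission check takes both the session and the target resource: "is this caller allowed to touch this user," not just "is this caller logged in."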

A02: Cryptographic failures

AI tools default to weak or outdated cryptographic choices. They'll suggest MD5 for hashing, skip HTTPS enforcement, or store passwords in plaintext "for now."

## Cryptography

- Hash passwords with bcrypt or argon2. Never MD5 or SHA-256 for passwords.
- Use crypto.randomUUID() or crypto.getRandomValues() for tokens. Never Math.random().
- All external communication must use HTTPS. No HTTP fallbacks.
- Encryption keys come from environment variables, never from source code.
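A sketch of these rules using only Node's built-in node:crypto. The rules above call for bcrypt or argon2, which are third-party packages; the built-in scrypt is used here as a stand-in so the example stays dependency-free:

```typescript
import { randomUUID, randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Token generation: a CSPRNG, never Math.random().
const resetToken = randomUUID();

// Password hashing with a per-user random salt.
// (scrypt stands in for bcrypt/argon2 in this dependency-free sketch.)
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  // Constant-time comparison avoids timing side channels.
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

The salt makes identical passwords hash differently, and the constant-time comparison means an attacker can't learn anything from how quickly verification fails.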

A03: Injection

This is the big one. AI-generated code is especially vulnerable to injection attacks because the tools prioritize readability and simplicity over parameterized queries. More on this in the dedicated section below.

A07: Authentication failures

AI assistants will generate authentication flows that skip critical steps: no rate limiting on login attempts, session tokens that never expire, password reset links without expiration or single-use enforcement.

## Authentication

- Rate-limit login attempts: max 5 per minute per IP, max 20 per hour per account
- Session tokens expire after 24 hours of inactivity
- Password reset tokens are single-use and expire after 1 hour
- Never include session tokens in URLs or query parameters
- Always invalidate all sessions when a user changes their password
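The rate-limit rule can be sketched as a sliding-window counter. This in-memory version is an illustration only; a real deployment behind multiple servers would back it with Redis or similar shared storage:

```typescript
// Hypothetical in-memory rate limiter:
// allow at most `limit` events per `windowMs` per key.
const attempts = new Map<string, number[]>();

function isRateLimited(key: string, limit: number, windowMs: number, now = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (attempts.get(key) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    attempts.set(key, recent);
    return true; // over the limit: reject with 429 before touching credentials
  }
  recent.push(now);
  attempts.set(key, recent);
  return false;
}

// Usage, matching the rule above: max 5 login attempts per minute per IP.
// if (isRateLimited(`login:${ip}`, 5, 60_000)) return 429;
```

Keying the limiter separately per IP and per account (two `isRateLimited` calls) covers both limits in the rule without extra machinery.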

A09: Security logging and monitoring failures

AI-generated code almost never includes security logging. It won't log failed authentication attempts, access control violations, or input validation failures unless you tell it to.

## Security logging

Log these events with severity level and request context:
- Failed authentication attempts (WARN)
- Access control failures (WARN)
- Input validation failures (INFO)
- Rate limit hits (WARN)
- Any catch block in API routes (ERROR)

Never log passwords, tokens, credit card numbers, or PII in plain text.
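Both halves of this rule — structured security events, no secrets in the output — can be combined in one small helper. A sketch, with a hypothetical key-based redaction list:

```typescript
type Severity = "INFO" | "WARN" | "ERROR";

// Hypothetical denylist of sensitive field-name fragments.
const SENSITIVE_KEYS = ["password", "token", "secret", "authorization", "card"];

// Redact sensitive fields before anything reaches the log sink.
function redact(context: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(context)) {
    safe[key] = SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s))
      ? "[REDACTED]"
      : value;
  }
  return safe;
}

// Returns one structured JSON log line with severity and request context.
function securityLog(severity: Severity, event: string, context: Record<string, unknown>): string {
  return JSON.stringify({ severity, event, ...redact(context) });
}
```

With this in place, `securityLog("WARN", "auth.login_failed", { ip, password })` keeps the IP for investigation but the password never appears in the log line.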

SQL injection: the most common AI-generated vulnerability

SQL injection tops the list of security problems in AI-generated code. The reason is simple: string interpolation looks cleaner than parameterized queries, and AI tools optimize for clean-looking code.

Here's what AI tools commonly generate when asked to build a search endpoint:

// VULNERABLE - AI-generated code often looks like this
const results = await db.execute(
  `SELECT * FROM products WHERE name LIKE '%${query}%'`
);

A user who submits '; DROP TABLE products; -- as the query can drop your products table.

The fix is always parameterized queries. Your rules should make this non-negotiable:

## SQL and database queries

NEVER concatenate or interpolate user input into SQL strings. Always use
parameterized queries.

Bad (vulnerable to SQL injection):
  db.execute(`SELECT * FROM users WHERE id = '${userId}'`)

Good (parameterized):
  db.execute('SELECT * FROM users WHERE id = ?', [userId])

If using an ORM like Drizzle or Prisma, use the ORM's query builder
methods. Do not drop down to raw SQL unless absolutely necessary,
and if you do, always parameterize.

This is a rule where being specific with examples makes all the difference. AI tools follow patterns they can see. Show the wrong way and the right way side by side, and the assistant reliably picks the correct approach.

Cross-site scripting (XSS) prevention

XSS is the second most common vulnerability in AI-generated frontend code. AI tools will happily render user-supplied HTML without sanitization, especially when asked to build features like comments, user profiles, or rich text displays.

The dangerous patterns:

// VULNERABLE - renders raw HTML from user input
<div dangerouslySetInnerHTML={{ __html: userComment }} />

// VULNERABLE - injects user input into href without validation
<a href={userProvidedUrl}>Click here</a>

Write rules that address both rendering and URL handling:

## XSS prevention

- Never use dangerouslySetInnerHTML with user-provided content
- If rendering rich text is required, use DOMPurify to sanitize first
- Validate all user-provided URLs: they must start with https:// or /.
  Reject javascript:, data:, and vbscript: URLs
- Escape user input in HTML attributes using the framework's built-in
  escaping (React handles this by default for JSX expressions, but
  dangerouslySetInnerHTML bypasses it)
- Set Content-Security-Policy headers that block inline scripts
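The URL rule is the easiest of these to enforce mechanically. A sketch of a validator (DOMPurify itself is a third-party package, so only the URL check is shown here):

```typescript
// Accept only https:// URLs and site-relative paths.
// Everything else is rejected, including javascript:, data:, and
// vbscript: schemes.
function isSafeUrl(url: string): boolean {
  const trimmed = url.trim();
  // Site-relative path, but not protocol-relative (//evil.example).
  if (trimmed.startsWith("/") && !trimmed.startsWith("//")) return true;
  try {
    return new URL(trimmed).protocol === "https:";
  } catch {
    return false; // not parseable as an absolute URL and not relative
  }
}
```

Note the protocol-relative case: `//evil.example` must be rejected too, because it inherits whatever scheme the current page uses and points at an attacker-controlled host.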

Secrets and credentials: the silent leak

AI assistants love putting secrets directly in code. Ask for "a function that sends email via SendGrid" and you'll get a file with an API key hardcoded on line 3. This is dangerous in two ways:

  1. The secret ends up in version control when the file is committed
  2. The AI assistant may log or display the secret in conversation history

Your rules need to cover both the code and the workflow:

## Secrets management

- NEVER hardcode API keys, tokens, passwords, or connection strings in source code
- All secrets come from environment variables: process.env.SECRET_NAME
- Never commit .env files. The .gitignore must include .env*
- When generating example code with secrets, always use placeholder values:
  API_KEY=your-api-key-here
- Never log environment variables or secrets, even in debug mode
- If a secret is accidentally committed, rotate it immediately.
  Removing it from git history is not sufficient.
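The environment-variable rule is easier to hold if missing secrets fail loudly at startup instead of at first use. A sketch, with a hypothetical `requireEnv` helper name:

```typescript
// Fail fast at startup if a required secret is missing, rather than
// crashing later on the first request that needs it.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: the key lives in the environment, never in source code.
// const sendgridKey = requireEnv("SENDGRID_API_KEY");
```

Calling `requireEnv` for every secret at module load turns a forgotten .env entry into an immediate, obvious deploy failure.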

This is one of the most important rules to enforce across your team's AI coding standards. A single leaked API key in a public repository can lead to unauthorized access within minutes. Automated scanners constantly crawl GitHub for exposed credentials.

Dependency risks in AI-generated code

AI assistants suggest packages confidently, but they have no concept of supply chain risk. Common problems include:

Outdated packages with known vulnerabilities. The AI's training data includes popular packages from years ago. It might suggest lodash@3.x when lodash@4.x has been out for years, or recommend packages that have been deprecated entirely.

Typosquatting risk. If the AI hallucinates a package name (like express-validator-utils instead of express-validator), and someone has registered that typosquatted name with malicious code, your project is compromised.

Unnecessary dependencies. AI tools add packages for things the standard library already handles. Every additional dependency is another potential attack surface.

Write rules that control dependency decisions:

## Dependencies

- Do not install new packages without explicit approval from the developer
- Before suggesting a package, check if the standard library or existing
  project dependencies already solve the problem
- Never install packages for simple operations:
  - No lodash for basic array/object operations
  - No axios when fetch is available
  - No moment.js (use date-fns or Temporal)
- When a new dependency is necessary, verify:
  - It's actively maintained (commits in the last 6 months)
  - It has no known critical vulnerabilities
  - The package name is correct (double-check for typosquatting)

Building security rules as reusable skills

Individual rules in project files work, but they create a maintenance problem. If your team maintains 15 repositories, updating a security rule means editing 15 files across 15 repos. Rules drift. Some repos get the update, others don't.

The better approach is to treat security rules as shared, versioned packages. Write them once, publish them, and install them everywhere:

# Publish your security rules as a skill
localskills publish --team my-team --name security-standards

# Install in any project, for any tool
localskills install my-team/security-standards --target cursor claude windsurf

This is the same pattern described in what agent skills are. Your security rules become a dependency, just like any other package. When you discover a new vulnerability pattern or update a security practice, bump the version and every project gets the update on next install.

A well-structured security skill might look like this:

# Security Standards v2.1

## Input validation
- Validate all user input at the API boundary using Zod schemas
- Never trust client-side validation alone
- Reject requests that fail validation with 400 and a descriptive error

## SQL injection prevention
- Always use parameterized queries
- Never interpolate variables into SQL strings

## XSS prevention
- Never use dangerouslySetInnerHTML with user content
- Sanitize with DOMPurify if rich text rendering is required
- Validate all URLs: must be https:// or relative paths

## Authentication and authorization
- Every mutating endpoint checks authorization
- Rate-limit authentication endpoints
- Sessions expire after 24 hours of inactivity

## Secrets
- All secrets come from environment variables
- Never hardcode credentials
- .env files are gitignored

## Dependencies
- No new packages without developer approval
- Prefer standard library over third-party packages

Practical steps to secure your AI coding workflow

Here's a prioritized checklist for teams that want to reduce security risk in AI-generated code without slowing down:

Week 1: Write the basics. Create rules covering SQL injection, XSS, secrets management, and input validation. These four categories cover the majority of real-world vulnerabilities. Follow the patterns in AI code generation best practices to make your rules as effective as possible.

Week 2: Add authentication and access control rules. Specify your exact auth patterns, session management approach, and role-based access control model. Be explicit about what every API route must check before executing.

Week 3: Lock down dependencies. Add rules about package approval workflows. Run npm audit or pnpm audit in CI to catch known vulnerabilities automatically.

Week 4: Publish and share. Package your security rules as a shared skill and install them across all repositories. Set up a quarterly review to keep them current.

Security rules are not optional

AI coding tools are force multipliers. They make your team faster. But they also multiply mistakes if the underlying context is wrong. An AI assistant without security rules will generate the same SQL injection vulnerability a hundred times faster than a junior developer typing it manually.

The investment in writing clear, specific security rules pays for itself the first time it prevents a vulnerability from reaching production. These rules don't just protect your code. They train the AI to generate secure code by default, across every project and every developer on your team.


Stop shipping AI-generated vulnerabilities. Create a free account on localskills.sh and publish your team's security rules as a shared skill.

npm install -g @localskills/cli
localskills login
localskills publish --name security-standards