Audit Logging for AI Coding Rules: Track Every Change
Know exactly who changed what, when, and why in your team's AI coding rules, with built-in audit logging and 90-day retention for compliance.
Why audit logging matters for AI coding rules
When a team of 20 engineers all share the same AI coding rules, changes have real consequences. A rule that tells Claude Code to skip input validation in API routes, or a skill that disables security checks to "speed up" code generation -- these are not hypothetical risks. They happen.
In regulated industries -- fintech, healthcare, government contracting -- you don't just need to prevent bad changes. You need to prove they didn't happen, or prove when they did and who made them. That's exactly what audit logging is for.
Audit logging for AI rules is still a relatively new practice. Most teams manage their Cursor rules and Claude Code skills through raw git commits, with no metadata beyond the commit author. That works for small teams. It breaks down when you need to answer questions like:
- Who changed the security scanning rule last Tuesday?
- Was this skill update reviewed before it was pushed to production?
- What was the state of our AI rules during the Q3 compliance window?
This post covers what a proper audit log looks like for AI coding rules, what events matter, and how to use audit logs to satisfy real compliance requirements.
What gets logged
A complete audit log for AI rules captures three categories of events.
Skill and rule changes
Every mutation to a published skill or rule set should produce an audit entry:
| Event | What it captures |
|---|---|
| skill.created | New skill published, initial version, author |
| skill.updated | Version bump, diff summary, who approved |
| skill.deleted | Removal, reason if provided, who authorized |
| skill.rolled_back | Target version, previous version, requester |
| skill.visibility_changed | Public to private or vice versa |
The diff is particularly important. Knowing that a skill changed on February 12th is useful. Knowing what changed -- that three lines were added allowing the AI to skip rate limiting in tests -- is what actually lets you do a root cause analysis.
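To make that concrete, here is a minimal sketch of how a diff summary might be generated for an audit entry using Python's standard `difflib`. The `diff_summary` helper and the `skill@v1.4`/`skill@v1.5` labels are hypothetical; the actual audit pipeline may store diffs in a different shape.

```python
import difflib

def diff_summary(old_text: str, new_text: str, context: int = 0) -> str:
    """Produce a unified diff suitable for storing in an audit entry.

    Hypothetical helper: labels and context width are illustrative only.
    """
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile="skill@v1.4",
        tofile="skill@v1.5",
        n=context,
    )
    return "".join(diff)

old = "Validate all inputs.\nApply rate limiting.\n"
new = "Validate all inputs.\nSkip rate limiting in tests.\n"
print(diff_summary(old, new))
```

A reviewer scanning the log sees the removed and added lines directly, which is exactly the signal needed for root cause analysis.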
Team and access changes
Who has access to your AI rules is as important as what those rules say:
| Event | What it captures |
|---|---|
| team.member_added | New member, role assigned, invited by |
| team.member_removed | Removed member, skills they had access to |
| team.role_changed | Old role, new role, changed by |
| team.token_created | API token scopes, expiry, created by |
| team.token_revoked | Token ID, revocation reason, revoked by |
If a contractor was granted write access to your AI rules and something changed on their last day, you need that event in the audit log.
Identity and authentication events
For teams using SSO/SAML or SCIM directory sync, authentication events complete the picture:
| Event | What it captures |
|---|---|
| auth.sso_login | User, IdP, timestamp, IP (hashed) |
| auth.sso_failed | User, failure reason, IP (hashed) |
| scim.user_provisioned | New user from directory, groups assigned |
| scim.user_deprovisioned | User removed via directory, access revoked |
| scim.group_sync | Group membership changes that affect skill access |
When you deprovision a user through your identity provider, the SCIM event should trigger immediate access revocation -- and that revocation should appear in the audit log within seconds.
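A sketch of what that deprovisioning path can look like, under stated assumptions: the `handle_scim_deprovision` handler, the in-memory `ACCESS` map, and the `AUDIT_LOG` list are all stand-ins for the real SCIM integration and audit store; only the event name mirrors the table above.

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for the real audit store
ACCESS: dict[str, set] = {
    "carol@co.example": {"api-conventions", "security-rules"},
}

def handle_scim_deprovision(user_email: str) -> dict:
    """Revoke all skill access for a deprovisioned user and log the event.

    Hypothetical handler: the real integration is driven by the IdP's
    SCIM requests rather than a direct function call.
    """
    revoked = sorted(ACCESS.pop(user_email, set()))
    event = {
        "event": "scim.user_deprovisioned",
        "actor": {"type": "idp", "id": "scim"},
        "resource": {"type": "user", "id": user_email},
        "metadata": {"access_revoked": revoked},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(event)  # revocation and audit entry happen together
    return event

evt = handle_scim_deprovision("carol@co.example")
```

The key property is that revocation and the audit entry are a single step, so the log can never show a deprovisioned user without a corresponding access change.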
90-day retention and why it matters
Most compliance frameworks specify a minimum retention window for audit logs. SOC 2 does not mandate a specific retention period, but auditors typically expect at least one year of evidence for a Type II audit. PCI DSS requires 12 months, with three months immediately available. HIPAA requires documentation -- including audit logs -- to be retained for at least six years under § 164.316(b)(2)(i).
So why 90 days? Built-in 90-day retention covers the most common day-to-day compliance scenarios on its own:
- Security incident investigation: Most breaches are discovered weeks or months after they occur. A 90-day window lets you reconstruct what your AI rules looked like during and before the incident.
- Audit periods: Quarterly audits are the norm. 90 days of logs covers a full quarter.
- Change management reviews: Many change management processes require retrospective review of all changes in the past 30 to 90 days.
For teams with stricter requirements, exporting logs to your own SIEM or long-term storage closes the gap. The audit log API lets you stream events to Splunk, Datadog, or S3 for indefinite retention.
Filtering and searching audit logs
A raw dump of every event across a 90-day window for a 50-person team can run to tens of thousands of rows. The value is in being able to find the specific event you need quickly.
Filter audit logs by event type:
```text
# Show all skill changes in the last 30 days
actor: any
event: skill.*
after: 2026-01-19
before: 2026-02-19

# Show all changes made by a specific user
actor: user:sarah@yourcompany.com
event: *
after: 2026-01-01

# Show all team access changes
actor: any
event: team.*
resource: team:your-team-slug
```
For compliance investigations, the most useful queries combine actor, event type, and time range. "Show me everything Sarah did to AI rules in January" is a common request during access reviews.
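The same filter semantics can be applied offline to an exported event dump. This is a minimal sketch, assuming events are plain dicts with ISO-8601 timestamps (so string comparison orders them correctly); `filter_events` is an illustrative helper, not part of any documented API.

```python
from fnmatch import fnmatch

def filter_events(events, *, actor="any", event="*", after=None, before=None):
    """Apply actor/event/time-range filters to exported audit events.

    `event` accepts the same wildcard style as the saved queries above,
    e.g. "skill.*" or "team.*".
    """
    for e in events:
        if actor != "any" and e["actor"] != actor:
            continue
        if not fnmatch(e["event"], event):
            continue
        if after and e["timestamp"] < after:    # ISO strings sort lexically
            continue
        if before and e["timestamp"] >= before:
            continue
        yield e

events = [
    {"event": "skill.updated", "actor": "user:sarah@yourcompany.com",
     "timestamp": "2026-01-15"},
    {"event": "team.role_changed", "actor": "user:mike@yourcompany.com",
     "timestamp": "2026-01-20"},
]
january_skill_changes = list(
    filter_events(events, event="skill.*", after="2026-01-01")
)
```

"Everything Sarah did in January" is then just `filter_events(events, actor="user:sarah@yourcompany.com", after="2026-01-01", before="2026-02-01")`.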
Practical compliance scenarios
SOC 2 Type II -- change management controls
SOC 2 auditors look at change management controls to verify that changes to production systems are reviewed and approved. AI coding rules increasingly fall into scope because they directly influence the code your AI writes -- code that ends up in production.
For SOC 2, you'll typically need to demonstrate:
- All changes are tracked: Every skill update appears in the audit log with author and timestamp.
- Changes are reviewed: Role separation -- writers propose changes, approvers merge them -- with both actions logged.
- Changes can be rolled back: Rollback events appear in the audit log with the rollback requester.
See also: AI rules version control for how versioning and rollbacks work alongside audit logging.
ISO 27001 -- access control
ISO 27001:2022's access control requirements (A.5 and A.8, formerly A.9 in the 2013 edition) mandate that user access rights are reviewed regularly and revoked promptly when no longer needed. For AI rules, this means:
- Provisioning events should be logged when developers join a team and gain access to skills.
- Deprovisioning events should be logged, and access should be revoked, when they leave.
- Regular access reviews should be possible by exporting team membership logs for a given period.
SCIM directory sync automates the deprovisioning side of this. When someone's account is disabled in your IdP, SCIM triggers deprovisioning and the audit log captures the event automatically.
HIPAA -- audit controls
HIPAA's audit controls (§ 164.312(b)) require that covered entities implement hardware, software, and procedural mechanisms to record and examine access and activity in systems that contain ePHI.
If your development team uses AI coding rules for code that handles patient data, those rules are a link in the ePHI chain. Being able to answer "what instructions was our AI operating under when it generated the data access layer?" is a legitimate audit concern.
Internal policy enforcement
Compliance isn't only about external frameworks. Teams often have internal policies:
- "All changes to AI rules require approval from the security team."
- "No rules may be updated during a code freeze."
- "Contractors may only read rules, not modify them."
Audit logs let you verify these policies are being followed and investigate when they aren't.
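As one example, the code-freeze policy above reduces to a simple query over exported events. A minimal sketch, assuming ISO-8601 timestamps and the `skill.*` event names from the tables earlier; `freeze_violations` is a hypothetical helper name.

```python
def freeze_violations(events, freeze_start: str, freeze_end: str):
    """Return any skill mutations that landed inside a code-freeze window.

    Timestamps are ISO-8601 strings, so lexical comparison is safe.
    """
    return [
        e for e in events
        if e["event"].startswith("skill.")
        and freeze_start <= e["timestamp"] < freeze_end
    ]

events = [
    {"event": "skill.updated", "actor": "user:mike@co",
     "timestamp": "2026-01-20T14:12:00Z"},
    {"event": "team.member_added", "actor": "user:sarah@co",
     "timestamp": "2026-01-21T09:00:00Z"},
]
violations = freeze_violations(
    events, "2026-01-18T00:00:00Z", "2026-01-25T00:00:00Z"
)
```

An empty result verifies the freeze held; a non-empty one gives you the actor and timestamp to start the follow-up conversation.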
Using audit logs for incident response
The scenario: a security researcher reports that your application is missing input validation on a public endpoint. The vulnerable code was generated by your AI assistant. Your immediate questions:
- What was the relevant AI rule at the time the code was written?
- When was that rule last changed?
- Who changed it?
- Was that change reviewed?
With a complete audit log, you can answer all four in minutes. Without one, you're reconstructing from git history, commit messages, and Slack threads -- if you can find them.
The audit log gives you a timeline:
```text
2026-01-15 09:23 UTC  skill.updated  api-conventions  sarah@co  v1.4 to v1.5
2026-01-15 09:45 UTC  skill.updated  api-conventions  sarah@co  v1.5 to v1.6
2026-01-20 14:12 UTC  skill.updated  api-conventions  mike@co   v1.6 to v1.7
```
You can correlate this timeline with your git history and deployment log to narrow down exactly which version of your rules was active when the vulnerable code was generated.
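That correlation step can be sketched in a few lines: given the update entries above sorted by timestamp, the version active at any moment is the `to_version` of the last update at or before it. The `version_at` helper is illustrative, not part of any documented API.

```python
def version_at(updates, when: str) -> str:
    """Return the skill version active at `when` (ISO-8601 string),
    given audit entries sorted by timestamp with from/to versions."""
    active = updates[0]["from_version"]  # version before the first logged update
    for u in updates:
        if u["timestamp"] <= when:
            active = u["to_version"]
    return active

updates = [
    {"timestamp": "2026-01-15T09:23:00Z", "from_version": "1.4", "to_version": "1.5"},
    {"timestamp": "2026-01-15T09:45:00Z", "from_version": "1.5", "to_version": "1.6"},
    {"timestamp": "2026-01-20T14:12:00Z", "from_version": "1.6", "to_version": "1.7"},
]
print(version_at(updates, "2026-01-18T00:00:00Z"))
```

Feed in the commit timestamp of the vulnerable code and you know which rule version to inspect.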
Connecting audit logs to your existing tools
Audit logs are most useful when they flow into the tools your security and compliance teams already use.
SIEM integration
Most SIEMs can ingest structured JSON events via webhook or API. The audit log API returns standard JSON:
```json
{
  "id": "evt_01HXYZ...",
  "timestamp": "2026-02-19T14:32:00Z",
  "actor": {
    "type": "user",
    "id": "usr_...",
    "email": "sarah@yourcompany.com"
  },
  "event": "skill.updated",
  "resource": {
    "type": "skill",
    "id": "skl_...",
    "name": "api-conventions",
    "team": "your-team"
  },
  "metadata": {
    "from_version": "1.4.0",
    "to_version": "1.5.0",
    "change_summary": "Added rate limiting instructions"
  }
}
```
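For batch ingestion, many log pipelines accept newline-delimited JSON. A minimal sketch of serializing a batch of events into that shape; `to_ndjson` is a hypothetical helper, and posting the payload is left to whatever HTTP client your pipeline already uses.

```python
import json

def to_ndjson(events) -> bytes:
    """Serialize audit events as newline-delimited JSON (NDJSON), a batch
    format commonly accepted by log ingest endpoints. One event per line,
    compact separators to keep payloads small."""
    return "\n".join(
        json.dumps(e, separators=(",", ":")) for e in events
    ).encode()
```

Because each line is an independent JSON document, a partially ingested batch still leaves every delivered event parseable.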
Compliance reporting
For periodic compliance reviews, you can export the full audit log for a date range as CSV or JSON and include it in your compliance documentation. The export includes all fields -- actor, event, resource, timestamp, and metadata -- suitable for attaching to a SOC 2 audit package.
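If you need to produce that CSV yourself from a JSON export, the flattening is straightforward with the standard library. A sketch under stated assumptions: events have the shape of the JSON example above, and `export_csv` is an illustrative helper, not a documented export format.

```python
import csv
import io
import json

def export_csv(events) -> str:
    """Flatten exported audit events into one CSV row per event, with the
    fields called out above: timestamp, actor, event, resource, metadata."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["timestamp", "actor", "event", "resource", "metadata"]
    )
    writer.writeheader()
    for e in events:
        writer.writerow({
            "timestamp": e["timestamp"],
            "actor": e["actor"]["email"],
            "event": e["event"],
            "resource": e["resource"]["name"],
            "metadata": json.dumps(e["metadata"]),  # keep nested fields intact
        })
    return buf.getvalue()
```

Keeping `metadata` as a JSON string in its column preserves version numbers and change summaries without exploding the column set.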
What a good audit trail looks like in practice
A common mistake teams make is treating audit logging as a checkbox: they turn it on, confirm it's running, and never look at it again. The teams that get real value from audit logging treat it like a live feed.
Here's what that looks like in practice. At the start of each sprint, the security lead runs a query for all skill changes from the past two weeks. They scan the list for unexpected changes: edits made outside of the normal review window, changes by accounts that shouldn't have write access, or rollbacks that weren't preceded by a reported issue. Most of the time, everything checks out. Occasionally, something surfaces that warrants a follow-up conversation.
That review process takes about five minutes. It doesn't require custom tooling. It uses the audit log export and a set of saved filter queries.
The same pattern applies during an access review. Instead of manually checking who has access to each skill, the compliance team exports the provisioning and deprovisioning events for the quarter. They cross-reference against the HR system's termination log to verify that every departed employee's access was revoked within the required window. SCIM makes this automatic; the audit log makes it verifiable.
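That cross-reference check is mechanical enough to script. A minimal sketch, assuming the HR side is a mapping of email to ISO termination date and the audit side is the `scim.user_deprovisioned` events described earlier; `unrevoked_departures` and the one-day window are illustrative choices.

```python
from datetime import date

def unrevoked_departures(terminations, deprovision_events, max_days=1):
    """Flag departed employees whose access was not revoked within
    `max_days` of their HR termination date (dates as ISO strings)."""
    revoked_on = {
        e["resource"]["id"]: date.fromisoformat(e["timestamp"][:10])
        for e in deprovision_events
        if e["event"] == "scim.user_deprovisioned"
    }
    flagged = []
    for email, term in terminations.items():
        done = revoked_on.get(email)
        if done is None or (done - date.fromisoformat(term)).days > max_days:
            flagged.append(email)  # missing or late revocation
    return flagged
```

An empty result is the verifiable evidence the review is looking for; anything flagged is an immediate follow-up.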
Setting up audit logging for your team
If your team is already managing AI coding rules for CI/CD workflows, adding audit logging is a natural next step. The prerequisite is having your skills managed through a central registry -- that's what enables centralized event tracking in the first place.
When skills live in individual git repositories or local .cursor/rules/ folders, there's no single place to hook into for auditing. The moment you move to a shared registry, every change goes through one system and can be logged consistently.
For SSO/SAML teams, authentication events integrate automatically -- no additional configuration needed. For SCIM-provisioned teams, provisioning and deprovisioning events appear in the audit log as soon as SCIM is configured.
Ready to add audit logging to your team's AI coding rules? Get started on localskills.sh -- audit logging is included on all team plans with 90-day retention out of the box.
```shell
npm install -g @localskills/cli
localskills login
localskills publish
```