
Measuring AI Skill Adoption Across Your Engineering Team

Use download analytics and adoption metrics to understand which AI coding rules your engineering team actually uses, which ones stick, and which need work.

Why measuring AI skill adoption matters

You've spent time writing AI coding rules. You've published them to your registry, added them to your team's onboarding docs, and pointed everyone to the CLI install command. A month later, you're left wondering: is anyone actually using these?

This is the silent failure mode of most AI tooling rollouts. Skills get published, people install them on day one, and then nothing. No feedback, no data, no way to know if the rules are actually shaping how your team writes code, or if they were forgotten the moment the terminal window closed.

Measuring adoption isn't about surveillance. It's about knowing whether the investment in writing and maintaining AI rules is paying off. If a skill has five installs from 40 engineers, that's a signal to investigate. Is the skill hard to install? Is it solving the wrong problem? Is the documentation unclear? Without data, you're guessing.

This post covers how to think about AI skill adoption metrics, what to track, and how localskills.sh's built-in analytics make this concrete for engineering teams.

The adoption funnel for AI skills

Before looking at specific metrics, it helps to think about adoption as a funnel:

  1. Awareness - Does the engineer know the skill exists?
  2. Installation - Did they run localskills install?
  3. Activation - Is the skill actually being used in their AI tool?
  4. Retention - Are they keeping it installed over time?
  5. Advocacy - Are they recommending it to others?

Most teams only ever measure step 2 (installation counts), if they measure anything at all. But installation without activation is meaningless, and a skill that engineers quietly uninstall after a week is worse than no skill at all, because it creates false confidence.

The good news: download and usage analytics can illuminate most of this funnel with surprisingly little instrumentation.

What download analytics actually tell you

Download counts are the most basic signal, and they're more informative than they look when you break them down correctly.

Total downloads vs. unique installs

A skill with 500 total downloads might have 50 unique engineers who installed it once, or 10 engineers who run localskills pull every day to get the latest version. These are very different adoption stories.

Total downloads tell you about engagement and update frequency. If engineers are pulling updates regularly, the skill is alive in their workflow.

Unique installs tell you about breadth. This is the number you compare against your team size to get a coverage percentage.

Raw totals hide the story over time. What you want is:

  Metric                      What it signals
  Downloads last 7 days       Current momentum, recent campaigns
  Downloads last 30 days      Sustained adoption trend
  Downloads this quarter      Business-level reporting
  Week-over-week change       Growth or decline

A skill with 100 total downloads and 80 in the last 7 days is gaining momentum. A skill with 100 total downloads and 2 in the last 30 days is in decline, maybe a newer skill replaced it, or the team moved to a different tool.
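That bucketing can be sketched as a small helper. The thresholds below are illustrative, not values localskills.sh prescribes; tune them to your team's size and release cadence:

```python
def classify_trend(total: int, last_7_days: int, last_30_days: int) -> str:
    """Rough bucketing of a skill's download trajectory.

    Thresholds are illustrative; adjust for your team size.
    """
    if total == 0:
        return "unused"
    if last_7_days / total >= 0.5:
        return "gaining momentum"
    if last_30_days / total <= 0.05:
        return "in decline"
    return "steady"

# The two examples above:
print(classify_trend(100, 80, 95))  # gaining momentum
print(classify_trend(100, 0, 2))    # in decline
```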

Install sources

Where are downloads coming from?

  • CLI installs - engineers running localskills install directly, the highest-intent signal
  • Web downloads - someone browsed to your registry page and clicked download, often exploratory
  • API calls - programmatic installs, usually CI/CD pipelines or onboarding automation
  • Team package pulls - part of a team package install

CLI installs from multiple engineers on the same team suggest organic word-of-mouth adoption, where someone saw it working and told a teammate. API installs suggest the skill has been baked into infrastructure, which is the highest maturity level.

Identifying underused skills

Once you have download data, the interesting work is finding the gaps.

Coverage rate

Take your unique installs and divide by your team size. A coverage rate below 50% on a "required" team skill means onboarding isn't working, or the skill isn't actually required - it's just aspirational.

Coverage rate = unique installs / team members with access

Example: 12 installs / 40 engineers = 30% coverage

A 30% coverage rate on a coding standards skill should trigger action. Not punishment - investigation.
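The arithmetic is a single division, but scripting it means the number shows up in a monthly review without anyone computing it by hand. A minimal sketch:

```python
def coverage_rate(unique_installs: int, team_size: int) -> float:
    """Fraction of the team that has the skill installed."""
    if team_size == 0:
        return 0.0
    return unique_installs / team_size

# The example above: 12 installs across a 40-engineer team
rate = coverage_rate(12, 40)
print(f"{rate:.0%}")  # 30%
```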

The adoption cliff

Most skill adoption follows a predictable pattern: a spike when the skill launches, a plateau, then a slow decline as people move on. The "adoption cliff" is when a skill drops to near-zero downloads after the initial push.

Skills that fall off a cliff are usually:

  • Solving a problem that doesn't recur often
  • Installed but never used (the skill content isn't connected to real workflow)
  • Superseded by a better version that was never published to the registry

Watch 30-day trends against your 7-day trends. If a skill had strong early downloads but the ratio is dropping, you have a retention problem.
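One way to make "the ratio is dropping" concrete is to compare the most recent week's pace against the 30-day weekly average. A sketch; treating 1.0 as the healthy baseline is an assumption, not a rule:

```python
def retention_ratio(last_7_days: int, last_30_days: int) -> float:
    """Recent weekly pace relative to the 30-day weekly average.

    Roughly 1.0 means a steady pace; well below 1.0 hints at a cliff.
    """
    weekly_average = last_30_days / (30 / 7)
    if weekly_average == 0:
        return 0.0
    return last_7_days / weekly_average

# 3 downloads this week vs 40 over the month: the pace has collapsed
print(round(retention_ratio(3, 40), 2))  # 0.32
```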

Version adoption lag

Versioning lets you track how long it takes your team to upgrade. If 80% of your installs are still on v1.2 and you published v2.0 three weeks ago with meaningful improvements, that's a signal that:

  • Engineers aren't running localskills pull regularly
  • Your update notification isn't reaching people
  • There might be a breaking change causing friction

Good version adoption is a leading indicator of team trust. If engineers upgrade quickly, it means they've had positive experiences with your updates. If they lag, something broke that trust.
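Version lag is easy to quantify if your analytics break installs down by version. A sketch using the v1.2/v2.0 scenario above; the install counts are made up for illustration:

```python
def version_adoption(installs_by_version: dict[str, int], latest: str) -> float:
    """Share of current installs already on the latest published version."""
    total = sum(installs_by_version.values())
    if total == 0:
        return 0.0
    return installs_by_version.get(latest, 0) / total

# 80% of installs still on v1.2 three weeks after v2.0 shipped
share = version_adoption({"1.2.0": 32, "2.0.0": 8}, latest="2.0.0")
print(f"{share:.0%} on latest")  # 20% on latest
```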

Improving adoption: what the data tells you to do

Analytics without action is just numbers. Here's how to translate the signals into improvements.

Low coverage rate: fix the onboarding path

If coverage is below 50%, the install step has friction. Check:

  • Is the skill in your team's onboarding checklist?
  • Is the install command one line, or does it require setup?
  • Does the skill require authentication that new engineers don't have?

The highest-impact fix is usually automation. Add the install command to your team's bootstrap script, and coverage typically climbs within an onboarding cycle or two.

# Add to your team's setup script
npm install -g @localskills/cli
localskills install your-org/coding-standards --target cursor claude windsurf

High installs, low engagement: the content isn't landing

If engineers install a skill but don't seem to be following it (you notice the issues it was supposed to prevent still appearing in PRs), the skill content might need work. This is harder to measure directly, but signals include:

  • Engineers who ask questions the skill was designed to answer
  • Code review comments pointing out the same patterns the skill addresses
  • Low version adoption (they installed once and stopped pulling updates)

Revisit the skill content. Are the rules specific enough? Do they include examples? Read what makes agent skills actually useful for a framework on writing rules that stick.

Strong CLI adoption, zero API adoption: no automation yet

If you see consistent CLI installs but nothing from the API or CI/CD, your skills haven't been baked into infrastructure. This is an opportunity. Skills that are installed automatically, as part of repo setup, Docker images, or onboarding scripts, have higher retention because they don't depend on engineer initiative.

Consider adding localskills to your dev container setup or repo initialization scripts. Once it's automated, coverage becomes a property of your infrastructure rather than individual behavior.
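For example, if your team uses VS Code dev containers, the install can ride along with container creation. The skill name below is a placeholder; a real devcontainer.json would also have its usual image and feature fields:

```json
{
  "postCreateCommand": "npm install -g @localskills/cli && localskills install your-org/coding-standards"
}
```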

Declining downloads: refresh or retire

Skills decay. The conventions they encode go stale, the tools they target get updated, or better alternatives emerge. A skill with consistently declining downloads over 60+ days is telling you something.

Don't just ignore it. Either:

  1. Update and republish - bump the version with meaningful improvements and announce it to your team
  2. Deprecate explicitly - mark it deprecated in the registry so engineers know to look elsewhere
  3. Consolidate - merge it into a broader skill that covers more ground

A registry full of stale, low-download skills creates noise and erodes trust. Regular pruning is maintenance, not failure.

Privacy-respecting analytics

One important note on how tracking works: the best analytics for developer tools are aggregate and anonymous.

Engineers are (rightly) sensitive about surveillance. The goal of skill analytics isn't to know which individual engineer hasn't installed the linting rules - it's to know that 60% of the team is missing a skill and to fix that systemically.

localskills.sh tracks:

  • Download counts per skill (total and time-windowed)
  • Install sources (CLI, web, API)
  • Aggregate team coverage for org-owned skills

What it doesn't expose in dashboards: individual download activity by engineer. IPs are hashed immediately and not stored in identifiable form, so there's no way to tie anonymous downloads back to a person or location. The analytics exist to help skill authors improve their content and help team leads understand coverage - not to create individual surveillance.

This is the design principle: metrics that make skills better, not metrics that police engineers.

Setting up team analytics in practice

Here's a practical workflow for an engineering lead wanting to establish AI skill adoption metrics.

Step 1: Audit your published skills

List everything your team has published. For each skill, note its purpose, the intended audience, and the tool targets.

Step 2: Set a coverage baseline

For each "required" skill, note the current unique install count and compare it to team size. This is your starting coverage rate.

Step 3: Establish a cadence

Check skill analytics monthly as part of a regular tooling review. Look for:

  • Skills with declining 30-day trends
  • Significant gaps between total downloads and recent downloads
  • Version adoption lag over 3+ weeks
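If you can export per-skill stats, the monthly pass can be a short script. The data shape below is hypothetical; adapt it to however you actually pull analytics out of your registry:

```python
TEAM_SIZE = 40  # illustrative

# Hypothetical export shape; adjust to your registry's real analytics output
skills = [
    {"name": "coding-standards", "unique": 12, "total": 100, "last_7": 1, "last_30": 3},
    {"name": "testing-conventions", "unique": 36, "total": 250, "last_7": 20, "last_30": 70},
]

for skill in skills:
    flags = []
    if skill["unique"] / TEAM_SIZE < 0.5:
        flags.append("low coverage")
    if skill["total"] > 0 and skill["last_30"] / skill["total"] <= 0.05:
        flags.append("declining")
    print(f"{skill['name']}: {', '.join(flags) or 'healthy'}")
```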

Step 4: Act on the data

For each skill below your target coverage rate, pick one intervention: update the content, improve the docs, add it to automation, or retire it.

Step 5: Close the loop

After interventions, watch whether the metrics move. If updating a skill's content leads to a bump in downloads, that's evidence the content was the bottleneck. If automation brings coverage to 95%, that's a pattern to replicate for other skills.

Connecting analytics to engineering outcomes

The deepest value of skill analytics isn't the numbers themselves - it's what they tell you about whether your AI tooling investment is translating to actual engineering outcomes.

A team where 90% of engineers have the testing conventions skill installed, and that skill is on the latest version, is more likely to write consistent tests than a team with 20% coverage. That consistency shows up in code review, in onboarding time for new engineers, and in the time spent answering "how do we do X here?" questions.

If you're already sharing AI rules with remote teams, analytics are how you know the sharing is working. If you're publishing your first skill and want to understand impact, download trends are where you start. And if you're building toward team AI coding standards, adoption metrics are the feedback loop that makes continuous improvement possible.

The difference between a skill registry that shapes how your team works and a registry that no one uses is mostly data. Teams that measure, iterate, and close the loop win.


Ready to start measuring how your team uses AI skills? Sign up for localskills.sh and get access to built-in analytics for every skill you publish.

npm install -g @localskills/cli
localskills login
localskills publish