60% of Your Coding Agent's Skill Content Is Non-Actionable — SkillReducer Cuts That Waste Without Touching Capability
An empirical study of 55,315 publicly available coding agent skills has found that over 60% of skill body content is non-actionable — boilerplate, prose commentary, and filler text the agent never actually uses. Another 26.4% of skills lack routing descriptions entirely, making them invisible to the agent router no matter how good the underlying instructions are. That's a lot of tokens paying rent for nothing. SkillReducer, introduced by Yudong Gao, Zongjie Li, and collaborators, addresses this through a two-stage framework: first optimizing the routing layer by compressing verbose descriptions and generating missing ones via adversarial delta debugging, then restructuring skill bodies using taxonomy-driven classification and progressive disclosure — loading only the actionable core at invocation and pulling supplementary content on demand. Evaluated on 600 skills, the system substantially reduces token footprint while preserving functional behavior.
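The progressive-disclosure idea is easy to picture in code. Below is a minimal sketch, assuming a simple skill object with a routing description, an actionable core, and named supplementary sections; the agent's invocation context carries only the core plus pointers, and supplements are fetched lazily. All names here are illustrative assumptions, not APIs from the SkillReducer paper.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    description: str                  # routing metadata the router matches against
    core: str                         # actionable instructions, always loaded
    supplements: dict[str, str] = field(default_factory=dict)

    def invoke_context(self) -> str:
        """Context injected at invocation: the core only, plus pointers
        to supplementary sections the agent can request later."""
        pointers = ", ".join(self.supplements) or "none"
        return f"{self.core}\n[supplementary sections available: {pointers}]"

    def expand(self, section: str) -> str:
        """Pull one supplementary section on demand."""
        return self.supplements.get(section, f"no section named {section!r}")

# Hypothetical example skill
skill = Skill(
    name="deploy",
    description="Deploy a service to staging or production.",
    core="Run `make deploy ENV=<env>`; verify health checks before promoting.",
    supplements={"rollback": "Use `make rollback` with the previous release tag."},
)

print(skill.invoke_context())    # core + section pointers, not the full body
print(skill.expand("rollback"))  # supplementary content loaded only when asked
```

The point of the split is that the rollback text costs zero tokens on the many invocations that never need it, while remaining one request away when it does.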
The findings matter beyond any single tool. Whether your team uses AGENTS.md, CLAUDE.md, or a custom instruction set, the same structural problems apply at scale: skill content grows without discipline, routing metadata gets neglected, and the result is an attention-diluted agent that costs more and performs worse than it should. The 55,315-skill dataset makes this the largest empirical study of real-world skill quality to date. SkillReducer's progressive disclosure architecture and its taxonomy of actionable versus non-actionable content give any team a concrete audit checklist for their own context files — no framework required.
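A first-pass audit in that spirit can be mechanical. The sketch below scores a skill body by the fraction of lines that look actionable, using a crude imperative-and-backtick heuristic; the keyword list is an illustrative assumption of mine, not SkillReducer's taxonomy-driven classifier.

```python
import re

# Crude actionability heuristic: a line counts as actionable if it opens
# with a common imperative or contains inline code. Keyword list is an
# assumption for illustration, not the paper's classifier.
ACTIONABLE = re.compile(
    r"^(run|use|set|add|call|check|avoid|never|always|do not)\b|`[^`]+`",
    re.IGNORECASE,
)

def audit(skill_body: str) -> float:
    """Return the fraction of non-empty lines judged actionable."""
    lines = [l.strip() for l in skill_body.splitlines() if l.strip()]
    if not lines:
        return 0.0
    hits = sum(1 for l in lines if ACTIONABLE.search(l))
    return hits / len(lines)

# Hypothetical skill body: two filler lines, two instruction lines
body = """\
This skill helps you manage database migrations in our monorepo.
Historically the team used several ad-hoc scripts for this.
Run `migrate up` before deploying.
Never edit an applied migration file.
"""
print(f"actionable ratio: {audit(body):.2f}")  # prints 0.50
```

A low ratio flags a skill worth trimming; the real system classifies content far more carefully, but even this rough score makes the waste visible in a CLAUDE.md or AGENTS.md review.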