Rules, Skills, and Workflows: How to Actually Control What Your AI Coding Agent Does in Google DeepMind's AntiGravity
Google DeepMind's AntiGravity — its VS Code fork built for agentic coding — ships a three-primitive system for programmable agent behavior that most teams currently cobble together from scattered documentation and system prompts. The `.agent/` folder exposes Rules (constraints the agent must always follow, or may decide to apply based on context), Skills (structured definitions of how to use specific tools in specific scenarios), and Workflows (chained command sequences the agent executes, with automatic parallelism for steps marked as independent). The distinctions between them are architectural, not cosmetic: always_on rules are injected on every context load and consume tokens regardless of relevance, while model_decision rules load only when the agent judges them applicable.
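As a concrete sketch of the Rules distinction and the parallelism marking: the `.agent/` folder name comes from the article, but the file names, frontmatter keys (`activation`, `description`, `parallel`), and step syntax below are illustrative assumptions, not AntiGravity's documented schema.

```markdown
<!-- .agent/rules/ — two hypothetical rule files -->

<!-- style.md -->
---
activation: always_on        # injected on every context load; costs tokens always
---
No default exports; all new modules use TypeScript strict mode.

<!-- db-migrations.md -->
---
activation: model_decision   # loaded only when the agent judges it relevant
description: Conventions for writing schema migrations
---
Migrations are timestamped and forward-only; never edit an applied migration.
```

```markdown
<!-- .agent/workflows/release.md — hypothetical workflow definition -->
---
name: release
steps:
  - run: npm test            # parallel: true marks steps safe to run concurrently
    parallel: true
  - run: npm run lint
    parallel: true
  - run: npm run build       # no flag: runs after the parallel steps complete
---
```

In this sketch, the `description` line is what the agent would read when deciding whether a model_decision rule applies, so the rule body itself stays out of the context window until needed — that is the token-budget payoff the architectural split buys.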
The practical engineering insight the guide surfaces applies to any agent system with persistent context files — Cursor's AGENTS.md, Claude Code's CLAUDE.md, Windsurf rules — not just AntiGravity. The always_on vs. model_decision framing is the best public articulation yet of a problem every team hits as its context files grow: over-constraining with always_on rules degrades agent behavior by bloating the context window, while under-constraining leaves the agent improvising in ways that produce inconsistent results. The Skills primitive formalizes something most teams currently express as plain prose: "for this kind of task, use this tool in this way." Making that machine-readable with structured frontmatter is a concrete upgrade over documentation that agents interpret with inconsistent fidelity.
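The upgrade from plain-prose convention to machine-readable Skill can be sketched like this; the frontmatter field names here are a hypothetical illustration of "structured frontmatter," not the product's actual format:

```markdown
<!-- .agent/skills/db-queries.md — hypothetical skill definition -->
---
name: run-db-query
tool: psql
when: The task requires inspecting production-replica data
inputs:
  - sql: a read-only SELECT statement
constraints:
  - Never run against the primary; use the replica connection string.
---
Run the query against the replica and paste the result into the analysis.
```

The design point is that fields like `tool`, `when`, and `constraints` give the agent unambiguous slots to match against, where a paragraph of prose leaves the same information to be re-inferred on every read.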