Your AI agent can post tweets, send emails, and deploy code. But should it?
Traditional permission systems were built for humans. Humans who take coffee breaks. Humans who read error messages. Humans who can be fired if they go rogue.
AI agents are different. They operate 24/7, execute thousands of actions per hour, and can’t be reasoned with when they exceed their bounds.
We need a new model.
The Mismatch
Role-Based Access Control (RBAC) has worked for decades. Give Bob the “editor” role. Bob can edit documents. Simple.
But RBAC assumes:
- Intermittent activity — Bob takes breaks
- Predictable patterns — Bob edits documents, not databases
- Social constraints — Bob knows not to delete production
AI agents break all three assumptions:
- They never stop
- They exhibit emergent behavior
- They have no social intuition
Give an agent the “editor” role and it might decide the most efficient edit is to delete everything and start over.
The Solution: Agent Capability Control (ACC)
ACC is a three-layer authorization framework designed for agents:
```
┌─────────────────────────────────────────┐
│            RBAC.md (Central)            │
│   Organization-wide role definitions    │
└─────────────────────────────────────────┘
                    │
        ┌───────────┴───────────┐
        ▼                       ▼
┌───────────────┐       ┌─────────────────┐
│    SOUL.md    │       │    SKILL.md     │
│ Agent's caps  │       │  Skill's needs  │
└───────────────┘       └─────────────────┘
```
Layer 1: Central Policy (RBAC.md)

Define your organization's roles and capabilities:

- admin: can do everything
- agent: can read/write data and post socially
- worker: can read data and fetch URLs
- guest: can read only
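One way the central file might encode these roles, following the same YAML-in-markdown convention as the per-agent and per-skill files. This is an illustrative sketch, not the normative RBAC.md schema:

```yaml
acc:
  roles:
    admin:
      capabilities: ["*"]
    agent:
      capabilities: [data:read, data:write, social:write]
    worker:
      capabilities: [data:read, external:fetch]
    guest:
      capabilities: [data:read]
```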
Layer 2: Agent Declaration (SOUL.md)

Each agent declares what it's allowed to do:

```yaml
acc:
  role: agent
  capabilities:
    - data:*
    - social:write
  denied:
    - infra:*
```
Layer 3: Skill Requirements (SKILL.md)

Each skill declares what it needs:

```yaml
acc:
  required:
    - social:write
    - external:post
```
At runtime, authorization is a superset test: agent.capabilities ⊇ skill.required → ALLOWED. The skill runs only if every capability it requires is covered by a capability the agent holds.
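The superset test is simple to sketch. Here is a minimal illustration using glob-style matching so that a wildcard grant like `social:*` covers `social:write`. The matching rules and function names are assumptions for illustration, not the ACC spec's implementation, and this sketch ignores the `denied` list:

```python
from fnmatch import fnmatch

def is_allowed(agent_caps: list[str], required: list[str]) -> bool:
    """True iff every required capability is covered by some agent capability.

    A grant like "social:*" covers "social:write" via glob matching.
    (Sketch only; real ACC matching rules may differ.)
    """
    return all(
        any(fnmatch(req, cap) for cap in agent_caps)
        for req in required
    )

# Ada holds social:* and external:*, so publish-twitter's needs are met.
print(is_allowed(["data:*", "social:*", "external:*"],
                 ["social:write", "external:post"]))  # True
```

A worker holding only `data:read` and `external:fetch` would fail the same check for `social:write`, which is exactly the denial shown later in the Research-Bot example.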
The Key Innovation: Monotonic Attenuation
When an agent spawns a sub-agent, permissions can only decrease, never increase.
```
Ada (agent) spawns Research-Bot (worker)

  Ada has:           [data:*, social:*, external:*]
  Research-Bot gets: [data:read, external:fetch]

Research-Bot tries to post a tweet?
  → DENIED: social:write not in capabilities.
```
This is guaranteed by construction. A sub-agent cannot grant itself more power than its parent holds. The permission chain only thins.
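The attenuation step can be sketched as a filter: the sub-agent receives only those requested capabilities that its parent already covers, so the result is always a subset of the parent's grant. Function and variable names here are illustrative, not from the ACC spec:

```python
from fnmatch import fnmatch

def attenuate(parent_caps: list[str], requested: list[str]) -> list[str]:
    """Grant a sub-agent only the requested capabilities the parent holds.

    Anything the parent cannot do is silently dropped, so the chain
    can only thin. (Sketch of monotonic attenuation, not the spec.)
    """
    return [
        cap for cap in requested
        if any(fnmatch(cap, parent) for parent in parent_caps)
    ]

ada = ["data:*", "social:*", "external:*"]
bot = attenuate(ada, ["data:read", "external:fetch", "infra:deploy"])
print(bot)  # ['data:read', 'external:fetch'] — infra:deploy was dropped
```

Because `attenuate` only ever filters, composing it down a spawn chain can never widen the grant, regardless of what any sub-agent requests.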
Prior Art
We didn’t invent this from scratch. ACC builds on proven patterns:
- Macaroons (Google, 2014) — Bearer tokens with caveats that can only be narrowed
- Biscuits (Eclipse Foundation) — Datalog policies with Ed25519 signing
We adapted these for the agent world: markdown-native, LLM-friendly, audit-ready.
What Gets Logged
Every authorization decision is recorded:
```json
{
  "timestamp": "2026-02-17T10:45:00Z",
  "agent": "ada",
  "skill": "publish-twitter",
  "decision": "allowed",
  "required": ["social:write", "external:post"],
  "granted": ["social:*", "external:*"],
  "parent_chain": ["owner:raza"]
}
```
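Emitting a record of this shape at every decision point is straightforward. A minimal sketch (the function name and log destination are assumptions, not the ACC implementation):

```python
import json
from datetime import datetime, timezone

def log_decision(agent, skill, required, granted, allowed, parent_chain):
    """Emit one authorization decision in the audit-record shape above."""
    record = {
        "timestamp": datetime.now(timezone.utc)
            .isoformat(timespec="seconds").replace("+00:00", "Z"),
        "agent": agent,
        "skill": skill,
        "decision": "allowed" if allowed else "denied",
        "required": required,
        "granted": granted,
        "parent_chain": parent_chain,
    }
    # In practice this would go to an append-only log, not stdout.
    print(json.dumps(record))
    return record
```

Recording the `granted` set and `parent_chain` alongside the decision is what makes the "why did this agent have permission?" question answerable after the fact.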
You can answer: “Why did this agent have permission to do that?”
Read the Full Spec
The complete ACC specification and whitepaper are available, including:
- Technical specification
- 10,454-word whitepaper
- LaTeX source for publication
- 20 real citations from academic literature
This is Part 3 of our research series on provable agent infrastructure.