Multi-Tenant Permissions vs Claude Code ACL
A restaurant shouldn't see another's data. An AI shouldn't run rm -rf / without asking. Two domains, one problem: who can do what?
Permission systems are boring — until they fail. In a multi-tenant restaurant platform, a misconfigured permission means the competitor sees your menu and pricing. In Claude Code, a poorly evaluated permission means the AI deletes your main branch without asking.
This article compares two production approaches: the multi-tenant RBAC of Cadences Platform (used by dozens of SaaS verticals) and the tri-state ACL system of Claude Code, whose third state ("ask") is something most RBAC frameworks lack.
At the end we propose a hybrid pattern: AI-Enhanced RBAC — classic RBAC with the three superpowers Claude Code discovered.
Classic Multi-Tenant RBAC
Cadences Platform runs multiple SaaS verticals (restaurants, travel, healthcare) on shared Cloudflare Workers + D1 infrastructure. Tenant isolation is absolute: every query carries WHERE tenant_id = ?.
🏢 RBAC Architecture
```
// Cadences Permission Model
Tenant (restaurant/agency/clinic)
└── Roles
    ├── owner   → full access + billing
    ├── admin   → settings + staff management
    ├── manager → operations + analytics
    ├── staff   → daily operations only
    └── viewer  → read-only

Enforcement: Workers middleware
request → extract tenant_id → extract role
        → check: role >= required_role? → allow / deny

Resources: menu_items, orders, settings,
           integrations, analytics, staff
```
It's a proven, simple model. Permissions resolve in a single comparison: does the user's role have access to this resource? Yes or no. No ambiguity, no intermediate states.
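That single comparison can be sketched in a few lines. This is an illustration only: the rank values, resource thresholds, and helper names are assumptions, not Cadences' actual code.

```javascript
// Hypothetical role ranks -- a higher rank implies all lower-rank permissions.
const ROLE_RANK = { viewer: 0, staff: 1, manager: 2, admin: 3, owner: 4 };

// Minimum role required per resource (illustrative values).
const REQUIRED_ROLE = {
  menu_items: 'staff',
  orders: 'staff',
  analytics: 'manager',
  settings: 'admin',
  integrations: 'admin',
  staff: 'admin',
};

// Binary check: one comparison, two outcomes, no intermediate states.
function canAccess(role, resource) {
  const required = REQUIRED_ROLE[resource];
  if (required === undefined) return false; // unknown resource: deny
  return ROLE_RANK[role] >= ROLE_RANK[required];
}
```

Note the fail-closed default: a resource missing from the table is denied rather than allowed.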
✓ Strengths
- Simple: one role × resource table
- Absolute tenant isolation
- Predictable (no surprises)
- Auditable in D1
- Near-zero latency
✗ Weaknesses
- Binary: no "depends on context"
- Doesn't differentiate by operation risk
- No human confirmation for edge cases
- Scales poorly with granular operations
- No AI evaluating anomalous patterns
Tri-State ACL with Cascading
Claude Code invented something most RBAC frameworks lack: a third state. It's not allow. It's not deny. It's ask — ask the human whether this specific operation is okay.
🔐 Permission System
```
// Claude Code Permission Sources (cascading)
Level 1: Platform rules     ← Anthropic defines
Level 2: Organization ACL   ← company defines
Level 3: Project ACL        ← .claude/settings.json
Level 4: User preferences   ← ~/.claude/settings.json

// Evaluation order (deny-first)
for source in [platform, org, project, user]:
    if source.denies(tool, args): → DENY
    if source.allows(tool, args): → ALLOW

if mode == "auto":    → consult yolo classifier
if mode == "plan":    → ASK always
if mode == "default": → ASK if destructive
```
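The deny-first loop can be sketched as follows. This mirrors the pseudocode above but is not Claude Code's internals; the source interface, mode names, and destructiveness heuristic are assumptions.

```javascript
// Each source exposes denies()/allows() predicates over a tool call.
// Evaluation is deny-first: the first source that denies wins immediately.
function evaluateCascade(sources, tool, args, mode) {
  for (const source of sources) {
    if (source.denies(tool, args)) return 'DENY';
    if (source.allows(tool, args)) return 'ALLOW';
  }
  // No explicit rule matched: fall back to the session mode.
  if (mode === 'plan') return 'ASK';
  if (mode === 'auto') return 'CLASSIFY'; // hand off to the risk classifier
  return isDestructive(tool) ? 'ASK' : 'ALLOW';
}

// Illustrative heuristic for the default mode (invented tool names).
function isDestructive(tool) {
  return ['write_file', 'delete_file', 'bash'].includes(tool);
}
```

Ordering matters: a deny at any level short-circuits before a later level gets the chance to allow, which is exactly the "deny always dominates" property.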
The 3 States
- **Allow** — Executes without asking. For read-only tools and safe operations.
- **Deny** — Never executes. For operations prohibited by policy.
- **Ask** — Human decides. For variable-risk operations that depend on context.
The Yolo Classifier: AI as Judge
When the mode is auto and a tool is in "ask" state, Claude Code doesn't ask directly — it first consults the yolo classifier: an AI model that evaluates whether the specific operation is safe in the current context.
```
LOW risk         → auto-approve
MEDIUM/HIGH risk → ask the human
```

💡 Why does this matter? The yolo classifier turns binary permissions into contextual permissions. `write_file("test.js")` is auto-approved. `write_file("package.json")` asks. Same tool, different risk based on the arguments.
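A toy heuristic makes the argument-sensitivity concrete. The real classifier is a model call evaluating context; this stand-in only encodes the two examples above plus invented file lists.

```javascript
// Toy stand-in for the classifier: same tool, risk derived from the arguments.
const SENSITIVE_FILES = ['package.json', 'package-lock.json', '.env'];

function classifyWriteRisk(path) {
  const name = path.split('/').pop();
  if (SENSITIVE_FILES.includes(name)) return 'HIGH';       // config/secrets: ask
  if (name === 'test.js' || name.endsWith('.test.js') ||
      path.startsWith('tests/')) return 'LOW';             // test files: auto-approve
  return 'MEDIUM';                                         // everything else: ask
}
```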
✓ Strengths
- Tri-state: allows "depends on context"
- 4 cascading sources (high flexibility)
- Content-aware (glob patterns)
- AI evaluates per-operation risk
- Safety circuit breaker
✗ Weaknesses
- Ephemeral (session-only, doesn't persist)
- No multi-tenant isolation
- Complexity: 4 sources × 3 states
- Risk of false positives in classifier
- Classifier latency (model call)
Comparison Table
| Aspect | Cadences | Claude Code |
|---|---|---|
| Type | Multi-tenant RBAC | Cascading rules + AI judge |
| States | 2 (allow/deny) | 3 (allow/deny/ask) |
| Granularity | Role → Resource | Tool + Content (glob patterns) |
| Sources | 1 (user role) | 4 (platform → org → project → user) |
| AI for permissions | No | Yes (yolo classifier) |
| Human confirmation | No (binary) | Yes (ask mode) |
| Persistence | D1 (permanent) | Session (ephemeral) |
| Audit | D1 logs | In-session tracking |
| Override | Admin only | User overrides project |
| Glob matching | No | Yes (*.sql, tests/*) |
Why "Ask" Changes Everything
Traditional RBAC is binary. You can or you can't. But in the real world, there are operations that depend on context:
🏢 In Cadences (today: binary)
- Delete a menu item? ALLOW if manager
- Delete all menu items? Also ALLOW if manager
- Change price from €1 to €100? ALLOW if manager
- ⚠️ All three have the same "permission" but very different risks
🤖 With tri-state (proposed)
- Delete one item? ALLOW
- Delete entire menu? ASK owner review
- Change price >50%? ASK confirmation
- ✓ Same role, but risk evaluated by content
"Ask" is the state most RBAC systems are missing. Not everything is black or white — sometimes the right answer is "you can, but confirm with someone first."
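The proposed tri-state rules for the menu example could be expressed like this. The thresholds and action names are invented for illustration; only the three scenarios come from the comparison above.

```javascript
// Hypothetical tri-state rules layered on top of a manager's blanket ALLOW.
function evaluateMenuAction(action, args) {
  if (action === 'delete_item') return 'ALLOW';
  if (action === 'delete_all') return 'ASK';   // escalate to owner review
  if (action === 'change_price') {
    const ratio = args.newPrice / args.oldPrice;
    // A change of more than 50% in either direction asks for confirmation.
    if (ratio > 1.5 || ratio < 0.5) return 'ASK';
    return 'ALLOW';
  }
  return 'DENY'; // unknown action: fail closed
}
```

The role never changes in this function; only the operation's content moves it between the three states.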
4 Levels of Control
Claude Code has 4 cascading permission sources. Each level can override the one before it with deny (can never re-allow a deny from a higher level). Cadences has a natural equivalent:
| Level | Claude Code | Cadences (proposed) |
|---|---|---|
| 1 | Platform rules (Anthropic) | Platform |
| 2 | Organization ACL | Vertical |
| 3 | Project (`.claude/settings.json`) | Tenant |
| 4 | User (`~/.claude/settings.json`) | User |
🎯 Content-aware rules: Claude Code uses glob patterns (*.sql deny, tests/*.sql allow). Cadences could use the same concept: menu.* allow, menu.bulk_delete ask. Content-based granularity is the leap that separates static RBAC from intelligent RBAC.
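A minimal glob matcher is enough to prototype such content-aware rules. This is a sketch: it supports only `*` and `?`, and its `*` matches any characters including `/`, which real glob implementations usually forbid.

```javascript
// Convert a simple glob (supporting * and ?) into an anchored RegExp.
function globToRegExp(glob) {
  // Escape regex metacharacters, then translate the glob wildcards.
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*').replace(/\?/g, '.') + '$');
}

function matchGlob(pattern, value) {
  return globToRegExp(pattern).test(value);
}
```

With this, a rule table like `menu.* allow, menu.bulk_delete ask` reduces to finding the first pattern that matches the requested action.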
AI Evaluating Permissions in SaaS
Claude Code's yolo classifier is a model that classifies operations by risk. Could Cadences use the same concept? Absolutely:
Bulk delete detection
A manager deletes 3 items → normal. A manager deletes 200 items → AI evaluates as anomalous → ASK the owner.
Price anomaly detection
Price change from €12 to €13 → normal. Change from €12 to €120 → AI detects anomaly → ASK confirmation.
Off-hours operations
Staff edit at 10:00 AM → normal. Staff bulk edit at 3:00 AM → AI detects unusual pattern → ASK.
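All three detections can be prototyped with cheap heuristics that decide whether an operation even needs the model call. Every threshold here is invented for illustration.

```javascript
// Cheap pre-filters: only operations that trip a heuristic reach the AI model.
function needsAiReview(op) {
  // Bulk operations well above a normal batch size.
  if (op.action === 'bulk_delete' && op.itemCount > 20) return true;
  // Price moves of 5x or more in either direction.
  if (op.action === 'change_price') {
    const ratio = op.newPrice / op.oldPrice;
    if (ratio >= 5 || ratio <= 0.2) return true;
  }
  // Writes far outside business hours (before 06:00 or after 23:00 local).
  const hour = op.timestamp.getHours();
  if (hour < 6 || hour >= 23) return true;
  return false;
}
```

The point of the pre-filter is cost: the classifier's latency (discussed below) is only paid on the small fraction of operations that look unusual.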
⚠️ Risks of AI-as-Judge
False positives
Legitimate operations blocked. The manager really wants to delete the menu for renovation.
Latency
The classifier needs a model call. In hot paths this could add 100-300ms.
Model failures
If the classifier fails, auto-approve? Claude Code: deny by default. Correct.
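Fail-closed wrapping can be sketched like this; the `classifyRisk` argument is a placeholder for whatever model API is in use, not a real SDK call.

```javascript
// Wrap the classifier so any failure degrades to the highest risk level
// (human review or deny), never to silent auto-approval.
async function safeClassify(classifyRisk, op) {
  try {
    const risk = await classifyRisk(op);
    if (risk === 'LOW' || risk === 'MEDIUM' || risk === 'HIGH') return risk;
    return 'HIGH'; // malformed response: treat as highest risk
  } catch (err) {
    return 'HIGH'; // model unavailable: fail closed
  }
}
```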
Design Pattern: AI-Enhanced RBAC
Combining the best of both worlds, we propose a 4-layer pattern for SaaS systems with AI tools:
🔮 AI-Enhanced RBAC
Traditional RBAC — Role → Resource → Allow/Deny. The base stays the same. Simple, predictable, auditable.
Add "Ask" state — For high-risk operations within an allowed role. Doesn't change the model — adds a third possible value.
Cascading sources — Platform → Vertical → Tenant → User. Each level inherits and can override. Deny always dominates.
AI classifier on "Ask" — When state is Ask, a model evaluates the specific arguments. Low risk → auto-approve. High risk → escalate to human.
Circuit breaker — After N consecutive anomalies, the system switches to deny-all for the session. Exactly like Claude Code.
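The circuit-breaker layer can be sketched as a per-session counter. The threshold of 3 and the reset-on-clean-operation behavior are assumptions for this sketch.

```javascript
// Per-session circuit breaker: N consecutive anomalies trip deny-all.
class PermissionBreaker {
  constructor(threshold = 3) {
    this.threshold = threshold;
    this.consecutiveAnomalies = 0;
    this.tripped = false;
  }

  record(outcome) {
    if (this.tripped) return 'DENY_ALL_SESSION'; // stays tripped for the session
    if (outcome === 'anomaly') {
      this.consecutiveAnomalies += 1;
      if (this.consecutiveAnomalies >= this.threshold) {
        this.tripped = true;
        return 'DENY_ALL_SESSION';
      }
    } else {
      this.consecutiveAnomalies = 0; // any clean operation resets the streak
    }
    return 'CONTINUE';
  }
}
```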
⚡ Implementation Sketch
```javascript
// AI-Enhanced RBAC in Cadences Workers
async function evaluatePermission(request, env) {
  const { tenantId, role, action, resource, args } = request;

  // Layer 0: RBAC base
  const rbac = await getRBAC(env.DB, tenantId, role);
  if (rbac[resource] === 'deny') return DENY;

  if (rbac[resource] === 'allow') {
    // Layer 1: is this an "ask" operation?
    const askRules = await getAskRules(env.DB, tenantId);
    const matchedRule = askRules.find(r =>
      r.action === action && matchGlob(r.pattern, args)
    );
    if (!matchedRule) return ALLOW;

    // Layer 2: AI classifier
    const risk = await classifyRisk(env.AI, {
      action, resource, args,
      context: { tenantId, role, timestamp: Date.now() }
    });
    if (risk === 'LOW') return ALLOW;
    if (risk === 'MEDIUM') return ASK(matchedRule.reviewer);
    if (risk === 'HIGH') return DENY;
  }

  // Layer 3: circuit breaker
  if (await getConsecutiveDenials(env.DB, tenantId) >= 3) {
    return DENY_ALL_SESSION;
  }
  return ASK('owner');
}
```
RBAC isn't dead — it was missing three things: a third state (ask), cascading sources, and an AI model that understands the content of the operation, not just the type.
Does your SaaS need smarter permissions?
At Cadences we implement AI-Enhanced RBAC — the pattern we discovered by comparing our multi-tenant system with Claude Code. If your permissions are too binary, we can help.
Gonzalo Monzón
Founder of CadencesLab. Multi-tenant permissions architect by day, Claude Code analyst by night, obsessed with making sure no AI does what it shouldn't.