Claude Code Comparison · 2 of 4 · 14 min read

NutriNen vs Claude Code: Context Management

A pediatric consultation can't forget an allergy. A coding session can't forget a user instruction. Two domains, one limitation: the LLM's context window is finite.

Gonzalo Monzón
[Image: data flow representing context management in AI systems]

Every application built on an LLM has the same problem: the context window is finite. What doesn't fit gets lost. And what gets lost can be dangerous.

In NutriNen Baby — our pediatric nutrition assistant — the consequence of forgetting an allergy is a dangerous food recommendation. In Claude Code, the consequence of forgetting a user instruction is a refactor that ignores the project's rules. Different domain, same risk architecture.

This article compares how two production systems solve the context management problem. It's not theoretical — these are implemented solutions, tested on real users, with measurable trade-offs.

NutriNen Baby

Profile-Based Context: Smart Selection

NutriNen doesn't compress context. It selects. The difference is fundamental: instead of reducing a long conversation to a summary, it picks exactly which data to inject in each query.

🍼 Context Architecture

// NutriNen Context Pipeline
Context Window = [
  Baby Profile       → D1 (permanent)
    ├── Age: 8 months
    ├── Weight: 8.2 kg
    ├── Allergies: [cow's milk protein]
    ├── Restrictions: [no gluten until 12m]
    └── Feeding stage: BLW phase 2

  Recent History    → D1 (last N queries)
    └── "Can she eat eggs?" → "Yes, from 6m..."

  Current Question   → User input
    └── "Can I give her plain yogurt?"
]

// No compression. Structured data + selection.

The baby profile lives in Cloudflare D1 as structured data. It's not free text — it's a typed object: age, weight, allergies (array), restrictions (array), feeding stage (enum). This means allergies are never lost because they're not in the conversation — they're in the database.
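As a rough sketch of what "typed object plus selection" means in practice (field names and the assembly function are illustrative, not NutriNen's actual schema), the profile is read from the DB and injected verbatim on every query:

```typescript
// Hypothetical profile shape — illustrative, not NutriNen's real schema.
type FeedingStage = "purees" | "blw_phase_1" | "blw_phase_2" | "mixed";

interface BabyProfile {
  ageMonths: number;
  weightKg: number;
  allergies: string[];      // e.g. ["cow's milk protein"]
  restrictions: string[];   // e.g. ["no gluten until 12m"]
  feedingStage: FeedingStage;
}

// Context by SELECTION: the profile comes straight from storage each turn,
// so allergies never depend on what survived in conversation history.
function buildContext(profile: BabyProfile, recentTurns: string[], question: string): string {
  return [
    "PROFILE (authoritative, from DB):",
    `- Age: ${profile.ageMonths} months, weight: ${profile.weightKg} kg`,
    `- Allergies: ${profile.allergies.join(", ") || "none"}`,
    `- Restrictions: ${profile.restrictions.join(", ") || "none"}`,
    `- Feeding stage: ${profile.feedingStage}`,
    "RECENT HISTORY:",
    ...recentTurns.slice(-3), // only the last N turns are selected
    `QUESTION: ${question}`,
  ].join("\n");
}
```

Because the prompt is rebuilt from the same structured row every time, the context window stays a predictable size regardless of how long the conversation gets.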

Strengths

  • Critical data never lost (structured + DB)
  • Always predictable context window
  • Profile reusable across sessions
  • Domain-specific (optimized for pediatrics)

Weaknesses

  • Can't handle truly long conversations
  • Fixed context (doesn't adapt dynamically)
  • No progressive degradation
  • Loses nuances from previous conversation
VS
Claude Code

4 Tiers of Progressive Compression

Claude Code takes the opposite approach: everything lives in the conversation, and when space runs out, it compresses progressively. It doesn't choose what to inject — it tries to keep everything, gradually sacrificing detail.

T0

Normal Operation

Everything in memory. No compression. Entire conversation in context. Works for short sessions (<30 min).

T1

Microcompact

Cleans old tool results. If Claude read a file 50 messages ago, the file content gets replaced by a placeholder. Tool calls are preserved.
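A minimal sketch of that behavior (assumed from the description above, not Claude Code's actual implementation): old tool results become placeholders, while the calls themselves survive.

```typescript
// Tier 1 "microcompact" sketch — assumed behavior, not the real code.
type Message =
  | { role: "user" | "assistant"; content: string }
  | { role: "tool_call"; name: string; args: string }
  | { role: "tool_result"; name: string; content: string };

function microcompact(history: Message[], keepLastN: number): Message[] {
  const cutoff = history.length - keepLastN;
  return history.map((msg, i) => {
    if (msg.role === "tool_result" && i < cutoff) {
      // The placeholder records that the tool ran, not its payload.
      return { ...msg, content: `[result of ${msg.name} elided to save context]` };
    }
    return msg; // tool calls and recent results are preserved untouched
  });
}
```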

T2

API-Native Clearing

Delegates to Anthropic's server. The server decides which messages to keep based on recency and relevance. Claude Code doesn't control exactly what's lost.

T3

Full Compaction

Full summary in 9 mandatory sections. The conversation gets replaced by a structured summary. Then re-reads the 5 most recent files to restore working context.

📋 The 9 Summary Sections (Tier 3)

1. Primary objective
2. Technical context
3. Work completed
4. Current task
5. Pending work
6. User messages ⚠️ MUST NOT OMIT
7. Key errors
8. Directory state
9. Conventions
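The nine sections can be thought of as a fixed template the summary must fill. The titles below come from the list above; the rendering helper is an illustrative sketch, not Claude Code's internals.

```typescript
// The nine mandatory Tier 3 sections as a fixed template.
const SUMMARY_SECTIONS = [
  "Primary objective", "Technical context", "Work completed",
  "Current task", "Pending work", "User messages", // §6 — MUST NOT OMIT
  "Key errors", "Directory state", "Conventions",
] as const;

// Render a compaction summary; missing sections are made explicit
// rather than silently dropped.
function renderSummary(content: Record<string, string>): string {
  return SUMMARY_SECTIONS
    .map((title, i) => `## ${i + 1}. ${title}\n${content[title] ?? "(none)"}`)
    .join("\n\n");
}
```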

Strengths

  • Handles hours-long sessions (3-8h)
  • Progressive degradation (no sudden failure)
  • Post-compact recovery (re-reads files)
  • Generic (works for any project)

Weaknesses

  • Lossy — inevitably loses details
  • No structured data (everything is text)
  • Nothing persists across sessions
  • Recovery depends on files (not always available)
Head to Head

Comparison Table

| Aspect          | NutriNen                   | Claude Code              |
|-----------------|----------------------------|--------------------------|
| Domain          | Pediatric health           | Code                     |
| Critical data   | Allergies, restrictions    | User instructions        |
| Persistence     | D1 (permanent)             | Per-session              |
| Strategy        | Selection (what to inject) | Progressive compression  |
| Tiers           | 1 (inject or not)          | 4 (progressive)          |
| Info loss       | Never (structured)         | Details (lossy)          |
| Typical session | 5-30 min                   | 30 min to 8 hours        |
| Recovery        | Read from DB               | Re-read files            |
| Sacred data     | Medical profile            | User messages (§6)       |
Critical Pattern

The "Sacred Data" Concept

Both systems have data that must never be lost. The difference lies in how they protect it:

🍼 NutriNen: Allergies (sacred)

A baby's allergies live in D1 as structured data. No matter how many messages are exchanged — allergies are injected in every query directly from the database.

Protection: Persistent storage. Doesn't touch the context window.

🤖 Claude Code: User Messages (sacred)

Section 6 of the Tier 3 summary is marked as MUST NOT OMIT. User instructions are the last thing to be lost, but they can be lost if the summary is too long.

Protection: Priority in the summary. But still compressible text.

💡 Universal pattern: every LLM application needs to define which data is sacred — data that can never be lost under any circumstances. The correct implementation is persistent storage, not context priority. What's in context can always be compressed; what's in the DB can't.
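One way to sketch this pattern (the `SacredStore` interface and function names are invented for illustration): sacred facts are re-read from storage on every turn and prepended to whatever the compressed history looks like, so compaction can never drop them.

```typescript
// "Sacred data" pattern sketch: facts come from persistent storage on
// every turn. `load()` stands in for a DB read (D1, SQLite, a file...).
interface SacredStore {
  load(): string[];
}

function assemblePrompt(store: SacredStore, compressedHistory: string, question: string): string {
  const sacred = store.load(); // re-read each turn, never cached in context
  return [
    "NON-NEGOTIABLE FACTS:",
    ...sacred.map(f => `- ${f}`),
    "HISTORY (may be compressed):",
    compressedHistory,
    `QUESTION: ${question}`,
  ].join("\n");
}
```

Even if `compressedHistory` is reduced to an empty string by aggressive compaction, the sacred facts still arrive intact.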

Cross-Lessons

What Each Can Learn

NutriNen ← Claude Code

Progressive compression for long queries

Some parents ask a lot. A 30-minute consultation with 20 back-and-forths can exceed the window. NutriNen could implement Tier 1 (summarize previous responses) and Tier 2 (keep only questions and key answers).
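A minimal sketch of what those tiers could look like for NutriNen (thresholds and truncation lengths are invented for illustration): keep everything while it fits, then truncate answers, then keep questions only.

```typescript
// Hypothetical progressive compression for a consultation history.
type Turn = { q: string; a: string };

function compressHistory(turns: Turn[], maxChars: number): string {
  // Tier 0: everything fits — no compression.
  const full = turns.map(t => `Q: ${t.q}\nA: ${t.a}`).join("\n");
  if (full.length <= maxChars) return full;
  // Tier 1: summarize previous responses (here: truncate them).
  const tier1 = turns.map(t => `Q: ${t.q}\nA: ${t.a.slice(0, 80)}...`).join("\n");
  if (tier1.length <= maxChars) return tier1;
  // Tier 2: keep only the questions.
  return turns.map(t => `Q: ${t.q}`).join("\n");
}
```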

Session memory for doctors

Periodic consultation notes — like Claude Code's session memory — would allow pediatricians to review consultations afterward, with an automatic summary of what was discussed.

Post-compact recovery

After compressing, Claude Code re-reads the 5 most recent files. NutriNen could re-inject the full profile and the last 2 key interactions after any context loss.

Claude Code ← NutriNen

Sacred data in persistent storage

Instead of "MUST NOT OMIT user messages" in a compressible summary, user instructions should live in persistent storage — like CLAUDE.md or a local DB. Never lost, regardless of how many compactions occur.

Structured project profile

NutriNen has a "baby profile" with typed data. Claude Code could have a "project profile": framework, language, team conventions, preferred patterns — structured data in a DB, not conversation text.
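Speculatively, such a project profile could look like this (this is not an existing Claude Code feature; field names are illustrative):

```typescript
// Hypothetical structured "project profile" — speculative sketch.
interface ProjectProfile {
  framework: string;          // e.g. "Next.js 14"
  language: string;           // e.g. "TypeScript"
  conventions: string[];      // e.g. ["no default exports"]
  preferredPatterns: string[];
}

// Rendered once per request straight from storage — like NutriNen's baby
// profile, it never competes with the conversation for window space.
function projectProfileBlock(p: ProjectProfile): string {
  return [
    `Framework: ${p.framework}`,
    `Language: ${p.language}`,
    `Conventions: ${p.conventions.join("; ")}`,
    `Preferred patterns: ${p.preferredPatterns.join("; ")}`,
  ].join("\n");
}
```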

On-demand knowledge injection

NutriNen injects only data relevant to the current question. Claude Code loads the entire conversation and then compresses. Injecting only context relevant to the current task would be more efficient.

Synthesis

Design Pattern: Hybrid Context

If we combine the best of both approaches, a 4-layer pattern emerges:

🔮 Hybrid Context Architecture

L1

Sacred Data → Persistent Storage — Allergies, user instructions, project config. In D1/DB, never in the context window as primary data.

L2

Working Context → In-Memory + Progressive Compression — The active conversation, with Claude Code's 4 tiers. Gets compressed, but sacred data is never lost (lives in L1).

L3

Domain Knowledge → On-Demand Injection — Like NutriNen: only knowledge relevant to the current question. Working in React? Inject React patterns. Asking about allergies? Inject allergy table.

L4

Session State → Periodic Snapshots — Automatic notes every N messages (like Claude Code's session memory), persisted in storage for audit and cross-session continuity.

Context management isn't a compression problem — it's a classification problem. Data that matters goes to the DB. Working data goes to context. Reference data gets injected on-demand. Everything else can be compressed without risk.
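The four layers above can be sketched as one assembly step plus a snapshot trigger. Every name and threshold here is illustrative, not a real implementation:

```typescript
// Hybrid context assembly — a sketch of the L1-L4 pattern above.
interface HybridContext {
  sacred: string[];   // L1: from persistent storage, re-read every turn
  working: string;    // L2: active conversation, possibly compacted
  domain: string[];   // L3: knowledge selected for the current question
}

function buildHybridPrompt(ctx: HybridContext, question: string): string {
  return [
    "SACRED (never compressed):",
    ...ctx.sacred.map(s => `- ${s}`),
    "WORKING CONTEXT:",
    ctx.working,
    "DOMAIN KNOWLEDGE (on-demand):",
    ...ctx.domain.map(d => `- ${d}`),
    `QUESTION: ${question}`,
  ].join("\n");
}

// L4: persist a snapshot every N messages for audit and continuity.
function shouldSnapshot(messageCount: number, everyN = 20): boolean {
  return messageCount > 0 && messageCount % everyN === 0;
}
```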

Is your LLM app losing context?

At Cadences we implement hybrid context management patterns — the same ones we use in NutriNen and discovered in Claude Code. If your chatbot forgets things, we can help.


Gonzalo Monzón

Founder of CadencesLab. NutriNen Baby engineer, Claude Code analyst, and obsessed with making sure no LLM forgets data that matters.
