SR-SI context savings scale progressively
Moe Hachem - February 18, 2026
AI context savings that work the way taxes should: progressive. Small repos see 10-20% efficiency gains. Large repos? 40-60%. The bigger your codebase, the more SR-SI helps. The AI building Olives just proved it.
Over the weekend, I built Olives using SR-SI (the AI memory methodology I published last year). The AI agent doing the actual implementation just gave me feedback on the process.
The results validated something important:
SR-SI doesn’t just work - it scales with complexity.
What the agent reported
Efficiency gains
- 30-45% less re-orientation work across turns
- 15-30% fewer “re-derive/re-explain” tokens on later turns
- Noticeably better error/regression avoidance (decisions + task state were explicit and auditable)
Why it helped
- Prevented scope drift
- Shallow index made file targeting faster
- Additive schema policy stayed consistent across changes
Scaling projection
- Small repo (50-150 files): 10-20% workflow savings
- Mid repo (300-800 files): 25-40% savings
- Large repo (1000+ files, multi-feature): 40-60% savings
With one caveat: if governance slips (docs drift out of sync with code), those gains can drop to ~15-30% even on large repos.
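The projection above is just a threshold table, and it can be read as one. A minimal sketch, assuming a hypothetical helper named `projected_savings` (the function and its exact bucket boundaries are illustrative; only the ranges and the governance caveat come from the figures reported here):

```python
def projected_savings(file_count, docs_maintained=True):
    """Return the (low, high) percent workflow-savings range projected
    for a repo of the given size. Illustrative only: thresholds between
    the post's buckets are assumed, not specified in the original."""
    if not docs_maintained:
        # Governance slipped (docs drift): gains erode even on large repos.
        return (15, 30)
    if file_count <= 150:
        return (10, 20)   # small repo (50-150 files)
    if file_count <= 800:
        return (25, 40)   # mid repo (300-800 files)
    return (40, 60)       # large repo (1000+ files, multi-feature)
```

The `docs_maintained` flag is doing the real work: the table only holds while the docs are kept current with the code, which is the governance condition the caveat describes.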
The honest limitations
- Docs can still go stale if not maintained with code
- Doesn’t remove the need to read changed source/test files
- Tool output still dominates token usage during heavy verify loops
Agent’s overall take
“Strong system for long, iterative engineering work. Best when kept short, current, and tied to concrete code paths. Weak if it turns into verbose documentation not maintained with code.”
What this means
Most AI workflows break down as projects grow. SR-SI inverts this: the bigger the codebase, the more valuable structured context management becomes.
Progressive savings where complexity works in your favor, not against you.
The methodology that let AI maintain coherence across thousands of prompts and many sessions in my research also kept the AI building Olives oriented across 100+ build iterations.
Turns out the principles scale.
Olives is still early in development - but watching the system prove itself through its own application is pretty satisfying.