Showing Posts From
Token-efficiency
-
Moe Hachem - February 18, 2026
SR-SI context savings scale progressively
AI context savings that work like taxes should: progressive. Small repos see 10-20% efficiency gains...
-
Moe Hachem - February 20, 2026
The biggest lie in AI-assisted development
The biggest lie in AI-assisted development: "Just generate better specs and your problems will go aw...
-
Moe Hachem - February 21, 2026
I cut the AI's memory and it got smarter
I gave an AI a 15,800 token memory. Then I cut it to 3,300 (update: make it 1.6k). It got smarter,...
-
Moe Hachem - February 22, 2026
SR-SI: The methodology that gives AI persistent memory across any long-running project
106x performance improvement. A self-improving loop. And a section nobody expected to write. V2 is ...
-
Moe Hachem - February 25, 2026
How I maintain coherence across 66,000 lines of code without losing the thread
Most AI-augmented development workflows break somewhere between prompt 50 and 200, or as I've come to...
-
Moe Hachem - February 27, 2026
Computation is killing collaboration
Computation is killing collaboration. The biggest hurdle in AI-assisted development isn't a lack of...
-
Moe Hachem - February 28, 2026
Why I went the opposite direction from spec-first
Everyone's rushing to AI tools that promise "comprehensive specs in one prompt." I went the opposit...
-
Moe Hachem - March 30, 2026
Why AI context degrades — and the architectural fix that actually works
Every team that works with AI long enough hits the same wall. The sessions start sharp. The model kn...