Showing Posts From: Research
-
Moe Hachem - November 17, 2025
The LLM Structural Crisis: Solving Context Decay with the AI Memory Prosthesis
When building complex systems with Large Language Models, I realized that the real crisis was not th...
-
Moe Hachem - December 10, 2025
The 200-Prompt Wall
I've spent the better part of this year building prototypes with AI assistance. Three production pro...
-
Moe Hachem - February 18, 2026
SR-SI context savings scale progressively
AI context savings that work like taxes should: progressive. Small repos see 10-20% efficiency gains...
-
Moe Hachem - February 18, 2026
What if AI could learn to remember?
What if AI could learn to remember? It already does. Just not the way you think. We've been solvi...
-
Moe Hachem - February 19, 2026
The way Einstein's brain worked is how AI should retrieve information
The way Einstein's brain worked is exactly how AI should retrieve information. It doesn't. Yet. Ein...
-
Moe Hachem - February 21, 2026
I cut the AI's memory and it got smarter
I gave an AI a 15,800 token memory. Then I cut it to 3,300 (update: make it 1.6k). It got smarter,...
-
Moe Hachem - February 22, 2026
SR-SI: The methodology that gives AI persistent memory across any long-running project
106x performance improvement. A self-improving loop. And a section nobody expected to write. V2 is ...
-
Moe Hachem - February 23, 2026
RAG gives AI a library. SR-SI gives it something closer to a memory
RAG gives AI a library. SR-SI gives AI something closer to a memory. The difference is smaller than...
-
Moe Hachem - February 27, 2026
Computation is killing collaboration
Computation is killing collaboration. The biggest hurdle in AI-assisted development isn't a lack of...