Showing Posts From: Research

The LLM Structural Crisis: Solving Context Decay with the AI Memory Prosthesis

When building complex systems with Large Language Models, I realized that the real crisis was not th...

The 200-Prompt Wall

I've spent the better part of this year building prototypes with AI assistance. Three production pro...

SR-SI context savings scale progressively

AI context savings that work the way taxes should: progressively. Small repos see 10-20% efficiency gains...

What if AI could learn to remember?

What if AI could learn to remember? It already does. Just not the way you think. We've been solvi...

The way Einstein's brain worked is how AI should retrieve information

The way Einstein's brain worked is exactly how AI should retrieve information. It doesn't. Yet. Ein...

I cut the AI's memory and it got smarter

I gave an AI a 15,800 token memory. Then I cut it to 3,300 (update: make it 1.6k). It got smarter,...

SR-SI: The methodology that gives AI persistent memory across any long-running project

106x performance improvement. A self-improving loop. And a section nobody expected to write. V2 is ...

RAG gives AI a library. SR-SI gives it something closer to a memory

RAG gives AI a library. SR-SI gives AI something closer to a memory. The difference is smaller than...

Computation is killing collaboration

Computation is killing collaboration. The biggest hurdle in AI-assisted development isn't a lack of...