Showing Posts From: Token-efficiency

SR-SI context savings scale progressively

AI context savings that work like taxes should: progressive. Small repos see 10-20% efficiency gains...

The biggest lie in AI-assisted development

The biggest lie in AI-assisted development: "Just generate better specs and your problems will go aw...

I cut the AI's memory and it got smarter

I gave an AI a 15,800 token memory. Then I cut it to 3,300 (update: make it 1.6k). It got smarter,...

SR-SI: The methodology that gives AI persistent memory across any long-running project

106x performance improvement. A self-improving loop. And a section nobody expected to write. V2 is ...

How I maintain coherence across 66,000 lines of code without losing the thread

Most AI-augmented development workflows break somewhere between prompt 50 and 200, or as I've come to...

Computation is killing collaboration

Computation is killing collaboration. The biggest hurdle in AI-assisted development isn't a lack of...

Why I went the opposite direction from spec-first

Everyone's rushing to AI tools that promise "comprehensive specs in one prompt." I went the opposit...

Why AI context degrades — and the architectural fix that actually works

Every team that works with AI long enough hits the same wall. The sessions start sharp. The model kn...