I accidentally fixed AI forgetting without embeddings, fine-tuning, or a database

I accidentally fixed AI “forgetting” without a database, embeddings, or fine-tuning, using a method I call SR-SI.

I’m a village boy from the middle of nowhere, trained as an architect, with no lab, no funding, and no team. What I did have was a deadline, a growing codebase, an increasingly forgetful AI, and an obsession with solving it.

So I did what architects do: I stopped throwing more tech at the problem and looked at the structure instead.

While everyone was building bigger pipes to move more water (longer prompts, bigger context windows, heavier infrastructure), I asked one question:

Why does the water need to move at all?

The answer was a markdown file and five rules.

No database.
No embedding pipeline.
No model fine-tuning.

Just a shallow index the AI reads before every task and updates when it’s done.
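To make the loop concrete, here is a minimal sketch of what a shallow-index protocol could look like. The file name, entry format, and helper names are my own illustrative assumptions, not the actual SR-SI spec:

```python
# Sketch of a "shallow index" loop: a plain-text map the assistant
# reads before each task and appends to afterwards.
# NOTE: file name and entry format are illustrative assumptions.
from pathlib import Path

INDEX = Path("project_index.md")

def read_index() -> str:
    """Load the whole index before starting a task (it stays small)."""
    return INDEX.read_text() if INDEX.exists() else "# Project Index\n"

def record_outcome(task: str, outcome: str) -> None:
    """Append a one-line summary after the task, keeping the map current."""
    entry = f"- **{task}**: {outcome}\n"
    INDEX.write_text(read_index() + entry)

# Usage: prepend the index to the model prompt, then log the result.
context = read_index()
prompt = context + "\nTask: refactor the auth module."
# ... call the model with `prompt` here ...
record_outcome("refactor auth module", "split login/session into two files")
```

The point of the sketch is the shape of the loop, not the details: the map lives outside the model, and the model reconstructs state from it on every pass instead of carrying state in tokens.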

MIT CSAIL recently published work on Recursive Language Models tackling the same underlying failure mode: context rot in long-running AI work.

Brilliant work. Serious engineering.

And, funnily enough, it lands on the same core insight:
Stop treating memory like “more tokens.” Treat it like reconstruction from an external map.

My point isn’t “I beat MIT.” My point is simpler:
The simplest solution that correctly understands the problem isn’t a compromise. It’s what understanding looks like.

And it turns out design-first thinking travels well: outside architecture, outside design, and straight into how we build with AI.

The best part? You can use my SR-SI methodology today.
It requires no infrastructure and no budget.
Just a text file and a protocol.