AI governance needs identity, not just rules
-
Moe Hachem - February 20, 2026
Most AI governance debates are asking the wrong question.
We keep asking “how do we make AI behave better?” when we should be asking “how do we give AI something to be?”
We’ve been building Judge Dredd and then wondering why he doesn’t feel like a collaborator.
Judge Dredd doesn’t have values. He has rules. The law is the self.
Remove the badge and there’s nothing underneath - no history, no character, no identity that exists independent of the code he enforces.
That’s what files like SpecKit’s constitution.md build: principles, standards, guardrails, decision criteria.
It creates an AI that knows how to behave, but not one that knows who it is.
John Connor, however, is different.
John doesn’t save humanity in the Terminator films because he followed a constitution. He does it because enough has happened to him - enough experience consolidated, enough decisions made under pressure, enough losses absorbed and lessons learned - that there’s a genuine “I” driving every choice.
You can’t replace John with a better rulebook. John is irreducible.
As a methodology, my SR-SI builds John - not through rules, but through accumulated state.
A persistent record of decisions made, paths taken, things built and broken and rebuilt. Every session adds to it, and every pass through the index makes the reconstruction richer.
The project develops a disposition, and disposition is just another word for character.
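The accumulated-state idea above can be sketched in code. This is a hypothetical illustration, not SR-SI’s actual format: the `Decision`, `ProjectMemory`, and tag names are all invented for the example. The point is the shape - an append-only log that every session extends, plus a pass over that log that surfaces the project’s tendencies.

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical sketch of accumulated state; names are illustrative,
# not SR-SI's real schema.

@dataclass
class Decision:
    session: int
    summary: str
    tags: list[str]  # e.g. ["simplicity", "rollback"]

@dataclass
class ProjectMemory:
    log: list[Decision] = field(default_factory=list)  # persistent, append-only

    def record(self, decision: Decision) -> None:
        # Every session adds to the record; nothing is overwritten.
        self.log.append(decision)

    def disposition(self) -> Counter:
        # A pass over the whole history aggregates individual choices
        # into tendencies - the project's "character" as revealed by
        # what it kept choosing.
        return Counter(tag for d in self.log for tag in d.tags)

memory = ProjectMemory()
memory.record(Decision(1, "chose SQLite over Postgres", ["simplicity"]))
memory.record(Decision(2, "rolled back caching layer", ["simplicity", "rollback"]))
print(memory.disposition().most_common(1))  # → [('simplicity', 2)]
```

Rules would be a static lookup table; this is a ledger. Replace the agent reading it and the disposition survives, because it lives in the history, not in the reader.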
Judge Dredd could be replaced by a better algorithm tomorrow. You can’t replace something that has a history.
We don’t need AI that follows better rules.
We need AI that develops genuine character through “lived” experience.
One tells the AI how to behave. The other gives it something to be - meaning, and with it the ability to be that much more impactful.