187 Sessions to Build It. One Night to Ship It.
For two months straight, night and day, we used AI coding assistants to build an AI governance system. 187 sessions. 16 patents filed. A memory engine, a search engine, a compliance gate, session management, context monitoring. All of it built while using the product in production to build itself.
Along the way we documented 27 process failures. Not bugs. Behavioral failures. The AI would spend five minutes searching for a file whose location it already knew. It would grep the filesystem when the answer was already in its own memory. We’d correct it. It would say “won’t happen again.” Then it would.
The memory worked. The search worked. The governance hooks worked. But the decision layer belonged to the platform. We could influence it with prompts. We could block bad actions with hooks. But we couldn’t make it decide correctly in the first place.
The Problem Was the Decision Layer
Every AI coding tool today (Claude Code, Cursor, Copilot, Windsurf) lets the AI make every decision. Which tool to use. What to search. When to delegate. There’s no independent check on those decisions. No governance enforcement. The AI decides, and you hope it decides well.
That’s fine for writing code. It’s not fine when AI controls something that can hurt someone.
So We Built Our Own Agent
After months of architecture design, documentation, refactoring, and production testing, we built the CxMS Agent. 21 files. 3,695 lines. The code came fast because every design decision had already been made through 187 sessions of real work.
The key innovation is a deterministic router that sits before the AI. When the request is predictable (“show me this file,” “search for this term”), code handles it directly. No AI involved. Zero tokens. Zero cost. The AI only fires when the router encounters something it can’t handle with simple logic.
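To make that concrete, here is a minimal sketch of the pattern. Every name, rule, and handler below is an illustrative assumption, not the actual CxMS Agent code: cheap pattern rules are tried first and dispatched to local handlers, and only an unmatched request falls through to the model.

```python
import re
from typing import Callable

# Illustrative only: a deterministic router that tries cheap, predictable
# rules first and escalates to the AI only when no rule matches.

class DeterministicRouter:
    def __init__(self, llm_fallback: Callable[[str], str]):
        self.llm_fallback = llm_fallback
        self.rules: list[tuple[re.Pattern, Callable[[re.Match], str]]] = []

    def rule(self, pattern: str):
        """Register a zero-token handler for a predictable request shape."""
        def register(handler: Callable[[re.Match], str]):
            self.rules.append((re.compile(pattern, re.IGNORECASE), handler))
            return handler
        return register

    def route(self, request: str) -> str:
        for pattern, handler in self.rules:
            match = pattern.match(request)
            if match:
                return handler(match)      # handled in code: zero tokens, zero cost
        return self.llm_fallback(request)  # only now does the AI fire


# Hypothetical usage:
router = DeterministicRouter(llm_fallback=lambda req: f"[LLM handles: {req!r}]")

@router.rule(r"show me (?P<path>\S+)")
def show_file(m: re.Match) -> str:
    with open(m.group("path")) as f:
        return f.read()

@router.rule(r"search for (?P<term>.+)")
def search_term(m: re.Match) -> str:
    return f"local index lookup for {m.group('term')!r}"  # stand-in for the search engine
```

Every request a rule catches is a request the AI never sees. That is where the zero-token, zero-cost claim comes from.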
One Architecture. Two Markets.
Here’s what stopped us cold at 2am: this is the exact same architecture as our robot safety system. Every component maps 1:1.
- Known file paths become known physical locations (walls, shelves, charging stations)
- Governance rules become safety constraints (speed limits, no-go zones, force limits)
- The tool registry becomes an actuator registry (motors, grippers, sensors)
- The deterministic router becomes the safety verdict aggregator
- The JSONL audit trail becomes EU AI Act compliance evidence
One codebase. A coding agent that validates the architecture at zero hardware cost. A robot brain that deploys the same architecture on physical hardware. Patent pending.
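A hedged sketch of that shared shape, with every name hypothetical: governance rules and safety constraints are both predicates over a proposed action, the tool registry and actuator registry are both lookup tables of executors, and every verdict is appended to a JSONL audit trail before anything runs.

```python
import json
import time
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of the shared architecture: rules are predicates over a
# proposed action, the registry maps action names to executors, and every
# verdict is written to a JSONL audit trail before anything executes.

@dataclass
class Action:
    name: str      # "write_file" for the coding agent, "move_arm" for the robot
    params: dict

Rule = Callable[[Action], bool]

class VerdictAggregator:
    def __init__(self, rules: list[Rule], registry: dict[str, Callable], audit_path: str):
        self.rules = rules            # governance rules / safety constraints
        self.registry = registry      # tool registry / actuator registry
        self.audit_path = audit_path  # JSONL audit trail / compliance evidence

    def execute(self, action: Action):
        allowed = all(rule(action) for rule in self.rules)
        with open(self.audit_path, "a") as log:
            log.write(json.dumps({
                "ts": time.time(),
                "action": action.name,
                "params": action.params,
                "allowed": allowed,
            }) + "\n")
        if not allowed:
            raise PermissionError(f"{action.name} rejected by the verdict aggregator")
        return self.registry[action.name](**action.params)
```

The coding agent fills `rules` with governance checks and `registry` with tools. The robot fills them with speed, force, and no-go-zone constraints and actuators. The class doesn’t change.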
The Only Human-Controlled AI Memory Editor
We also built something nobody else in any market has: a dashboard where a human can review, edit, and control what the AI knows and believes before it acts. Pin critical facts. Suppress wrong information. Resolve contradictions. Verify the AI’s understanding.
For a coding assistant, that’s a competitive advantage. For a robot caring for your elderly parent, that’s a safety requirement. Same dashboard. Same patent. Different stakes.
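What might that look like underneath? Here is a sketch of an assumed data model, not the actual CxMS schema: the human’s pin, suppress, and verify actions just set state on memory records, and the retrieval layer honors that state before anything reaches the AI.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical memory record: the human, not the model, sets the flags that
# the retrieval layer respects before anything reaches the AI.

class MemoryState(Enum):
    UNREVIEWED = "unreviewed"
    VERIFIED = "verified"      # human confirmed the AI's understanding
    SUPPRESSED = "suppressed"  # wrong information, never surfaced again

@dataclass
class MemoryRecord:
    fact: str
    state: MemoryState = MemoryState.UNREVIEWED
    pinned: bool = False       # pinned facts always ride along in context

def visible_memories(records: list[MemoryRecord]) -> list[MemoryRecord]:
    """What the AI is actually allowed to act on after human review."""
    usable = [r for r in records if r.state is not MemoryState.SUPPRESSED]
    # Pinned facts first, then verified facts, then everything unreviewed.
    return sorted(usable, key=lambda r: (not r.pinned, r.state is not MemoryState.VERIFIED))
```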
The Product Designed Itself
16 patents. 156 claims. 187 sessions. 27 process failures that became design requirements. Built by using the product to build the product.
The product designed itself.