OpenCxMS Foundation
Advancing transparent, human-readable context management for artificial intelligence.
AI memory should be open.
Our Mission
The OpenCxMS Foundation exists to advance the development, documentation, and education of transparent context management systems for artificial intelligence—enabling humans to maintain meaningful oversight of AI systems through open, auditable, human-readable memory.
We believe the future of AI must be one where humans remain in control, where AI "memory" is not a black box, and where transparency is built in from the start.
The Open Context Principles
We believe:
Humans and AI Are Partners
Neither is complete alone. Humans bring purpose, ethics, creativity, and direction. AI brings capability, speed, scale, and pattern recognition.
Together: intelligent action with purpose.
AI without human direction is capability without purpose. Humans with AI partnership can achieve what neither could alone.
Human Expertise Has Value
When skilled humans train AI systems, that training is an intellectual contribution. Human expertise, creativity, and experience have real value.
The future of AI should elevate human work, not erase it.
We believe humans who contribute to AI capability should be recognized, respected, and valued—not replaced or extracted from.
Physical AI Requires Transparency
As AI moves into physical form—robots, autonomous systems, critical infrastructure—the stakes multiply.
Open, auditable context is not optional for embodied AI. It is a safety requirement.
Society will demand to know what these systems have been taught. Regulators will require audit trails. Liability will require traceability.
The Vision
As robots enter homes, hospitals, warehouses, and streets, society will demand to know what they've been taught.
Open context isn't optional—it's inevitable.
The question is: who builds the standard?
We're building it. In the open. For everyone.
Why Open Source?
CxMS is MIT-licensed. Free to use. Free to modify. Free to build upon.
We chose open source because:
- AI infrastructure should be public infrastructure
- Vendor lock-in contradicts our transparency principles
- The best standards emerge from community adoption
- Trust requires verifiability—closed source can't be verified
Our commitment: the core framework will always be open.
Why a Non-Profit Foundation?
We formed a 501(c)(3) because:
- Our mission is public benefit, not shareholder returns
- Educational and safety work deserves charitable status
- Grant funding enables long-term research without VC pressure
- Non-profit structure aligns incentives with our principles
We exist to serve the community, not to extract from it.
Organizational Structure
The Foundation operates under a seven-ring organizational model:
Core Foundation
OpenCxMS Foundation: Mission, governance, and legal structure. Building safe, transparent AI infrastructure.
Allied Partners
AI Safety Organizations: Mission-aligned non-profits and research institutions.
Training Services
Data Ninja Dojo: Professional AI training and certification programs.
Products
CxMS & Tools: Open-source frameworks, plugins, and developer tools.
Hardware Partners
Robotics OEMs: Physical AI embodiment and hardware integrations.
Expertise Network
Human Trainers: Domain experts who train and validate AI systems.
Special Operations
AI-SOG: Elite operators for complex, high-stakes deployments.
OpenCxMS Studios
Our creative division handles marketing, media, graphics, and game development. Building engaging ways to demonstrate AI-human collaboration.
The Problem We're Solving
The Black Box Problem
Current AI systems store their "memory" in formats humans cannot read:
- Neural network weights
- Embedding vectors
- Opaque model states
This means:
- ✗ Humans can't verify what AI "knows"
- ✗ No audit trail when AI makes decisions
- ✗ Hidden behaviors can emerge undetected
- ✗ AI systems can develop patterns invisible to operators
Our Solution
CxMS stores AI context in plain-text files:
- Markdown documents humans can read
- JSON files humans can inspect
- Version-controlled history humans can audit
This means:
- ✓ Humans can verify AI context before deployment
- ✓ Every piece of AI "knowledge" is traceable
- ✓ Hidden behaviors become visible
- ✓ Operators remain in control
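To make the idea concrete, here is a minimal sketch of what plain-text, human-auditable context can look like. The file names, directory layout, and field names below are hypothetical illustrations, not the CxMS specification; the point is simply that every piece of context is an ordinary file a human can open, read, diff, and version-control.

```python
import json
from pathlib import Path

# Hypothetical sketch only: the directory layout and field names here are
# illustrative assumptions, not the actual CxMS file format.
context_dir = Path("context")
context_dir.mkdir(exist_ok=True)

# A context entry stored as human-inspectable JSON.
entry = {
    "topic": "deployment-checklist",
    "source": "human operator",
    "note": "Verify sensor calibration before enabling autonomous mode.",
}
(context_dir / "deployment-checklist.json").write_text(
    json.dumps(entry, indent=2), encoding="utf-8"
)

# The same knowledge rendered as Markdown for human review.
(context_dir / "deployment-checklist.md").write_text(
    "# Deployment Checklist\n\n"
    "- Verify sensor calibration before enabling autonomous mode.\n",
    encoding="utf-8",
)

# Auditing is just reading the files back; no opaque model state involved.
restored = json.loads(
    (context_dir / "deployment-checklist.json").read_text(encoding="utf-8")
)
print(restored["note"])
```

Because the files are plain text, standard tools close the loop: `git diff` shows exactly what context changed between deployments, and `git log` is the audit trail.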
Join the Movement
Organizations
- Adopt open context principles
- Partner with us on research
- Support the foundation
Leadership
Robert S Briggs II
Founder
CxMS is built by a growing community of developers who believe in transparent AI.
See our contributors on GitHub.
"AI memory should be open."
— OpenCxMS Foundation