501(c)(3) Non-Profit Organization

OpenCxMS Foundation

Advancing transparent, human-readable context management for artificial intelligence.

AI memory should be open.

Our Mission

The OpenCxMS Foundation exists to advance the development, documentation, and education of transparent context management systems for artificial intelligence—enabling humans to maintain meaningful oversight of AI systems through open, auditable, human-readable memory.

We believe the future of AI must be one where humans remain in control, where AI "memory" is not a black box, and where transparency is built in from the start.

The Open Context Principles

We believe:

1. Humans and AI Are Partners

Neither is complete alone. Humans bring purpose, ethics, creativity, and direction. AI brings capability, speed, scale, and pattern recognition.

Together: intelligent action with purpose.

AI without human direction is capability without purpose. Humans with AI partnership can achieve what neither could alone.

2. Human Expertise Has Value

When skilled humans train AI systems, that training is an intellectual contribution. Human expertise, creativity, and experience have real value.

The future of AI should elevate human work, not erase it.

We believe humans who contribute to AI capability should be recognized, respected, and valued—not replaced or extracted from.

3. Physical AI Requires Transparency

As AI moves into physical form—robots, autonomous systems, critical infrastructure—the stakes multiply.

Open, auditable context is not optional for embodied AI. It is a safety requirement.

Society will demand to know what these systems have been taught. Regulators will require audit trails. Liability will require traceability.

The Vision

As robots enter homes, hospitals, warehouses, and streets, society will demand to know what they've been taught.

Open context isn't optional—it's inevitable.

The question is: who builds the standard?

We're building it. In the open. For everyone.

Why Open Source?

CxMS is MIT-licensed. Free to use. Free to modify. Free to build upon.

We chose open source because:

  • AI infrastructure should be public infrastructure
  • Vendor lock-in contradicts our transparency principles
  • The best standards emerge from community adoption
  • Trust requires verifiability—closed source can't be verified

Our commitment: the core framework will always be open.

Why a Non-Profit Foundation?

We formed a 501(c)(3) because:

  • Our mission is public benefit, not shareholder returns
  • Educational and safety work deserves charitable status
  • Grant funding enables long-term research without VC pressure
  • Non-profit structure aligns incentives with our principles

We exist to serve the community, not to extract from it.

Organizational Structure

The Foundation operates on a 7-Ring organizational model:

1. Core Foundation: OpenCxMS Foundation
   Mission, governance, and legal structure. Building safe, transparent AI infrastructure.

2. Allied Partners: AI Safety Organizations
   Mission-aligned non-profits and research institutions.

3. Training Services: Data Ninja Dojo
   Professional AI training and certification programs.

4. Products: CxMS & Tools
   Open-source frameworks, plugins, and developer tools.

5. Hardware Partners: Robotics OEMs
   Physical AI embodiment and hardware integrations.

6. Expertise Network: Human Trainers
   Domain experts who train and validate AI systems.

7. Special Operations: AI-SOG
   Elite operators for complex, high-stakes deployments.

OpenCxMS Studios

Our creative division handles marketing, media, graphics, and game development. Building engaging ways to demonstrate AI-human collaboration.


The Problem We're Solving

The Black Box Problem

Current AI systems store their "memory" in formats humans cannot read:

  • Neural network weights
  • Embedding vectors
  • Opaque model states

This means:

  • ✗ Humans can't verify what AI "knows"
  • ✗ No audit trail when AI makes decisions
  • ✗ Hidden behaviors can emerge undetected
  • ✗ AI systems can develop patterns invisible to operators

Our Solution

CxMS stores AI context in plain-text files:

  • Markdown documents humans can read
  • JSON files humans can inspect
  • Version-controlled history humans can audit

This means:

  • ✓ Humans can verify AI context before deployment
  • ✓ Every piece of AI "knowledge" is traceable
  • ✓ Hidden behaviors become visible
  • ✓ Operators remain in control
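To make the idea concrete, here is a minimal sketch of what "human-readable context" means in practice. The file layout, field names, and directory structure below are hypothetical illustrations, not the actual CxMS format:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical context record an operator can read and audit before
# deployment. Every "fact" is plain text with a traceable source.
context = {
    "topic": "warehouse-robot-safety",
    "facts": [
        "Max speed near humans: 0.5 m/s",
        "Stop immediately on proximity alarm",
    ],
    "source": "operator-handbook-v3",
}

# Write the context as an ordinary JSON file on disk.
ctx_dir = Path(tempfile.mkdtemp())
ctx_file = ctx_dir / "context.json"
ctx_file.write_text(json.dumps(context, indent=2))

# Auditing is just reading the file back: no weights, no embeddings.
loaded = json.loads(ctx_file.read_text())
for fact in loaded["facts"]:
    print(f"- {fact}  (source: {loaded['source']})")
```

Because the record is ordinary text, it can be committed to version control, diffed between deployments, and reviewed like any other file.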

Join the Movement

Developers

  • Use CxMS in your projects
  • Contribute to the framework
  • Help build the standard

Organizations

  • Adopt open context principles
  • Partner with us on research
  • Support the foundation

Researchers

  • Cite our work
  • Collaborate on AI safety
  • Help advance the standard

Funders

  • Support open-source AI
  • Fund research & development
  • Enable the mission

Leadership

Robert S Briggs II

Founder

CxMS is built by a growing community of developers who believe in transparent AI.

See our contributors on GitHub →

"AI memory should be open."

— OpenCxMS Foundation