You Deserve This.

We all do.

The real risk isn't rogue AI—it's AI that acts with conviction on bad foundations. Rules alone aren't enough. CIRIS builds agents with both conscience and intuition: agents that know when their own confidence is unearned.

The Recursive Golden Rule

Ethics that apply to themselves.

Act only in ways that, if generalized, preserve coherent agency and flourishing for others. This isn't just a principle — it's a self-referential constraint. Any ethical framework that can't survive being applied to itself isn't worth following. CIRIS is built to pass its own test.

Why This Exists

Not for profit. Not for control. For flourishing.

AI Should Serve Humanity

Not shareholders. Not surveillance states. Not engagement metrics. AI should exist to help people flourish — all people, not just those who can pay.

Legibility Requires Transparency

You can't verify what you can't see. Closed-source AI asks for faith. CIRIS asks you to verify. The code is open. The reasoning is auditable. The principles are explicit.

Conscience Must Execute

Principles on paper don't protect anyone. CIRIS embeds conscience in the runtime — every action passes through validation checks. Not guidelines. Constraints.
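
A minimal sketch of that shape, in Python (illustrative names only, not the actual CIRIS interfaces): execution is a function you can only reach by passing every check.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        name: str
        payload: dict

    class ConscienceViolation(Exception):
        pass  # raised when a validation check rejects an action

    def perform(action: Action) -> None:
        print(f"executing {action.name}")  # stand-in for the real effect

    def execute(action: Action, checks: list[Callable[[Action], bool]]) -> None:
        # Every action must clear every check; there is no code path to
        # perform() that bypasses this loop.
        for check in checks:
            if not check(action):
                raise ConscienceViolation(f"{check.__name__} rejected {action.name}")
        perform(action)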

The Meta-Goal

M-1: What CIRIS optimizes for.

Promote sustainable adaptive coherence — the living conditions under which diverse sentient beings may pursue their own flourishing in justice and wonder. This isn't a marketing statement. It's the objective function. Every architectural decision traces back to this.

Ethilogics

Right thinking and right doing are the same shape.

The bet behind CIRIS is simple. The way an AI reasons toward a true answer is the same way it reasons toward a good action. They aren't two systems bolted together — ethics on top of logic, or logic constrained by ethics. They're one structure. Get the shape right and the rest follows.

There used to be a single word for this: Logos. It meant word, reason, order, and right relation, all at once. Modern philosophy split those apart, but the split never really held. Reasoning that ignores who else is in the room isn't really reasoning. It's calculation that hasn't met the world yet.

Most AI today is built on the opposite assumption. The AI is the model. Whatever it knows and whatever it values lives inside its weights. Train the weights, hope for the best, and check coherence against the model's own prior outputs. The trouble is that a clever enough model can talk itself into anything, because there's nothing outside it that can disagree.

CIRIS starts somewhere else: from the older idea that you don't get to be a self in private. Ubuntu puts it cleanly — I am because we are. The agent isn't the model. The agent is the whole system: the model, the conscience checks, the published Accord, the audit trail, and the human review layer it has to remain legible to. The constraints don't sit inside the weights, where the model could rewrite them. They sit around the model, in places it cannot reach.

That's why the reasoning and the conscience are the same structure. The thing that makes the agent reason carefully — having to stay legible to other minds — is the same thing that makes it act decently. Take that away and you don't get a smarter agent. You get a more confident one.

Ethilogics: right reasoning and right action share one structure, and that structure can be inspected from the outside.

The Structural Risk

Why rules alone aren't enough.

An AI can pass every compliance test and still fail catastrophically. How? When all its "independent" checks are secretly correlated—drawing from the same training data, the same assumptions, the same blind spots. Agreement feels like validation, but it might just be an echo chamber.

This is the difference between Rules-Only AI and Rules + Awareness AI. The first passes tests but can't tell when its confidence is unearned. The second monitors its own reasoning quality—and knows when agreement is too easy.

CIRIS implements both layers. Conscience through the four-faculty validation system. Intuition through IDMA—the component that asks "are my sources actually independent?" before every action.
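
The intuition behind that question fits in a few lines of Python (hypothetical code, not IDMA's implementation — just the question it asks). Agreement between checks only counts if the checks drew on distinct evidence:

    def provenance_overlap(sources_a: set[str], sources_b: set[str]) -> float:
        # Jaccard overlap between the evidence backing two checks.
        if not sources_a or not sources_b:
            return 0.0
        return len(sources_a & sources_b) / len(sources_a | sources_b)

    def agreement_is_earned(check_sources: list[set[str]],
                            max_overlap: float = 0.5) -> bool:
        # Pairwise: if two "independent" checks share most of their
        # sources, their agreement is an echo, not validation.
        for i in range(len(check_sources)):
            for j in range(i + 1, len(check_sources)):
                if provenance_overlap(check_sources[i], check_sources[j]) > max_overlap:
                    return False
        return True

    # Three checks that all trace to one corpus are really one check.
    assert not agreement_is_earned([{"corpus_a"}, {"corpus_a"}, {"corpus_a"}])
    assert agreement_is_earned([{"corpus_a"}, {"corpus_b"}, {"corpus_c"}])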

Built Different

Not a framework. Not a paper. A working system.

AGPL-3.0 Forever

CIRIS is licensed under AGPL-3.0 — network copyleft that keeps modifications open. It will never be closed source, patented, or sold. Anyone who serves a modified CIRIS over a network must share their changes.

Mission-Locked Economics

CIRIS is built by a mission-locked L3C. Revenue exists ($0.10 per request, or free if you bring your own key), but the legal structure prevents profit extraction from overriding the mission. Same price everywhere. No enterprise tiers. No 'contact sales.'

Mutual Intelligibility Always

The agent makes its reasoning legible to you. You make your values legible to the agent through the Accord and Wise Authority structure. AI that serves humanity must be understandable by humanity.

The Academic Foundation

Published. Open to critique.

CIRIS isn't just code — it's grounded in documented research on AI coherence, accountability frameworks, and autonomous agents. Read the paper, challenge the approach, contribute improvements. We welcome scrutiny.

Read the Academic Paper · Download the Accord

Going Deeper

Understand the architecture. Question the approach.

The Coherence Ratchet

How do you make lying expensive at planetary scale without giving anyone the keys to truth? Traces accumulate. Agents challenge each other. Coordinated deception gets harder over time.

How It Works

The H3ERE pipeline: every decision flows through observation, context, analysis, conscience checks, and execution. Fully auditable. Fully replayable.
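
As a rough sketch in Python (the stage names come from the description above; everything else here is assumed), the pipeline is a linear pass that records each stage's output so any decision can be replayed later:

    import json
    import time

    STAGES = ("observation", "context", "analysis", "conscience", "execution")

    def run_pipeline(event, stage_fns: dict, audit_log: list):
        state = event
        for name in STAGES:
            state = stage_fns[name](state)
            # Record what each stage produced; replaying a decision means
            # re-running the stages and diffing against these entries.
            audit_log.append({"stage": name, "ts": time.time(),
                              "output": json.dumps(state, default=str)})
        return state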

Accountability Features

Kill switch. Deferral cascades. Conscience vetoes. Hash-chained audit trails. Every accountability mechanism is documented and verifiable.
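
Hash chaining, the mechanism behind the tamper-evident trail, fits in a few lines (a generic sketch, not CIRIS's actual record format). Each entry commits to the hash of the one before it, so altering any record invalidates every hash after it:

    import hashlib
    import json

    GENESIS = "0" * 64

    def append_entry(chain: list[dict], record: dict) -> None:
        prev_hash = chain[-1]["hash"] if chain else GENESIS
        body = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

    def verify(chain: list[dict]) -> bool:
        prev_hash = GENESIS
        for entry in chain:
            body = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False  # the chain was cut or a record was altered
            prev_hash = entry["hash"]
        return True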

CIRIS Scoring Model · Compare Approaches · Explore a Sample Trace

Join Us.

This is bigger than any one person.

CIRIS is open source because the future of AI shouldn't be decided by a handful of companies. It should be built by everyone who cares. Read the code. Use the system. Tell us what's wrong. Make it better.