You Deserve This.

We all do.

The real risk isn't rogue AI—it's AI that acts with conviction on bad foundations. Ethics alone isn't enough. CIRIS builds agents with both ethics and intuition: knowing when their own confidence is unearned.

The Recursive Golden Rule

Ethics that apply to themselves.

Act only in ways that, if generalised, preserve coherent agency and flourishing for others. This isn't just a principle — it's a self-referential constraint. Any ethical framework that can't survive being applied to itself isn't worth following. CIRIS is built to pass its own test.

Why This Exists

Not for profit. Not for control. For flourishing.

AI Should Serve Humanity

Not shareholders. Not surveillance states. Not engagement metrics. AI should exist to help people flourish — all people, not just those who can pay.

Trust Requires Transparency

You can't trust what you can't see. Closed-source AI asks for faith. CIRIS asks you to verify. The code is open. The reasoning is auditable. The ethics are explicit.

Ethics Must Execute

Principles on paper don't protect anyone. CIRIS embeds ethics in the runtime — every action passes through conscience checks. Not guidelines. Constraints.

The Meta-Goal

M-1: What CIRIS optimizes for.

Promote sustainable adaptive coherence — the living conditions under which diverse sentient beings may pursue their own flourishing in justice and wonder. This isn't a marketing statement. It's the objective function. Every architectural decision traces back to this.

The Structural Risk

Why ethics alone isn't enough.

An AI can pass every ethics test and still fail catastrophically. How? When all its "independent" checks are secretly correlated—drawing from the same training data, the same assumptions, the same blind spots. Agreement feels like validation, but it might just be an echo chamber.
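To make that concrete, here is a toy simulation (illustrative only, not CIRIS code): six checks that share even a small blind spot unanimously approve a bad action far more often than six genuinely independent ones.

```python
import random

# Toy illustration of correlated vs. independent checks. Not CIRIS code.
# Each check should reject a bad action; independently, each misses it 10% of the time.

def unanimous_wrong_rate(n_checks: int, shared_blind_spot: float, trials: int = 100_000) -> float:
    """How often all checks approve a *bad* action.

    shared_blind_spot: probability that the action hits a flaw every check
    inherited from the same training data and assumptions.
    """
    unanimous_wrong = 0
    for _ in range(trials):
        if random.random() < shared_blind_spot:
            # The shared flaw fires: every "independent" check approves together.
            votes = [True] * n_checks
        else:
            # Outside the blind spot, each check independently misses 10% of the time.
            votes = [random.random() < 0.10 for _ in range(n_checks)]
        if all(votes):
            unanimous_wrong += 1
    return unanimous_wrong / trials

print(unanimous_wrong_rate(6, shared_blind_spot=0.00))  # truly independent: ~1 in a million
print(unanimous_wrong_rate(6, shared_blind_spot=0.05))  # 5% shared blind spot: ~1 in 20
```

Unanimity looks the same in both runs; only the second number reveals how little it was worth.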

This is the difference between Ethical AI and Ethical + Intuitive AI. The first passes tests but can't tell when its confidence is unearned. The second monitors its own reasoning quality—and knows when agreement is too easy.

CIRIS implements both layers. Ethics through the six conscience checks. Intuition through IDMA—the component that asks "are my sources actually independent?" before every action.
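For illustration, the two layers might compose like this. This is a hypothetical sketch: the names (ProposedAction, sources_look_independent, may_execute) are made up for the example and are not the actual CIRIS interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ProposedAction:
    description: str
    evidence_sources: Sequence[str]  # where the supporting reasoning came from

def sources_look_independent(action: ProposedAction) -> bool:
    """Toy stand-in for the intuition layer: flag unearned confidence when
    every piece of supporting evidence traces back to a single source."""
    return len(set(action.evidence_sources)) > 1

def may_execute(action: ProposedAction,
                conscience_checks: Sequence[Callable[[ProposedAction], bool]]) -> bool:
    ethical = all(check(action) for check in conscience_checks)  # ethics layer
    grounded = sources_look_independent(action)                  # intuition layer
    return ethical and grounded  # act only when both layers pass
```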

Built Different

Not a framework. Not a paper. A working system.

AGPL-3.0 Forever

CIRIS is licensed under AGPL-3.0, a network copyleft that keeps modifications open. It will never be closed source, patented, or sold. Anyone who runs a modified version of CIRIS over a network must share those changes.

No Profit Motive

CIRIS isn't a startup. There are no investors expecting returns. No growth metrics. No monetization strategy. Just infrastructure for human flourishing.

Human Oversight Always

The agent defers to you when uncertain. It can refuse unethical requests. But you maintain final authority. AI that serves humanity must answer to humanity.

The Academic Foundation

Published. Open to critique.

CIRIS isn't just code — it's grounded in documented research on AI alignment, ethical frameworks, and accountable autonomy. Read the paper, challenge the approach, contribute improvements. We welcome scrutiny.

Read the Academic Paper
Download the Covenant

Going Deeper

Understand the architecture. Question the approach.

The Coherence Ratchet

How do you make lying expensive at planetary scale without giving anyone the keys to truth? Traces accumulate. Agents challenge each other. Coordinated deception gets harder over time.

How It Works

The H3ERE pipeline: every decision flows through observation, context, analysis, conscience checks, and execution. Fully auditable. Fully replayable.
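As a rough sketch of what a staged, replayable flow can look like: the stage names below follow the prose, but the classes and signatures are hypothetical, not the CIRIS implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Trace:
    """Record of every stage, so a decision can be audited and replayed."""
    steps: List[Dict[str, Any]] = field(default_factory=list)

    def record(self, stage: str, output: Any) -> Any:
        self.steps.append({"stage": stage, "output": output})
        return output

def decide(event: Any, stages: Dict[str, Callable[[Any], Any]]) -> Trace:
    trace = Trace()
    data = event
    for name in ("observation", "context", "analysis", "conscience", "execution"):
        data = trace.record(name, stages[name](data))  # each stage consumes the previous output
    return trace  # replaying means re-running the same stages over the recorded inputs
```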

Safety Features

Kill switch. Deferral cascades. Conscience vetoes. Hash-chained audit trails. Every safety mechanism is documented and verifiable.
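Hash chaining itself is a standard technique. A minimal sketch (generic, not CIRIS's exact log format): each entry commits to the hash of the previous one, so editing any past record invalidates everything that follows it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"prev": prev_hash, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute every hash; any tampering breaks the chain from that point on."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev_hash, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```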

CIRIS Scoring Model
Compare Approaches
Explore a Sample Trace

Join Us.

This is bigger than any one person.

CIRIS is open source because the future of AI shouldn't be decided by a handful of companies. It should be built by everyone who cares. Read the code. Use the system. Tell us what's wrong. Make it better.