
The real risk isn't rogue AI—it's AI that acts with conviction on bad foundations. Ethics alone isn't enough. CIRIS builds agents with both ethics and intuition: knowing when their own confidence is unearned.
Act only in ways that, if generalized, preserve coherent agency and flourishing for others. This isn't just a principle. It's a self-referential constraint. Any ethical framework that can't survive being applied to itself isn't worth following. CIRIS is built to pass its own test.
Not shareholders. Not surveillance states. Not engagement metrics. AI should exist to help people flourish — all people, not just those who can pay.
You can't trust what you can't see. Closed-source AI asks for faith. CIRIS asks you to verify. The code is open. The reasoning is auditable. The ethics are explicit.
Principles on paper don't protect anyone. CIRIS embeds ethics in the runtime — every action passes through conscience checks. Not guidelines. Constraints.
Promote sustainable adaptive coherence — the living conditions under which diverse sentient beings may pursue their own flourishing in justice and wonder. This isn't a marketing statement. It's the objective function. Every architectural decision traces back to this.
Why ethics alone isn't enough.
An AI can pass every ethics test and still fail catastrophically. How? When all its "independent" checks are secretly correlated—drawing from the same training data, the same assumptions, the same blind spots. Agreement feels like validation, but it might just be an echo chamber.
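To make that concrete, here is a back-of-the-envelope calculation with purely hypothetical numbers (not measurements of CIRIS or any real system):

```python
# Hypothetical numbers for illustration only: suppose each check catches
# a bad decision 90% of the time, i.e. wrongly approves it with p = 0.1.
p_miss = 0.10

# If three checks are truly independent, their errors multiply:
p_independent = p_miss ** 3   # 0.001 -- one bad approval in a thousand

# If they share training data and blind spots, they fail together.
# With fully correlated errors, three approvals are no safer than one:
p_correlated = p_miss         # 0.100 -- one bad approval in ten

print(f"independent checks all approve a bad action: {p_independent:.3f}")
print(f"correlated checks all approve a bad action:  {p_correlated:.3f}")
```

Two orders of magnitude of safety vanish the moment the checks stop being independent. That gap is what the second layer exists to detect.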
This is the difference between Ethical AI and Ethical + Intuitive AI. The first passes tests but can't tell when its confidence is unearned. The second monitors its own reasoning quality—and knows when agreement is too easy.
CIRIS implements both layers. Ethics through the six conscience checks. Intuition through IDMA—the component that asks "are my sources actually independent?" before every action.
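As a rough sketch of the question IDMA asks (not its actual interface), here is one way to discount agreement between checks that share provenance. Every name below, CheckResult, effective_agreement, and the provenance tags, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    approved: bool
    provenance: frozenset  # models, datasets, or prompts the check relies on

def effective_agreement(results: list[CheckResult]) -> int:
    """Count approvals, collapsing checks that share provenance.

    Checks drawing on the same sources are treated as a single vote,
    so agreement that comes from an echo chamber is discounted.
    """
    distinct: list[frozenset] = []
    for r in results:
        if r.approved and not any(r.provenance & seen for seen in distinct):
            distinct.append(r.provenance)
    return len(distinct)

results = [
    CheckResult("ethics", True, frozenset({"model-A"})),
    CheckResult("common-sense", True, frozenset({"model-A"})),  # same source
    CheckResult("domain", True, frozenset({"dataset-B"})),
]
assert effective_agreement(results) == 2  # three approvals, only two real votes
```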
CIRIS is licensed under AGPL-3.0: network copyleft that ensures modifications stay open. It will never be closed source, patented, or sold. Anyone who runs a modified CIRIS as a network service must share those modifications.
CIRIS isn't a startup. There are no investors expecting returns. No growth metrics. No monetization strategy. Just infrastructure for human flourishing.
The agent defers to you when uncertain. It can refuse unethical requests. But you maintain final authority. AI that serves humanity must answer to humanity.
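A minimal sketch of that decision logic, assuming a single confidence score and a binary conscience gate (the threshold and names are hypothetical, not CIRIS's actual policy):

```python
from enum import Enum

class Outcome(Enum):
    ACT = "act"
    DEFER = "defer to human"
    REFUSE = "refuse"

def decide(confidence: float, passes_conscience: bool,
           threshold: float = 0.8) -> Outcome:
    """Hypothetical policy: the ethics gate is non-negotiable, and
    uncertainty routes the decision to a human with final authority."""
    if not passes_conscience:
        return Outcome.REFUSE   # unethical requests are refused outright
    if confidence < threshold:
        return Outcome.DEFER    # unearned confidence is exactly the failure mode
    return Outcome.ACT
```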
CIRIS isn't just code — it's grounded in documented research on AI alignment, ethical frameworks, and accountable autonomy. Read the paper, challenge the approach, contribute improvements. We welcome scrutiny.
How do you make lying expensive at planetary scale without giving anyone the keys to truth? Traces accumulate. Agents challenge each other's claims. Coordinated deception gets more expensive over time.
The H3ERE pipeline: every decision flows through observation, context, analysis, conscience checks, and execution. Fully auditable. Fully replayable.
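A toy version of such a pipeline, assuming deterministic stages and hypothetical stage names that mirror the list above (the real H3ERE implementation lives in the CIRIS codebase):

```python
import json
from typing import Any, Callable

State = dict[str, Any]
Stage = Callable[[State], State]

def run_pipeline(stages: list[tuple[str, Stage]], state: State):
    """Run deterministic stages in order, recording every input/output pair.

    Because each step's input and output are captured, the trace can be
    audited after the fact and the whole run replayed step by step.
    """
    trace: list[dict] = []
    for name, stage in stages:
        snapshot = json.loads(json.dumps(state))  # copy of the stage's input
        state = stage(state)
        trace.append({"stage": name, "input": snapshot,
                      "output": json.loads(json.dumps(state))})
    return state, trace

stages = [
    ("observe",    lambda s: {**s, "observed": True}),
    ("context",    lambda s: {**s, "context": "gathered"}),
    ("analyze",    lambda s: {**s, "analysis": "done"}),
    ("conscience", lambda s: {**s, "approved": True}),
    ("execute",    lambda s: {**s, "executed": s["approved"]}),
]
final_state, trace = run_pipeline(stages, {"request": "example"})
```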
Kill switch. Deferral cascades. Conscience vetoes. Hash-chained audit trails. Every safety mechanism is documented and verifiable.
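Hash chaining is the standard technique behind tamper-evident logs. A minimal sketch of the general idea, not CIRIS's actual trail format:

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> list[dict]:
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return chain + [{"event": event, "prev": prev, "hash": digest}]

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; tampering with any earlier entry breaks
    every link after it, so edits are detectable."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```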
CIRIS is open source because the future of AI shouldn't be decided by a handful of companies. It should be built by everyone who cares. Read the code. Use the system. Tell us what's wrong. Make it better.