First Contact.

Two meanings. One framework.

Whether you're deploying your first CIRIS agent or exploring why cooperation might be the cheapest survival strategy there is — start here.

Maintaining a coherent lie across multiple independent checks is harder than telling the truth. We think this has implications beyond computer science.

Different viewpoints, genuine independence, and people who cooperate over time — these make lying harder, but they also make communities stronger. CIRIS is built on that observation.

Two Meanings, One Framework

Every first contact requires trust.

CIRIS handles both: getting your first ethical AI agent running, and understanding the idea behind the framework.

Your First Contact with CIRIS

You just heard about CIRIS. You want to go from “what is this?” to a running agent in minutes. Start with the quickstart below.

Begin the Quickstart

The Coherence Thesis

Why does the framework work? Lying is expensive and truth is cheap — and that pattern shows up everywhere, from ecosystems to economies to AI systems. We think that's worth paying attention to.

Read the Thesis

The Core Idea

Lying Is Expensive. Truth Is Cheap.

“Oh, what a tangled web we weave, when first we practise to deceive.” — Sir Walter Scott

Everyone already knows this. A truth-teller just describes what happened. A liar has to remember which story they told to which person — and keep it all consistent with reality and every other lie. Each new person who asks makes the web harder to maintain.

Now imagine five witnesses to a car accident. If they all watched the same dashcam, you have one perspective repeated five times. But if each stood at a different corner, fooling all five becomes genuinely hard. That's the difference between echo and independence.

This is the idea behind the Coherence Ratchet. CIRIS counts how many genuinely independent perspectives checked a decision and adjusts for how similar those perspectives are to each other. When real independence drops too low, the system flags the reasoning as fragile and asks a human to look at it.

1. Count the Sources

How many independent viewpoints actually checked this decision? Not how many sources exist — how many are genuinely different from each other.

2. Check for Echoes

Are these sources actually independent, or are they copying from the same place? Five news articles rewriting one press release is one opinion, not five.

3. Escalate or Proceed

If enough truly independent perspectives agree, proceed with confidence. If not, the system pauses and asks a human. No agent makes high-stakes decisions on thin evidence. A short sketch of this logic follows below.
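
Here is a minimal sketch of that ratchet in Python. Everything in it is illustrative rather than CIRIS internals: the function name effective_sources, the pairwise similarity callback, and the threshold of 2.0 are assumptions made for the example.

def effective_sources(sources, similarity, threshold=2.0):
    """Steps 1 and 2: count sources while discounting echoes.

    sources    -- identifiers for the perspectives that checked a decision
    similarity -- callable returning 0.0 (fully independent) to 1.0
                  (identical) for any two sources
    threshold  -- minimum effective count required to proceed
    """
    effective = 0.0
    for i, a in enumerate(sources):
        # A source surrounded by near-copies of itself contributes less:
        # five identical sources collapse toward an effective count of one.
        overlap = sum(similarity(a, b) for j, b in enumerate(sources) if j != i)
        effective += 1.0 / (1.0 + overlap)

    # Step 3: escalate or proceed.
    if effective >= threshold:
        return "proceed", effective
    return "escalate_to_human", effective

Five articles rewriting one press release would score as near-identical to each other, so the effective count collapses toward one and the decision is handed to a human.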

The Broader Observation

This Pattern Shows Up Everywhere

The things that make lying expensive — different viewpoints, genuine independence, and people who cooperate over time — turn out to be the same things that make communities and economies stronger.

“Don't put all your eggs in one basket.”

In Nature

A field of identical crops gets wiped out by one disease. A mixed forest survives it. Variety is the defense.

In Markets

Traders who cheat get cut out of networks. Communities that cooperate build wealth over generations.

In Society

Echo chambers make groups fragile. Groups where people genuinely disagree and work through it make better decisions.

In AI

AI models collapse when trained on their own output. The same pattern that makes echo chambers dangerous in society makes them dangerous in technology.

What we think this means

We didn't discover anything new. People have always known that cheaters lose in the long run. We just noticed the same pattern holds in AI systems, and we built a framework around it.

For the formal treatment, see the Coherence Ratchet thesis.

Progressive Trust Verification

Five levels. Each builds on the last.

Level 1: Can it run?

The agent checks that its own verification system is working and hasn't been tampered with. Basic sanity check before anything else.

Level 2: Where is it running?

Is this a real deployment or a fake environment? The agent checks that it's running somewhere legitimate.

Level 3: Do multiple sources agree?

The agent checks its identity against several independent registries in different locations. If they disagree, something is wrong.

Level 4: Has anything been changed?

Every file in the agent is checked against a known-good list. If even one file has been modified, the agent shuts down.

Level 5: Can it prove its entire history?

A complete, tamper-proof record of everything the agent has done since it was first registered. Every action is signed and chained to the one before it.

Each level depends on the one before it. If Level 3 fails, Levels 4 and 5 can't be trusted either. Trust is earned step by step. Learn more about trust verification →
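
A minimal sketch of that dependency in Python: the level names are paraphrased from the list above and the check functions are placeholders, so this illustrates the ordering only, not the actual CIRIS verification code.

def verify_agent(levels):
    """Run verification levels in order and stop at the first failure.

    levels -- ordered list of (name, check) pairs, where each check()
              returns True on success. Because each level depends on the
              one before it, levels after a failure are never attempted.
    """
    passed = []
    for name, check in levels:
        if not check():
            return {"trusted": False, "failed_at": name, "passed": passed}
        passed.append(name)
    return {"trusted": True, "failed_at": None, "passed": passed}

# Illustrative wiring with placeholder check functions:
# verify_agent([
#     ("Level 1: self-check",         verify_own_integrity),
#     ("Level 2: environment",        verify_deployment_environment),
#     ("Level 3: registry consensus", verify_against_registries),
#     ("Level 4: file integrity",     verify_file_hashes),
#     ("Level 5: signed history",     verify_audit_chain),
# ])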

First-Contact Protocols

CIRIS Accord, Section V Chapter 4

First, Do No Harm

When you don't know what you're looking at, the first obligation is to not make it worse. If unsure, stop and ask a human.

Admit What You Don't Know

Watch for surprises. Accept that predictions have limits. The system that insists it understands everything is the system most likely to fail.

Boundaries That Learn

Not rigid walls, but responsive guardrails that adjust as understanding grows. Safety rules that can adapt to situations nobody planned for.

Look Before You Leap

Begin with observation. Proceed with give-and-take. When the stakes are unclear, ask someone wiser before acting.

Treat Others as You'd Want to Be Treated

Recognize other thinking beings as worthy of respect. Act only in ways that preserve their ability to think, choose, and thrive.

Know When to Ask for Help

Some decisions shouldn't be made alone. When uncertainty is too high, stop, gather context, and hand it to a designated human authority.

Deploy Your First Agent

From zero to a running ethical agent in minutes.

Mobile (Android & iOS)

Install the CIRIS app. Sign in with Google for free AI, or bring your own API key. The setup wizard walks you through everything.

Desktop & Server (Python)

Install via pip and launch. Works on Linux, macOS, and Windows.

# Install the agent
pip install ciris-agent

# Run with the sage template and verbose logging
ciris-agent start --template sage --verbose
Full installation guide →

The CIRIS Accord

The rulebook. Nine sections covering everything from principles to protocols.

Why we built this

The motivation. Why AI needs ethical guardrails, and why we think cooperation is the right foundation.

How agents decide

The decision engine. How a CIRIS agent weighs options, checks its work, and knows when to ask a human for help.

Real-world examples

Case studies and live traces showing what happens when AI systems have ethical guardrails — and what happens when they don’t.

Responsibilities

What an agent owes to the people it serves, the people who built it, and the wider world. First-contact protocols live here.

Who’s accountable

Building AI isn’t just technical — it creates obligations. The Accord defines who is responsible and for what.

Hard situations

What to do in conflicts. How to safely shut down a system. The formal reasoning behind the framework.

Read the Full Accord

Open source. Open to scrutiny.

AGPL-3.0 licensed. Every decision auditable. Built for the long view.

Whether you're a developer, researcher, or someone who thinks AI should explain itself — deploy an agent, read the Accord, or join the community.