No evidence, no emission.
AI doesn't show its work. You act on it anyway.
Kenshiki Labs is a runtime AI governance control plane that binds identity, scopes evidence, verifies claims, and gates what can leave before anyone acts on an AI answer.
Use it with GPT, Claude, Kadai, or other OpenAI-compatible models to keep retrieval, prompting, verification, and release decisions inside a governed evidence boundary.
Operational Reality
The failure modes nobody planned for.
AI doesn't fail the way software usually fails. There's no stack trace, no crash report, no red banner. It fails by sounding exactly right. The same model that drafts a useful summary can fabricate case law, hallucinate intelligence, and leak sensitive data — and nothing in the output tells you which answer is which.
Curated from the AI Incident Database and Kenshiki Labs operational baselines for high-stakes systems.
1,425+
Documented AI failures in the AI Incident Database
36%
Impacting vulnerable populations
Zero
Margin for error in the decisions that matter
Why this doesn't get better with bigger models
More parameters don't fix the problem.
Here's the counterintuitive part. The better a model gets, the worse the problem becomes. A smarter model doesn't make fewer mistakes you can catch — it makes mistakes you can't.
Scale gives you fluency, range, and sophistication. Not ground truth. The answers get more confident. The reasoning gets tighter. And the errors disappear into prose so clean you stop questioning them.
Kenshiki Labs starts from a different assumption: our Kadai model is a synthesizer, not an authority. Consequential claims should be backed by governed evidence before teams rely on them.
You don't have to take us at our word.
Chat with Kadai
How it runs
Three ways to run it.
Kenshiki Labs can wrap the AI you're already using or provide shared Kadai models. What changes is how much control, telemetry, and audit depth you want around the answer.
Start with shared Kadai or bring your own model — GPT, Claude, whatever you're running. Kenshiki Labs pulls in real sources, constrains what gets said, and evaluates the answer against evidence before it reaches operators.
See deployment details →
Workshop /01
Same system, your infrastructure. Data stays inside the deployment boundary. Answers are still evaluated against governed evidence before teams act on them.
See deployment details →
Refinery /02
For environments that can't connect to anything outside. Model, data, and evaluation all run inside the boundary, with local records and attestation.
See deployment details →
Clean Room /03
Guided entry
Need a faster starting point?
Pick the clearest path for evaluation, architecture, or governance without turning the homepage into a sitemap.
Who this is for
For people who can't afford to be wrong.
When a fluent answer can move money, shift care, expose intelligence, or trigger legal consequences — confidence without evidence is a liability.

Cross-Industry Obligations
Kenshiki Labs helps any team facing privacy, security, recordkeeping, due-diligence, insurer, or regulator scrutiny turn broad obligations into evidence-backed runtime controls, from Workshop through Clean Room.

Defense & Intelligence
Kenshiki Labs provides evidence-backed briefings with inspectable sourcing before intelligence products leave the system.

Government & Public Sector
Kenshiki Labs helps public-sector teams show what evidence supported an output before policy, oversight, or casework moves forward.

Healthcare & Life Sciences
Kenshiki Labs bounds clinical and research outputs to governed evidence before recommendations influence care or safety decisions.

Regulated Enterprise
Kenshiki Labs gives regulated teams an audit-ready record of what evidence supported a risk, compliance, or policy output before anyone acts on it.
How Kenshiki Labs works
Kura and Kadai are the product. The rest makes them trustworthy.
Kura stores what counts as real. Kadai answers inside that boundary. The supporting system governs identity, scope, verification, and release.
The Kenshiki Labs architecture is a six-stage runtime pipeline that stores governed evidence, binds identity, constrains questions, generates answers inside a boundary, verifies each claim, and gates what can leave.
Core pair
Kura
You put source material into Kura. The system preserves provenance, structure, and SIRE-tagged retrieval boundaries (SIRE is the portable agent identity that follows every source across runtimes), so every downstream answer traces back to something real.
Kadai
Kadai is the reasoning API. It synthesizes across what Kura contains, but it is never the authority. The evidence is.
Control plane
Prompt Sanitizer
Every request enters through the Prompt Sanitizer. It authenticates the caller, binds identity and access through OpenFGA/REBAC, and establishes the evidence boundary every downstream stage inherits.
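As a rough illustration of the identity-binding step, here is a minimal relationship-tuple check in the ReBAC style OpenFGA uses. The tuple shapes, names, and the `evidence_boundary` helper are invented for this sketch; they are not the Kenshiki Labs or OpenFGA API.

```python
from dataclasses import dataclass

# Illustrative ReBAC check: access exists only if an explicit
# relation tuple (user, relation, object) is in the store.
@dataclass(frozen=True)
class RelationTuple:
    user: str
    relation: str
    obj: str

# Hypothetical authorization store: one analyst, one corpus.
STORE = {
    RelationTuple("analyst:ada", "can_query", "corpus:case-law"),
}

def check(user: str, relation: str, obj: str) -> bool:
    """True only if an explicit tuple authorizes this access."""
    return RelationTuple(user, relation, obj) in STORE

def evidence_boundary(user: str, corpora: list[str]) -> list[str]:
    """Keep only the corpora this caller may retrieve from."""
    return [c for c in corpora if check(user, "can_query", c)]
```

Every downstream stage then operates only on what `evidence_boundary` returns, which is what "establishes the evidence boundary" means in practice.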
Prompt Compiler
Before the model sees anything, the Compiler narrows the question to what can actually be answered from evidence and hands Kadai a disciplined query.
Claim Ledger
Every answer gets decomposed into individual claims. Each one is checked against the evidence so the system can distinguish what is supported, missing, or contradicted.
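A toy sketch of that decompose-and-check loop, assuming exact-match lookup as a stand-in for real entailment checking (the actual ledger logic is not public):

```python
# Illustrative only: split an answer into sentence-level claims and
# label each one against an evidence set. Exact string matching is a
# deliberate simplification of claim verification.

def decompose(answer: str) -> list[str]:
    """Break an answer into individual claims (here: sentences)."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def label(claim: str, evidence: set[str], contradicted: set[str]) -> str:
    if claim in contradicted:
        return "contradicted"
    if claim in evidence:
        return "supported"
    return "missing"

def build_ledger(answer: str, evidence: set[str],
                 contradicted: set[str]) -> dict[str, str]:
    """Map every claim in the answer to its evidence status."""
    return {c: label(c, evidence, contradicted) for c in decompose(answer)}
```

The point of the ledger is that an answer is no longer one opaque blob: each claim carries its own supported / missing / contradicted verdict.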
Boundary Gate
The Boundary Gate, also called the Kenshiki Gate, is the final checkpoint between a fluent answer and a decision-grade release. It reads the ledger result and decides what is allowed to leave.
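The gate decision can be sketched as a policy over ledger labels. The thresholds below are invented for illustration; the real release policy is not public.

```python
# Hypothetical gate policy: the ledger maps each claim to
# "supported", "missing", or "contradicted", and the gate turns
# that into a release decision.

def gate(ledger: dict[str, str]) -> str:
    labels = set(ledger.values())
    if "contradicted" in labels:
        return "blocked"      # evidence disputes the answer
    if "missing" in labels:
        return "degraded"     # release with unsupported claims flagged
    return "authorized"       # every claim traced to governed evidence
```

Authorized, degraded, or blocked: the same three outcomes the Claim Ledger records, decided from the ledger rather than from the answer's fluency.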
Frequently asked questions
What Kenshiki Labs is, how it works, and where it fits.
These are the core operating questions buyers, evaluators, and answer engines should be able to resolve from the homepage without guessing.
What is Kenshiki Labs?
Kenshiki Labs is a runtime AI governance control plane that binds identity, scopes evidence, verifies model claims against governed sources, and gates what can leave before an answer reaches operators.
How does Kenshiki Labs govern AI outputs?
Kenshiki Labs governs AI outputs by authenticating the caller, limiting retrieval to authorized evidence, compiling a governed prompt, checking each claim against that evidence, and applying a final boundary gate before release.
What is the Claim Ledger in Kenshiki Labs?
The Claim Ledger is Kenshiki Labs' tamper-evident audit trail for AI inference. It records what evidence was retrieved, which checks ran, what each claim said, and why an output was authorized, degraded, or blocked.
What is the Boundary Gate in Kenshiki Labs?
The Boundary Gate is the final emission checkpoint in Kenshiki Labs. It reads the Claim Ledger result and decides whether a response is authorized to leave, must be degraded, or should be blocked.
Does Kenshiki Labs work with GPT and Claude?
Yes. Kenshiki Labs is model-agnostic and works with Kadai plus OpenAI-compatible backends such as GPT and Claude, because the governance layer sits around the model rather than inside the weights.
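Because the governance layer sits in front of the model, wiring it up looks like pointing any OpenAI-compatible client at a different base URL. The gateway URL and payload below are hypothetical; the request is built but deliberately not sent.

```python
# Sketch: an OpenAI-compatible chat-completions request aimed at a
# governance proxy instead of the model vendor. The base URL is an
# assumed placeholder, not a real Kenshiki Labs endpoint.

def governed_request(model: str, question: str, base_url: str) -> dict:
    """Build (without sending) a chat-completions payload."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "json": {
            "model": model,  # e.g. a Kadai, GPT, or Claude deployment
            "messages": [{"role": "user", "content": question}],
        },
    }
```

Swapping `model` between backends changes nothing about the governance path: the same sanitize, compile, verify, and gate stages apply regardless of whose weights answer.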
Who is Kenshiki Labs for?
Kenshiki Labs is for teams operating in high-stakes environments such as defense, government, healthcare, and regulated enterprise workflows where unsupported AI outputs create operational, legal, or safety risk.
Next Step
Ready to see it as a product?
Start in Workshop to see a governed response, compare runtime governance environments on pricing, or inspect the architecture that enforces the contract.