Published report
OpenClaw 2026: governed vs ungoverned agent behavior in a controlled run
A controlled comparison showing what changes when the system moves from prompt-only constraints to enforceable tool-boundary control with evidence capture.
Independent research and operating notes on AI agent governance.
CAISI / Research + Operating Model
We publish independent, reproducible research on AI agent governance and write practical operating notes for teams trying to ship agentic systems without losing control.
About
CAISI publishes independent, reproducible research on AI agent governance. Every headline claim is backed by machine-generated artifacts, deterministic queries, and open methodology. The point is not to add more rhetoric to the market. The point is to make the control problem visible and measurable.
Our research pages publish artifacts and evidence. Our blog explains the operating model behind those results: repo contracts, orchestrator design, sandbox isolation, evaluation discipline, and proof of work for AI-generated change.
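To make that concrete, here is a minimal sketch of what tool-boundary control with evidence capture can look like. The allowlist, file name, and record fields are illustrative assumptions, not the instrumentation behind the published runs.

```python
import json
import time

# Assumed policy and artifact name for illustration only.
ALLOWED_TOOLS = {"read_file", "run_tests"}
EVIDENCE_LOG = "evidence.jsonl"

def append_evidence(record: dict) -> None:
    """Append one machine-generated evidence record per decision."""
    with open(EVIDENCE_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

def call_tool(name: str, args: dict, tools: dict):
    """Enforce the boundary before execution, then record what actually ran."""
    decision = "allowed" if name in ALLOWED_TOOLS else "denied"
    append_evidence({"ts": time.time(), "tool": name,
                     "args": args, "decision": decision})
    if decision == "denied":
        raise PermissionError(f"tool {name!r} is outside the enforced boundary")
    return tools[name](**args)

# Usage: an out-of-policy call fails at the boundary, not by model courtesy.
tools = {"read_file": lambda path: open(path).read()}
# call_tool("delete_repo", {"name": "x"}, tools)  # -> PermissionError, logged
```

The property that matters is that allowed and denied calls leave the same machine-generated record, so the evidence exists whether or not the agent cooperates.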
Research
Published report
OpenClaw 2026: governed vs ungoverned agent behavior in a controlled run
Published report
An 890-target publication subset showing that public AI and agent adoption is easy to detect, but approved, deployable, and well-evidenced use is much harder to prove.
Blog
The CAISI blog is organized into three collections. The operating-model series explains the general framework. The OpenClaw series extracts lessons from a single controlled runtime case study. The sprawl series translates a public measurement report into governance lessons for security and platform leaders.
Series one
A 10-part framework on repo contracts, orchestration, isolation, evaluation, proof, and maturity.
Series two
A 4-part case-study collection on stop behavior, discovery limits, boundary enforcement, and scope discipline.
Series three
A 4-part collection on approval opacity, evidence posture, deployability, and how to read public AI adoption data without overclaiming.
Featured post
The orchestrator model for turning work items into validated PRs with audit trails and human review states.
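As one possible shape of that model, the review flow reduces to a small state machine in which illegal transitions fail loudly. The state names and transition table below are hypothetical; the post defines the actual model.

```python
from enum import Enum, auto

class PRState(Enum):
    # Hypothetical states for illustration.
    QUEUED = auto()        # work item accepted by the orchestrator
    GENERATED = auto()     # agent produced a candidate change
    VALIDATED = auto()     # automated checks passed
    HUMAN_REVIEW = auto()  # waiting on a named human reviewer
    MERGED = auto()
    REJECTED = auto()

# Legal transitions only; anything else is a bug, not a judgment call.
TRANSITIONS = {
    PRState.QUEUED: {PRState.GENERATED},
    PRState.GENERATED: {PRState.VALIDATED, PRState.REJECTED},
    PRState.VALIDATED: {PRState.HUMAN_REVIEW},
    PRState.HUMAN_REVIEW: {PRState.MERGED, PRState.REJECTED},
}

audit_trail: list[tuple[str, str]] = []  # (from, to) pairs, the cheap audit log

def advance(state: PRState, target: PRState) -> PRState:
    """Move a work item forward, recording every transition."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    audit_trail.append((state.name, target.name))
    return target
```

Keeping the transition table explicit makes the audit trail cheap: every advance is a recorded event with a before and after state, and human review is a state the item must pass through, not a convention.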
Featured sprawl post
Why a public repo that declares agent frameworks is not the same as one that exposes a governed, deployable agent.
Featured case-study post
The clearest lesson from the OpenClaw run: stop has to be a runtime state, not a polite request.
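A minimal sketch of that distinction, with names of our own choosing rather than the run's: stop becomes a flag the runtime checks before every side effect, so halting never depends on the model choosing to comply.

```python
import threading

# "Stop" as runtime state: set once by an operator or watchdog,
# checked by the runtime at every tool boundary.
stop_requested = threading.Event()

class StoppedError(RuntimeError):
    pass

def guarded_call(tool, *args, **kwargs):
    """Refuse any further side effect once stop is set, regardless of model intent."""
    if stop_requested.is_set():
        raise StoppedError("run halted: stop is a state, not a request")
    return tool(*args, **kwargs)

stop_requested.set()
# guarded_call(print, "still running?")  # -> StoppedError
```

A prompt-level "please stop" lives inside the model's decision loop; this check lives outside it.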
Team
David Ahmann, Head of Cloud, Data and AI Platforms, CDW Canada
Devan Shah, Chief Software and Data Security Architect, IBM
Talgat Ryshmanov, Principal DevSecOps Consultant, Adaptavist
Contact
For research questions, publication inquiries, or collaboration around reproducible AI governance work: research@caisi.dev