
Clyra AI Safety Initiative

We publish independent, reproducible research on AI agent governance and write practical operating notes for teams trying to ship agentic systems without losing control.

Control before execution · Deterministic validation · Proof over narrative

About

Research that can be checked

CAISI publishes independent, reproducible research on AI agent governance. Every headline claim is backed by machine-generated artifacts, deterministic queries, and open methodology. The point is not to add more rhetoric to the market. The point is to make the control problem visible and measurable.

Our research pages present the artifacts and evidence. Our blog explains the operating model behind those results: repo contracts, orchestrator design, sandbox isolation, evaluation discipline, and proof of work for AI-generated change.

Research

Published and in-progress studies

Blog

Blog collections

The CAISI blog is organized into distinct collections. The operating-model series explains the general framework. The OpenClaw series extracts lessons from one controlled runtime case study. The sprawl series translates a public measurement report into governance lessons for security and platform leaders.

Team

CAISI contributors

David Ahmann, Head of Cloud, Data and AI Platforms, CDW Canada (LinkedIn)

Devan Shah, Chief Software and Data Security Architect, IBM (LinkedIn)

Talgat Ryshmanov, Principal DevSecOps Consultant, Adaptavist (LinkedIn)

Contact

Get in touch

For research questions, publication inquiries, or collaboration on reproducible AI governance work: research@caisi.dev