Insights

Analysis and thought leadership on the governance gap, behavioral failure modes, and the emerging standards landscape.

  • The Governance Gap

    Why observability, security, and compliance are necessary but insufficient for governing AI agents. Defines the structural gap these existing operational layers leave open and explains why filling it requires a dedicated governance methodology.

  • 8 Ways AI Agents Fail

    Eight documented categories of AI agent behavioral failure that are invisible to standard monitoring — extracted from forensic analysis of real incidents in production multi-agent operations.

  • Why Observability Is Not Enough

    Observability answers what happened — it cannot answer whether it should have happened. This article argues that the gap between telemetry and governance requires a dedicated governance layer, not better monitoring.

  • AI Agent Governance and the SOC 2 Precedent: Lessons for an Emerging Control Layer

    An examination of how the SOC 2 precedent applies to AI agent governance — how operational control frameworks emerge from practitioner need, independent validation, and eventual standardization.

  • DeepMind Delegation Paper Analysis

    Analysis of DeepMind's 'Intelligent AI Delegation' paper (arXiv:2602.11865) and its independent theoretical alignment with the operational governance architecture at aiagentgovernance.org.

  • Largely Untested

    What happens when proposed AI agent governance interventions are tested in continuous production operations. Operational evidence from the IAPS field guide's five intervention categories — alignment, control, visibility, security, and societal integration.

  • Matched Pair: When the Governance Framework Caught Its Own Builder

    A complete audit trail showing plan vs. execution across a governed AI agent workstream — including the incident where an agent deviated, was caught, and was corrected in real time.