Bounded AI Autonomy
New Leanpub book
Secure enterprise AI agents before autonomy becomes a liability.
Bounded AI Autonomy is for teams putting AI agents into serious environments: regulated workflows, customer data, internal tools, financial-services controls, and systems where "the model did it" is not an acceptable answer.
My position is simple: autonomy without a boundary is not innovation; it is unmanaged delegation. The book shows how to make agentic systems defensible with AgentSecOps, MCP governance, identity, RAG controls, evals, policy, audit trails, and operating evidence.

Who it is for
This book is written for engineering leaders, security architects, AI platform teams, auditors, risk owners, and product teams who need agents to do useful work without giving them a blank cheque.
It is especially relevant if your agents can use tools, call APIs, retrieve internal knowledge, touch customer workflows, open tickets, write code, change data, or influence regulated decisions.

What the book covers
- why agentic AI changes the risk model from "answer quality" to delegated authority
- the failure modes that matter in production: tool misuse, data leakage, permission escalation, prompt injection, silent workflow corruption, cost explosions, audit gaps, and model regression
- an enterprise operating model for ownership, review, escalation, and evidence
- secure agent architecture across orchestration, identity, tools, RAG, policy, logging, and human approval
- MCP and tool governance as capability design, not plugin convenience
- evals, red-team scenarios, audit trails, and release gates for agentic systems

Why now
The market is moving quickly from chat interfaces to systems that can act. That is the dangerous step. A chatbot can be wrong. An agent can be wrong with credentials, tools, memory, workflow authority, and speed.
Regulated teams cannot wait until auditors ask awkward questions. They need an operating model before the first high-trust agent quietly becomes part of production.