Bounded AI Autonomy is live

I have just published a new book on Leanpub: Bounded AI Autonomy.

The subtitle says what the book is really about: securing enterprise AI agents you can defend to regulators, auditors, and customers.

My opinion is blunt: the hard part of enterprise AI is no longer getting a model to answer. The hard part is deciding what an agent is allowed to do, what evidence it must leave behind, and who owns the outcome when the system acts inside a real workflow.

Bounded AI Autonomy book cover

The risk changed when AI started using tools

A chatbot can be wrong. That is bad enough.

An agent can be wrong while calling tools, reading internal knowledge, touching tickets, querying customer data, writing code, changing workflow state, or escalating a decision. That is a different risk model.

This is where a lot of enterprise AI programs are still too casual. They treat autonomy as a product feature and security as a review stage. I think that order is backwards. For agents, the security model is part of the product.

If the boundary is vague, the agent has already inherited too much trust.
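To make "explicit boundary" concrete, here is a minimal sketch of a deny-by-default tool gate, assuming a hypothetical agent runtime where every tool call passes through a policy check before execution. All names here are my own illustration, not an API from the book:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Explicit boundary: the tools an agent may call, with per-session budgets."""
    allowed_tools: dict                          # tool name -> max calls per session
    calls_made: dict = field(default_factory=dict)

    def authorize(self, tool: str) -> bool:
        """Deny by default; allow only named tools that are under budget."""
        if tool not in self.allowed_tools:
            return False
        used = self.calls_made.get(tool, 0)
        if used >= self.allowed_tools[tool]:
            return False
        self.calls_made[tool] = used + 1
        return True

# Hypothetical support agent: may read tickets freely, post at most two comments,
# and everything else is simply not in its world.
policy = AgentPolicy(allowed_tools={"read_ticket": 10, "post_comment": 2})

print(policy.authorize("read_ticket"))    # True: in the allowlist, under budget
print(policy.authorize("delete_ticket"))  # False: never granted
```

The point of the sketch is the default: anything not explicitly granted is denied, so widening autonomy is a deliberate policy change rather than an accident of prompt wording.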

What the book covers

The book is practical. It covers AgentSecOps, MCP governance, RAG controls, identity, evals, policy, approval flows, audit trails, and the operating model around production agents.

Some of the topics inside:

  • why agentic AI changes the risk model from answer quality to delegated authority
  • tool misuse, prompt injection, permission escalation, data leakage, cost explosions, audit gaps, and silent workflow corruption
  • how to design agent boundaries before you widen autonomy
  • how MCP changes the capability surface of an AI system
  • how to build evidence that helps engineering, security, risk, and audit have the same conversation
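On the evidence point: one common pattern is an append-only audit record for every attempted agent action, whether it was allowed or denied. A minimal sketch, with field names that are my own illustration rather than a schema from the book:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(agent_id: str, tool: str, decision: str, reason: str) -> dict:
    """Build one audit record for a single attempted agent action."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "decision": decision,   # "allowed" or "denied"
        "reason": reason,
    }
    # A content hash over the record lets auditors detect later tampering.
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

record = audit_event("support-agent-1", "post_comment", "denied", "not in allowlist")
print(json.dumps(record, indent=2))
```

A record like this is the kind of artifact engineering, security, risk, and audit can all read in the same way: what the agent tried, what the policy decided, and why.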

The book is written for teams that want useful AI agents without pretending that “human in the loop” magically fixes every bad boundary.

Who should read it

If you are building AI agents in a regulated company, financial-services environment, enterprise platform team, security function, or internal automation group, this is for you.

It is also for technical leaders who are being asked the uncomfortable questions:

  • What exactly can the agent do?
  • Which tools can it call?
  • What data can it see?
  • How do we know it stayed inside policy?
  • What happens when the model, prompt, tool, or retrieved context changes?
  • What evidence would we show an auditor or customer?

Those questions are not blockers. They are the work.

Buy the book

You can get the book here:

Buy Bounded AI Autonomy on Leanpub

I will keep expanding it with practical patterns, examples, and operating templates as the field moves. But the central argument will not change: autonomy only becomes enterprise-ready when it is bounded, observable, and accountable.