Thomas Talks AI
I write about Claude Code, agentic coding, AI coding agents, evals, flight recorders, and the engineering work that starts after the demo works.
If you are trying to move AI code generation from a clever local experiment into a team workflow, this site is about the part that decides whether that move survives: permissions, review, observability, rollback, and production discipline.

New LeanPub book
Claude Code: Building Production Agents That Actually Scale
The practitioner's guide to Claude Code in production: agent loops, tools, hooks, MCP, permissions, evals, observability, cost engineering, and human review for systems that need to hold up under real pressure.
Written for senior engineers, technical leads, and architects moving beyond local experiments into production agent workflows.
Start here
- Agentic coding in production: the operating model around AI coding agents.
- AI coding agents: Claude Code, software agents, permissions, evals, and review loops.
- Claude Code production checklist: a practical checklist before agents touch serious repos.
- Claude Code book: the full field guide for production agents.
What I am trying to answer
Claude Code and other AI coding agents are already useful. The harder question is what happens when they meet real repositories, entrenched review habits, permission boundaries, test suites, and production risk.
The strongest posts so far:
- Claude Code is not the product. The production loop is.
- Claude Code agents need a flight recorder
- Claude Code permissions: the production mistake that bites later
- Claude Code evals should start with bad runs