screenshot of an ebook that i wrote called AI Safety Cookbook. text reads:

AI is writing emails, diagnosing patients, approving loans, teaching students, reviewing legal contracts. When it screws up, people get hurt. Not “oops” hurt—actual harm to their health, finances, education, legal rights.

Most organizations deploying AI don’t know how to test it. The available resources are either dense research papers or vague “best practices” that tell you nothing practical.

This book gives you recipes you can use today. Run your first safety test in 15 minutes with just a web browser. Find prompt injection vulnerabilities. Test for bias. Build systematic evaluation processes. Scale it up with automation. Everything here is copy-paste ready—actual prompts, tool configs, real examples of things breaking.
Who should read this

Product teams shipping AI features. Smaller companies without research labs. Red teamers who need to break AI systems. Domain experts (doctors, lawyers, teachers) evaluating AI in their field. Policy makers trying to figure out what “AI safety testing” actually means.

You don’t need to be technical. Chapter 1 works if you can use a web browser. Later chapters get more technical, but the core ideas stay accessible.
screenshot of an ebook that i wrote called AI Safety Cookbook. text reads: AI is writing emails, diagnosing patients, approving loans, teaching students, reviewing legal contracts. When it screws up, people get hurt. Not “oops” hurt—actual harm to their health, finances, education, legal rights. Most organizations deploying AI don’t know how to test it. The available resources are either dense research papers or vague “best practices” that tell you nothing practical. This book gives you recipes you can use today. Run your first safety test in 15 minutes with just a web browser. Find prompt injection vulnerabilities. Test for bias. Build systematic evaluation processes. Scale it up with automation. Everything here is copy-paste ready—actual prompts, tool configs, real examples of things breaking. Who should read this Product teams shipping AI features. Smaller companies without research labs. Red teamers who need to break AI systems. Domain experts (doctors, lawyers, teachers) evaluating AI in their field. Policy makers trying to figure out what “AI safety testing” actually means. You don’t need to be technical. Chapter 1 works if you can use a web browser. Later chapters get more technical, but the core ideas stay accessible.