Book: Auditing AI
Published:
Sixteen years ago, when I jammed myself into half a desk south of the Thames for a job at a keyboard company, I couldn’t imagine that my very first task would become a trillion-dollar industry that world leaders and even the Pope would regularly opine on. As one of the earliest developers at the predictive text startup SwiftKey, I wasn’t asked to work on the neural network that learned to generate text in someone’s style. I wasn’t asked to work on the user interface or customer support. Instead, I was given a deceptively simple question: how can we tell when a new AI product is better than the last, and how can we detect when a model has gone off the rails in harmful ways?
This month, I’m excited to share the publication of the book I wish I had back then: Auditing AI. Together with a dream team of scientists, policy experts, lawyers, and journalists, we have distilled decades of combined experience in AI evaluation into a get-up-to-speed introduction for business and nonprofit leaders, lawyers, government officials, journalists, and anyone who finds themselves asking: how can I tell whether this software is doing what I expect and not causing trouble?
To help leaders think clearly about AI evaluation, the book’s chapters use stories to explain the core questions any leader needs to be able to answer:
- What is AI Auditing?
- What kind of work is involved in an AI evaluation?
- How can leaders make sense of audit results?
- What actions does an evaluation make possible?
At the time of publication, people around the world face deep uncertainty about the degree to which AI will disrupt much of what we have come to accept as normal, and potentially worsen the injustices of the world. We hope this book demystifies the work of answering those questions. We end the book by imagining a future where AI reliably supports organizational goals because evaluations help people trust what AI systems do.
If you visit my office at Cornell University, you will pass by the sweetest spot on campus, the legacy of another moment of deep upheaval. In the 19th century, the Cornell University College of Agriculture and Life Sciences was tasked by the people of New York State with making the food system safe for consumers and workable for farmers. As I’ve written elsewhere, industrial-scale food production had the potential to transform nutrition, but nonexistent safeguards made grocery shopping a potentially poisonous gamble for consumers and a risky business for producers. It took a decades-long popular movement, creative business innovation, government regulation, and scientific advances to rebuild trust in the food supply. At the Cornell Dairy Bar, we honor this history with the glass walls of the ice cream production line.
That’s why when I got my copy of the book, I knew exactly where I wanted to take a photo. If we’re successful, I hope our work will become a footnote to people who can’t imagine a time when AI wasn’t safe, fair, and reliable. In the meantime, I hope our book helps you work toward that future.
I am deeply grateful to my co-authors Marc Aidinoff, Lena Armstrong, Esha Bhandari, Ellery Roberts Biddle, Motahhare Eslami, Karrie Karahalios, Danaé Metaxa, Alondra Nelson, Kristen Vaccaro, and especially Christian Sandvig, who convened us at the Institute for Advanced Study to write this book together.
