If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
This book got me incrementally closer to understanding some of the fundamentals of AI and the controversial risks it supposedly poses to humanity. I could feel the authors' struggle to explain the mechanisms of AI in terms simple enough for someone like me to understand. The approach was borderline insulting, but again, I understand the challenge of the task. Their writing style—particularly some of their sentence structures—would have benefited from a stronger editor. My primary takeaway from the book: AI "builders" (or "growers," as they would say) can steer an AI, but they can't fully control where it goes. And they can't fully control where it goes because the training data is so enormous that humans can neither process it all nor understand it all. It's a bit like saying, from what we know, we can predict our training will lead the AI here—but "here" is the general target, not the bulls-eye. The authors attempted to lay out a full scenario in which AI becomes superintelligent and eventually kills us off, but it wasn't very enlightening. Each step in the process cried out for more clarity—clarity the authors couldn't provide, because the subject is technically complicated and the book would have become unwieldy.

