by Nick Bostrom
Rating: ★★★★
Bostrom presents a competent synthesis of the state of the art in AI risk theory as of 2014. He covers all the major bases, from the danger that unfriendly AI poses to humanity, to Hanson's em-based multipolar scenarios, to Yudkowsky's CEV proposal. Anyone who has followed the field is unlikely to find anything conceptually new here, but some of the detail and analysis is still worth reading, particularly because Bostrom's presentation is skillful and clear where the original authors can be dense or confusing, and he connects several threads that would otherwise require much broader reading. In short, it is an excellent introductory text.
The book also positions itself as a form of policy briefing, making the case for the topic in language suited to government advisors and pragmatic business owners. Bostrom takes the opportunity to do some outreach about the risk and the high-level strategy for responding to it. I am not sure how well he accomplishes this. It is certainly easy to follow him if you have already intuited the dangers of an unfriendly superintelligence, but I think Bostrom's pragmatic decision to avoid graphically illustrating specific failure cases backfired somewhat. Some high-ranked reviews on Goodreads suggest that readers did not properly understand the problems with any boxing attempt, or came away thinking that we should 'nurture' UFAIs rather than control them; of course, Bostrom cannot be faulted for everyone who failed to read the book closely enough. He makes the argument that a fast takeoff is probable, but perhaps not strongly enough to convince the dubious. There is a sense in which the book is preaching to the choir, or to those who are immediately about to join it.