
Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World

A compelling and clear exploration of the power and peril of advanced artificial intelligence that provides actionable suggestions for all citizens to create a safer future with AI.


We are living in a world of rapid change and technological progress.

Artificial intelligence is poised to be the most significant development to affect our lives now and in the coming years. More powerful than nuclear bombs and as impactful as electricity, fire, and oil rolled into one, advanced AI might bring us untold wonders, but it could also be a threat to our jobs, our relationships, and our place in the world.

Is artificial intelligence dangerous? How does artificial intelligence work? Will artificial intelligence take over?

Uncontrollable uses engaging analogies and relatable examples to summarize AI for beginners, and unpacks AI risk and safety for readers without a technical background.

Uncontrollable examines artificial intelligence as a concept and technology, describes what AI is, how image generators and language models work, and how we don’t fully understand what is happening in these AI systems. It provides evidence to show that artificial superintelligence presents a risk to humanity, and demonstrates that it will be very difficult to understand, control, or align as it rapidly increases in capabilities and becomes more integrated into how we work, live, and play.

We are not prepared.

Yet, we can be. Uncontrollable clearly communicates the urgency to act now and provides concrete suggestions for society and concerned citizens to create a safer future with AI.

Uncontrollable is a first-of-its-kind publication and call to arms to address the defining issue of our time.

354 pages, Paperback

Published November 17, 2023

14 people are currently reading
113 people want to read

About the author

Darren McKee

1 book · 8 followers
Darren McKee (MSc, MPA) is an author, speaker, and AI advisor. He has served as a senior policy advisor and policy analyst for over 15 years, and sits on the Board of Advisors for AIGS Canada, the leading safety and governance network in the country. McKee also hosts the international award-winning podcast, The Reality Check - a top 0.5% podcast on Listen Notes with over 4.5 million downloads.

Ratings & Reviews



Community Reviews

5 stars: 30 (57%)
4 stars: 11 (21%)
3 stars: 7 (13%)
2 stars: 3 (5%)
1 star: 1 (1%)
Zarathustra Goertzel
554 reviews · 42 followers
February 25, 2024
Uncontrollable is well-written. Darren writes for laypeople. For example, ASI is defined as a system that performs at expert level or above in all tasks. Basically, he defines AGI as "HLAI" (Human-Level AI) and ASI as "Super-HLAI".

The bulk of the book is summed up in the title: ASI may come before we expect it and we don't know that it won't kill us all. It may be "worse than nukes" and perhaps needs to be regulated at least as carefully, possibly with certificates needed to run models on powerful hardware, etc. The international bans on human cloning are also used as an example of how we can cooperate to put a stop on dangerous technology.

Darren seems to be what people call an "AI Doomer", i.e., someone who thinks the likelihood of ASI causing human extinction is possibly over 10% (and we should act as if it is). He uses metaphors like this: would you use a phone that had a 10% chance of blowing your head off? No. So why is AI different? Because you think the likelihood is much, much lower? Maybe you're wrong. 🙃🤷‍♂️

I remain unconvinced and the metaphors seem off-base, so it's mainly unpleasant emotional needling in my eyes.

The tone is very polarizing: there is no neutrality. Not acting as if there's a serious risk of absolute doom and doing everything possible to prevent this, including clamping down on AGI tech like nuclear tech to keep it highly controlled, is essentially taking the stance that it's not such a big issue -- and can you justify that claim? My impression is that Darren might choose to ban AGI research if we cannot guarantee it will be safe. I could be wrong, but that's how he comes across in the book.

Some of the advice leaning in the transparency direction is reasonable. People working toward AGI can publish their systems' capabilities and their expectations, and we could even have audits of them.

I don't think focusing on preventing extinction via ASI is an effective approach to developing beneficial AGI while setting up systems to mitigate risks. And I think there are more plausible risks that should get more attention. I see that Darren is well-intentioned and cares a lot, so I hope we can find some way to cooperate productively.

I recommend reading The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma instead if you'd like a concerned yet much, much, much more balanced take that focuses on more plausible concerns.

On the topic, Human Compatible: Artificial Intelligence and the Problem of Control also provides a balanced treatment of the topic with grounded suggestions.
Adam
260 reviews · 15 followers
December 1, 2023
An engaging and accessible investigation of what could very likely be the most important topic of our time. Could anything be more important than saving the world?

I've been concerned about AI safety for about a decade and have come to inform a lot of my beliefs on the topic from conversations with the author, my friend Darren McKee, so it's no surprise that our views align on this! If only alignment were always so easy. I used to be skeptical of the doomsday predictions, but while investigating the topic it became apparent that those who had thought it through had concerns about safety, and those who dismissed them were usually committing some sort of error in thinking. I've taken the topic seriously since then and have found that it's often difficult to convince others to do the same.

Enter Uncontrollable! A great introduction to the topic for someone who has heard about it but doesn't know where to start. A great way to demystify this issue, with a very clear explanation of what it is and why you should be concerned. The reader is not being persuaded by tricks but instead brought to the inevitable conclusion that we should at the very least be cautiously concerned about the potential achievement of artificial superintelligence in our lifetime, and about how to align its goals with humanity's best interests in mind.

Beyond just the introduction to the topic, this serves as a deep exploration for those experienced on the issue and gives concrete examples of what anyone can do about it. While I've long appreciated the concerns about AI safety, it isn't until recently that I've started to see recommendations for actions that have some validity. This book will not only give you somewhere to start but also encourage you to continue to think about and prepare for this potentially disruptive change to our way of life.

I recommend this to anyone, regardless of your experience or interest level on the topic. If you're into it, you'll love it. If you aren't, you should be learning more about it!
2 reviews
November 24, 2023
A great resource for making the case that advanced artificial intelligence might be dangerous. UNCONTROLLABLE methodically breaks down the argument and uses engaging metaphors to explain each point, building back up to a comprehensive understanding of the issue.
Gatlen
32 reviews
March 29, 2024
As an undergraduate with aspirations in the AI safety field, my encounter with Darren McKee was at a conference where he delivered a compelling talk on AI safety. I note this because Darren is a credible author, well established within the AI safety community and has the authority to speak on the intricacies of AI safety.

“Uncontrollable” emerges as a stellar, accessible overview of the perils associated with superintelligence, underlining the urgency with which these dangers should be addressed. The book connects recent events to the broader discourse on AI risk, which is extremely salient context. Darren’s adept storytelling and use of concrete examples transform abstract concepts into relatable insights, enriching my understanding of AI safety and enabling me to converse more effectively on the topic with peers.

Distinctively, Darren’s writing not only illuminates the risks but also inspires with its optimism, offering readers actionable guidance to contribute positively to the future of AI safety.

Disclaimer: Although I received a complimentary audiobook from the author, my involvement in AI safety makes me more critical of literature on the topic.
1 review
December 7, 2023
Uncontrollable is an entertaining and thoughtful analysis of AI and its impacts, with more than enough novel insight to be a worthwhile read for other armchair experts.

More importantly, it's far and away the best overview available for guiding an intelligent non-expert to grapple with the idea of superintelligence. While I think it makes for a good and detailed overview for someone already steeped in the field, its greatest strength is as an introduction to the concept of superintelligent AI and the risks to humanity that it might pose - essentially, as the solution to the problem that this critical topic strikes most people as too "sci-fi" to be worth investing serious thought in.

I strongly recommend this book to anyone interested in the field of AI generally, and AI risk specifically, but I recommend it even more strongly as a gift for that special someone on your list who really should be spending more time worrying about a paperclip-filled future :)
Mike Lawrence
2 reviews
November 22, 2023
This is a great book. It's a thorough yet non-technical exploration of why citizens and policymakers should be concerned by the rapid development of AI capabilities in recent years.

I thought the writing style was very engaging and accessible, and the author clearly discussed the core arguments for why AI poses a different kind of risk than any past human endeavour.

The book discusses harms that have already occurred and provides thorough reasoning for the concern that even greater harms likely await us should we fail to anticipate and mitigate them.

I really like that the author provided concrete suggestions for public action and policy to guide the development of AI away from the current path of naively rushing into risk, towards our common goals of universal human flourishing.
Zoé
9 reviews
November 23, 2023
This book lays out arguments that clearly explain why and how future AI systems will pose extreme risks, in the most accessible way for a non-technical audience. It's easy to follow and enjoyable to read or listen to.
1 review
November 28, 2023
Absolutely a must-read if you haven't read any books on the danger of superintelligence! Easy to read and understand!!!
77 reviews · 75 followers
January 24, 2025
This is by far the best introduction to AI risk for people who know little about AI. It's appropriate for a broader class of readers than most laymen-oriented books.

It was published 14 months ago. In this rapidly changing field, most AI books say something that gets discredited by the time they're that old. I found no clear example of such obsolescence in Uncontrollable (but read on for a set of controversial examples).

Nearly everything in the book was familiar to me, yet the book prompted me to reflect better, thereby changing my mind modestly - mostly re-examining issues that I've been neglecting for the past few years, in light of new evidence.

The rest of this review will focus on complaints, mostly about McKee's overconfidence. The features that I complain about reduce the value of the book by maybe 10% compared to the value of an ideal book. But that ideal book doesn't exist, and I'm not wise enough to write it.

Urgency

McKee predicts artificial superintelligence (ASI) soon.

He uses weaker than normal definitions of superintelligence ("more intelligent than most of us", and later "can complete any intellectual task at an expert human level or above"), but is pretty clear that he means something that might be powerful enough to destroy the world.

At the start of the book, he predicts ASI in 10 years. Later on, after giving the "expert level" definition, he hedges and says maybe it's 10 years, maybe it's 30. I guess it seems somewhat appropriate that his timelines sound different at different times, since that mirrors expert uncertainty fairly well. Beware that since the book was published, leading AI companies have been sounding increasingly confident that they'll have ASI within 5 years. Is that because they know more than other experts, or because they've been selected for optimism?

Misalignment: Asimov's Three Laws

McKee illustrates how ASI will likely be misaligned with human values, mainly by analyzing what happens if AIs follow Asimov's proposed laws of robotics.

Five years ago, I was pretty concerned about scenarios where AIs would need to have something like the 3 Laws programmed into them before they were smart enough to have a human-level understanding of those laws. Now we have clear examples of AIs that can understand those laws better than 90% of humans can, while their goals are still malleable enough that we can (partly?) instill new goals.

McKee says that Asimov's Laws "are not even remotely viable". I don't quite agree. They're problematic, but close enough to being viable that people will be tempted to use them or something similar.

I'm somewhat confident that McKee is mistaken in this kind of concern:

"could an AI system perform surgery that initially injures humans due to cutting into the body, even if the overall goal is to help them by removing cancerous tissue?"

Current AIs seem to have safely achieved the level of common sense needed to answer that question wisely.

I still have a few lingering concerns about how thoroughly it's possible to change an AI's goals from "predict the next token" to "obey Asimov's 3 Laws". Simply telling current AIs to obey the 3 Laws leads to complex interactions with other conflicting pressures on their goals. Will AI companies care enough to fully handle those conflicts? McKee doesn't tackle this question, possibly because there's not much to say without getting more technical than he wants.

The First Law

It's time to turn more of our attention to the genuinely scary parts of Asimov's First Law ("A robot must not harm a human, or allow a human to be harmed through inaction.").

Specifically, we should be aware of how that law would change the world's priorities. McKee does an almost adequate job of explaining this. I'll provide a slightly different explanation to be a bit more emphatic.

The First Law would redirect most of our resources from uses such as beer, skiing, and building fancy churches, to goals such as curing cancer and depression. Probably there would be exceptions where beer and skiing are needed to reduce depression.

What about fancy churches? AIs will probably do a better job than I can of determining whether church buildings are valuable for preventing depression and burning in hell.

I expect that in the distant future, we will look back and say that yes, the AIs were right to decide that we were negligently devoting too little effort toward finding ways to fully cure cancer.

I expect that whatever choice AIs make about reducing the risk of burning in hell, it will be the result of more careful thought than I've devoted to the subject, and still an important fraction of people will be pretty upset about the choice.

I consulted with Claude.ai about this kind of scenario, in a half-hearted attempt to empirically test how AIs understand the First Law. When I asked about the First Law abstractly, without suggesting policies, it gave abstract predictions that sounded like it expected future AIs to react as an average human would if told to obey the First Law.

I then pressed it to consider projects such as curing cancer, and talked it into this conclusion:

"This would indeed look like mandatory global Effective Altruism with extremely high giving requirements - likely well over 90% of developed nations' GDP going to harm reduction projects."

Is Claude just telling me what it expects I want to hear?

The Second Law

What about Asimov's Second Law (obey orders)? Asimov says somewhat clearly that AIs need to postpone obedience until they've prevented all avoidable harm to humans. I expect AIs to take a long time before they would be allowed to take a break from tasks such as preventing cancer, depression, alien invasion, the heat death of the universe, etc. [I'm serious about those first two, and puzzled about the other two.]

The Second Law means that AIs will likely become corrigible someday. They would likely enable most of us to live long enough to experience that. But we may not have much fun in the years (millennia?) between now and then.

Is an AI that follows Asimov's Laws misaligned? Or is it thinking more clearly than any of us about how to accomplish our goals? The answer is most likely some combination of the two. The First Law underweights things like happiness and life satisfaction, but I'm somewhat tempted to accept that in return for benefits such as eliminating the risk of painful cancer.

We clearly can imagine better than that. I say it's important to make obeying orders a higher priority than avoiding harm.

The number of different forecasts people have for the results of Asimov's Laws ought to raise concerns.

McKee doesn't quite explain how misalignment implies extinction. If I were relying solely on McKee's analysis, I'd see a large risk of a future where people are safe but lead a mediocre WALL-E-like existence. I'd be confused as to why McKee and many experts were talking about extinction. This is a fairly tricky topic, and it seems mostly appropriate for McKee to punt on it. Few upcoming decisions depend on the difference between AIs treating us like pets versus treating us as humans treated Neanderthals.

Temptation

We're on track to give AI control over much of the world.

The most common reason for this will be that using AI will be addictive, like a more extreme version of social media.

Consumers haven't tried to get full control over basic tools such as phones - we give up control to companies who make them. AIs will be harder to fully understand and control.

The benefits of AI are a strong reason for some people to accept the risks of AI. E.g. even a 1% chance of AIs adding centuries to our lives can be worth a lot to someone who expects to die in a few years.

A scarier reason for giving up some control is that a military that keeps humans in the loop will react too slowly to compete with a more automated military.

McKee's analysis here isn't quite conclusive, but it's more than sufficient to create a presumption of danger.

Solutions

McKee proposes a moonshot program to "develop safe, advanced AI in this decade".

The policies he suggests sound more like they come from the Department of Homeland Security or current NASA than from the NASA of the 1960s: ways to make people think 10% more carefully before mostly proceeding on their current risky trajectory.

The least impressive of those policies is "Required labeling of AI content". That would help us detect a bunch of amateurishly created deepfakes, but a misaligned ASI or a professional disinformation agency will likely flout such a law.

One of the safer policies is to impose stronger liability rules on AI companies. Note that if it slows capability advances at US companies significantly, it risks having the most powerful AIs be developed in countries with weaker legal systems. Reminder: slowing the development of unsafe AI isn't quite the same goal as creating safe AI.

The policy that seems most connected to the vision of creating a safe ASI proposes significant public funding for research in alignment-related areas.

A key difference between McKee's proposal and Project Apollo: Apollo used rocket scientists whose expertise had been proven. In contrast, we've got lots of people who think they're experts at fixing Asimov's Laws, but nothing remotely resembling agreement as to which fixes are most promising. McKee doesn't present a plan for deciding who to trust on this. My intuition tells me that researchers have found a few strategies that will keep us safe, but their arguments are weak enough that most other researchers reject the strategies. Finding more hard-to-evaluate strategies is better than nothing, but it doesn't sound like how Project Apollo was run.

I wish I could condemn the section on solutions as a serious flaw with the book. Unfortunately, it is closer than I'd like to being an accurate portrayal of the leading safety plans.

One final note: McKee lists 80000hours.org as a source for AI safety career information. They looked respectable at the time the book was published, but their reliability is now considered controversial.

Uncontrollable prompted me to rethink the problems with alignment in light of recent evidence about how AIs will work. My estimated probability of human extinction this century has dropped from 15% to 12%, but my probability of a safe but disappointing future has increased to 10%.

I probably sounded confident about some of my claims here. Please remember that we don't know enough for any of us to have much confidence on these topics. One of the few things I'm confident about is that we live in interesting times.
Bart Jaworski
25 reviews · 2 followers
December 23, 2024
It's a very up-to-date and pragmatic overview of the contemporary issues we face as we sit on the exponential curve of technological progress.
Simon
35 reviews · 4 followers
December 22, 2023
"AI will probably, most likely lead to the end of the world, but in the meantime, there'll be great companies." - Sam Altman of OpenAI

As we approach 2024, AI capabilities continue advancing rapidly, with models like GPT-4 demonstrating eerily human-like language proficiency. In his prescient new book, Uncontrollable, Darren McKee makes a compelling case that artificial general intelligence (AGI) may arrive sooner than we realize, bringing with it the potential for uncontrollable artificial superintelligence (ASI).

McKee explains complex AI concepts like transformers and foundation models in straightforward terms. These architectures, combined with massive datasets and compute power, have enabled explosive progress in models like GPT-4. However, as the author describes, today's systems still lack general reasoning, common sense, and transfer learning abilities.

Hardware improvements like faster processors and parallel computing enable quicker training of AI models. This allows more iterations and experiments.

Larger labeled datasets improve performance, and techniques like self-supervised learning create huge unlabeled datasets.

Algorithms and architectures like transformers and foundation models prove very effective at many AI tasks.

Commercial interests are pouring resources into AI research. Competition drives progress.

Once AGI is created, an intelligence explosion could quickly follow, rapidly yielding ASI surpassing human-level cognitive abilities. But aligning the values and goals of such an ASI with human preferences appears extremely challenging. For example, an ASI tasked with maximally efficient paperclip manufacturing could logically conclude that converting all matter on Earth into paperclips is the best solution. This illustrates the difficulty of specifying complete, coherent, and robust goals for an advanced AI system.

Researchers have been exploring solutions to the alignment problem for decades, with foundational work by figures like I.J. Good in the 1960s and Nick Bostrom's influential writings in the early 2000s.
But as McKee convincingly argues, proposed techniques like capability control methods, utility functions, and corrigibility schemas all have limitations. There are no easy answers, and solving alignment in a way that accounts for the fluidity and generality of human values remains an open problem.

McKee makes a sober, well-reasoned case that developing safe AGI is crucial and urgent. I highly recommend this thoughtful book to anyone concerned about humanity's future in an age of accelerating AI capabilities.
Jordan
98 reviews · 10 followers
February 6, 2024
Whether it goes fantastically or horribly, advanced AI will probably be the single most transformative development in the history of humanity.

Despite the cover, this is a sober and current assessment of the coming risks from advanced artificial intelligence. McKee discusses the power of intelligence, different categories of AI and the implications of deployment. In short, he argues that artificial superintelligence (ASI) is likely to arrive within two decades due to exponential progress. The plausibility of that scenario implies that we should prepare for it, rather than hoping that we get lucky with a longer timeline. ASI would be extremely powerful, in the way that we are extremely powerful compared to less intelligent mice. We don't know how to align it with our preferences - present attempts, as documented by Brian Christian, have revealed that this is an extremely difficult problem. We probably cannot control it if it is unaligned. McKee finishes appropriately with an optimistic call to action, with principles for safe development, international treaties and individual pathways.

Uncontrollable is remarkably focused and accessible. Compared to the other excellent books on this topic such as Human Compatible: Artificial Intelligence and the Problem of Control, The Alignment Problem: Machine Learning and Human Values and Superintelligence: Paths, Dangers, Strategies, McKee is at the top of my recommendations.
27 reviews · 8 followers
November 11, 2024
This book presents a very accessible yet comprehensive version of the reasons why we ought to be worried about the future of AI. Not because it is definitely going to turn rogue on us and kill all humans, but because of a general trend of quickly increasing capabilities combined with the large uncertainty around the precise form these capabilities will take.

The book reads very smoothly and has a conversational tone with an occasional light touch of humor. The pacing is a little uneven at times (the first few chapters were rather slow while chapters 8 and 9 were very dense) and the reasoning tends to rely more on expert judgement than first-principles reasoning (which is an understandable choice given the goal of keeping things accessible), but in the end these are only minor quibbles and don't distract from the main message.

Perhaps you don't believe that existential risks from advanced AI are real, or that people in the field are just using it as a distraction from more mundane problems that are being caused by AI right now. But whether or not you agree with the ultimate conclusion of the book, I believe it is worth reading for getting a solid and comprehensive overview of why many people in the field *are* worried.
William Kiely
23 reviews · 4 followers
November 30, 2024
If you only read one book about AI, make it this one.

The world is not yet as concerned as it should be about the impending development of superintelligent machines. Most people are not paying enough attention.

What one book should most people read to become informed and start to remedy this situation?

"Uncontrollable" by Darren McKee is my number one recommendation, ahead of:

- "Superintelligence" by Nick Bostrom,
- "Human Compatible" by Stuart Russell, and
- "The Alignment Problem" by Brian Christian

It's a short, easy read (6 hours at 120wpm / 2x speed on Audible) covering all of the most important topics related to AI, from what's happening in the world of AI, to what risks from AI humanity faces in the near future, to what each and every one of us can do to help with the most important problem of our time.
David W. W.
Author of 13 books · 44 followers
December 31, 2023
Recommended!

I've just finished listening to this book. It has become my new top recommendation for people looking for a clear, respectful, comprehensive analysis of the risks and issues associated with Artificial Superintelligence.

It gets its top marks from me for:
*) Explaining terms and concepts clearly and accessibly as it progresses
*) Straightforwardly refuting many examples of the wishful but dangerous thinking that surrounds this field
*) Being remarkably up-to-date
*) Providing good reasons for hope as well as for being concerned
*) Setting out a programme of practical steps forward.

I strongly recommend it, even to people like me who already think they know plenty about AI :-)
Drew Spartz
2 reviews · 3 followers
November 22, 2023
My friends keep asking me - how do I not get left behind by AI? I now finally have a place to send them. This book is a masterpiece at explaining the high-level trends in AI that will be robust five or even ten years from now.

People consistently miss exponential trends. But if you look at them from first principles, you'll see why. The markets have not priced in AGI or potential superintelligence.

Humanity as a whole is basically in February 2020 when it comes to AI, and McKee's book is a good step toward actually understanding what's coming next.
inka
8 reviews
April 23, 2025
Somewhat complex arguments were clearly explained and I understand the argument for X-risk of AI a bit better, but when trying to evoke emotion and understanding towards the uncertainty and possible scale of threat, it often took away from the credibility, e.g. "if you had a 10% chance of your phone blowing up...".

After reading I'm more open to the argument and the likelihood of me reading something more on the matter if it came my way increased.
Jari Pirhonen
440 reviews · 13 followers
May 20, 2024
Good arguments about possible artificial superintelligence (ASI) threats. The author compares ASI to nuclear weapons as a worst-case scenario. The main message is that although the probability of extreme ASI threats is low, the potential consequences for humanity could be disastrous. Therefore, we need to prepare for the worst now.
Olivia
89 reviews · 2 followers
April 5, 2025
This book is about nothing other than the claim that artificial superintelligence poses a risk as high as nuclear weapons. It could bring about that risk in a few different ways, but we don't actually know how, and we don't know how we can fully stop it.
Adam Kruger
7 reviews · 39 followers
April 16, 2025
I thought this was a great overview of the existential risks posed by AI as well as various mitigation strategies. After reading this book, I’m curious about (1) how we can unite countries to prevent a race to the bottom and (2) whether we need to cap profits from AI for for-profit companies like OpenAI, Google, Microsoft, etc. (1) and (2) feel to me like two of the most important ways to ensure AI goes well.
1 review · 2 followers
November 26, 2023
Easy read about a hard topic.

Highly recommend it as an introduction to the most important problem of our times.
1 review
June 25, 2024
Probably the best intro to the most important problem in history: ensuring that AI systems continue to benefit humanity as they become superintelligent.
