Thread by William Eden
Feb 28, 2023
#ArtificialIntelligence
My Twitter timeline is full of panicked takes about imminent AI apocalypse and certain doom.
I think this is starting to get overplayed, and so I want to make a long thread about why I'm personally not worried yet.
Get ready for a big one... 1/n
(For folks who don't know me, I've been around for a while. Before Less Wrong existed, I found Overcoming Bias around 2007. Tbf many of the core people were in SL4, the Extropian list, etc, so I've thought about this a *lot* but not as much as some others in the space) 2/n
I am composing these tweets as I go so I will likely add more, but here are some propositions:
- I think we will have another AI winter
- I think slow takeoff is much more likely
- I think there are weird social incentives at play
- I think panic isn't helpful 3/n
(Also there is a better version of this thread with links and citations and nicely presented evidence, but given the insane amount of discourse happening right now I'm opting for the quick thread I can write, not the essay I won't for a while) 4/n
AI winter? At this hour??
Perhaps I'm just terminally contrarian, since it seems like no one agrees with me, but I'm seeing some things that make me think we are in a classic bubble scenario, and lots of trends that can't clearly continue 5/n
One of the main factions right now thinks that scaling is everything, we have the basic tools, and with more compute we hit superhuman performance. If we take that as true, do we even have enough compute? 6/n
Moore's law has been looking weaker and weaker. Clock speeds paused a long time ago. Die size is hitting physical limits. Cost per compute is still falling but it lost the exponential it was on. Without some major changes (new architecture/paradigm?) this looks played out 7/n
Existing AI applications are already taking up a significant fraction of global compute power. I find it implausible that global AI efforts could scale more than another 2 orders of magnitude, and doing so would crowd out a ton of other compute. Unlikely IMO 8/n
Does another 2 OOM increase in compute with current techniques get us to AGI? Personally I am skeptical, but this is an empirical question, and we should keep an eye on how these scaling curves continue to develop in the next iteration of models 9/n
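To make "keep an eye on the scaling curves" concrete, here's a toy extrapolation in the spirit of published power-law scaling fits. Every coefficient and the baseline compute figure below are made-up placeholders for illustration, not numbers from this thread or any real model family:

```python
# Toy power-law scaling curve: loss = irreducible_term + a * compute**(-b).
# All coefficients and the baseline compute figure are hypothetical.

def projected_loss(compute_flops, irreducible=1.7, a=8.0, b=0.05):
    """Illustrative scaling-law-style loss as a function of training compute."""
    return irreducible + a * compute_flops ** (-b)

baseline = 1e24  # assumed training FLOPs for a current frontier-scale run
for factor in (1, 10, 100):  # factor=100 is the "another 2 OOM" case
    loss = projected_loss(baseline * factor)
    print(f"{factor:>4}x compute -> projected loss {loss:.3f}")
```

The point of the exercise isn't the specific numbers, just that the per-OOM returns are exactly the thing to watch as the next generation of models lands.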
In addition to physical limits on Moore's Law, I will note that the semiconductor industry is highly centralized, with ASML being the only company making cutting-edge (EUV) lithography machines, and only 1-3 companies variously on the leading edge of chip manufacture, the major one being in Taiwan... 10/n
The semi industry faces extremely high geopolitical and supply chain risk, and in the event of a great power conflict will almost certainly be one of the major targets. It's unclear to me whether further breakthroughs, or even current levels of chip production, would persist 11/n
Setting aside the hardware required, there's the question of economics. Are current models cost-effective enough to replace other methods, and will they continue to improve enough? 12/n
The All-In podcast folks estimated a ChatGPT query as being about 10x more expensive than a Google search. I've talked to analysts who carefully estimated more like 3-5x. In a business like search, even a 10% cost improvement is a killer app. A 3-5x cost increase is not in the running! 13/n
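As a rough back-of-envelope on why that multiple matters at search scale: the base per-query cost and the query volume below are assumptions picked for illustration; only the 3-5x and 10x multipliers come from the estimates above.

```python
# Back-of-envelope: what a 3-5x or 10x per-query cost multiple means at
# search volume. Base cost and query volume are assumed, not sourced.

BASE_COST_PER_QUERY = 0.01   # assumed cost of a traditional search, in USD
QUERIES_PER_DAY = 8.5e9      # assumed global search query volume

for label, multiple in [("analyst estimate (~4x)", 4), ("All-In estimate (~10x)", 10)]:
    extra_per_query = BASE_COST_PER_QUERY * (multiple - 1)
    extra_per_year = extra_per_query * QUERIES_PER_DAY * 365
    print(f"{label}: ~${extra_per_year / 1e9:.0f}B/year in added serving cost")
```

Under these assumptions the added serving cost runs to tens or hundreds of billions per year, which is why the multiple, not the demo quality, is the thing to watch.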
Tbf these models don't have to dominate in every area, maybe search is just a tough example. Copilot seems to be very helpful, for example. This is another empirical question - what profit centers can be created with current and next-gen models? 14/n
I think this will be THE single key behind whether we get another AI winter: will models be profitable?
Tech demonstrations are flashy, but the people building the best systems are all trying to make money, and will pull the plug if that doesn't happen. 15/n
My current read is that AI models are great at automating low cost/risk/importance work, and at expanding the extensive margin with new kinds of work - this alone won't quite get us there 16/n
Example: someone writes a blog post, and wants an accompanying illustration. DALL-E can provide one basically for free and do a passable job. Would they have hired an artist to make one? Possibly, but unlikely. So it's new, real value, but it's low value and not captured 17/n
Models like LLMs suffer from low reliability and low accountability, both of which are absolutely critical in major sectors of the economy. Would you let an LLM drive your car, if the error rate is 1%, or even 0.1%? 18/n
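To make the error-rate intuition concrete, here's a rough illustration of how per-decision error rates compound over a single drive; the number of safety-relevant decisions per trip is an assumption picked purely for illustration.

```python
# How small per-decision error rates compound over one trip.
# DECISIONS_PER_TRIP is an assumed, illustrative figure.

DECISIONS_PER_TRIP = 1_000

for error_rate in (0.01, 0.001):  # the 1% and 0.1% figures above
    p_at_least_one = 1 - (1 - error_rate) ** DECISIONS_PER_TRIP
    print(f"error rate {error_rate:.1%}: "
          f"chance of at least one mistake per trip ~ {p_at_least_one:.0%}")
```

Even at 0.1% per decision, that's roughly a two-in-three chance of a mistake somewhere in the trip under these assumptions.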
Self-driving cars are a great example. The first test demonstrations were decades ago now, and have always seemed just over the horizon. It turns out you need *extreme* levels of reliability! One Weird Trick doesn't get you there 19/n
So the question is, which areas of the economy can deal with 99% correct solutions? My answer is: ones that don't create/capture most of the value. You can use an LLM to translate a chat with a friend across the world, but you need a professional to write an airtight legal contract 20/n
To go back to self-driving cars, it turns out that the world is exceptionally complex. Toy problems like solving Go are cool, producing humanlike speech via LLMs is very cool, but this is far from grappling with the open world that is reality 21/n
It's hard to overstate the difference between solving toy problems like keeping a car between some cones on an open desert, and having a car deal with unspecified situations involving many other agents and uncertain info navigating a busy city street 22/n
This is IMO one of the most profoundly important things most people overlook. Getting Things Done IRL is Very Hard, Harder Than You Think. AI is going to struggle a lot more before it gets there. 23/n
Now take all of the above, and compare that to the insane fever pitch of the hype around AI, and the exponentially increasing dollars and compute being thrown at the problem. I could turn out very wrong, but this looks like a classic bubble 24/n
We see the typical explosive moves in equity markets for anything AI-related, we see earnings calls devolving into asking unrelated companies what their plan is for AI, the only area of VC still thriving is investing in AI companies, etc etc 25/n
All of this hype and investment may well turn out to be correct, some things are indeed world-changing forever, though even those technologies accrue value in places people often don't expect 26/n
So all of the above leads me to my prediction: AI is currently overhyped. There will be massive overinvestment, which will not be met with profitability, and it will only take a few years for corporate types and investors to get burned and pull back on compute spend 27/n
Alright, only one major point covered and already 28 tweets in, I see where this is heading 😅 28/n
My second major topic: I believe that slow takeoff is much more likely than hard takeoff, and that slow takeoff will be significantly less risky and have more chances for win scenarios before the future is out of our hands 29/n
For one thing, this world looks exactly like a slow takeoff world. AI models are powerful but flawed, piecemeal, not generalizable outside their domains. None of them have "closed the loop" on self-improvement, and IMO none have that potential yet 30/n
The world being highly complex and difficult is part of the reason here. Humans are a bundle of many different functionalities, all of which are important, and we don't function well in the world missing even a single piece of the puzzle 31/n
FWIW I do personally have a "critical path" in mind that leads to more powerful agents, and I keep an active watch on new developments, and I'm only seeing some but not all pieces of that path coming together, at least for now 32/n
I'm also not ruling out the possibility of eventually reaching very strong agents - it's a near certainty! But the path that gets there from here involves a huge amount of tinkering, trial and error, combining techniques, and all of this is hard and takes lots of time 33/n
Given my view of the world, a slow takeoff will involve a lot of feedback as we build increasingly capable agents, which is going to be super valuable in producing a good outcome 34/n
For one thing, we will undoubtedly make costly mistakes, but generally these mistakes won't be existential in nature. Even a devastating event, e.g. a financial market crash, is recoverable, and we will learn something from the experience 35/n
(To be fair, I think the COVID experience is the best evidence against this point. It seems at times like we learned absolutely nothing and made nothing better. Perhaps that's too cynical, but I have to at least point it out as a grim possibility.) 36/n
Even more importantly though, I think the slow takeoff scenario will give us lots of time to actually grapple with hard problems like alignment, using systems that are increasingly powerful and closer to the goal 37/n
Taking an outside view on technological progress, it's the very act of creating something and tinkering with it that provides us with the most insight. 38/n
Even the core people working on the alignment problem admit they don't know how to solve it. Sitting and thinking hasn't yet done it. Getting our hands on working systems and trying practical alignment on sub-AGI problems could be exactly what we need to make progress! 39/n
All of this should come with the caveat that I don't think hard takeoff is impossible. There could be an algorithmic bolt from the blue, and suddenly a desktop computer can house an AGI. It's hard to completely rule that out 40/n
But given what I said above about the full complexity of the real world, and observed constraints on compute, I would personally be quite surprised if the instant-AGI-death scenario turned out to be extremely likely 41/n
My personal guess, and I admit this is a complete guess, is that even if we put all the puzzle pieces together correctly we probably have months to years of the AGI spinning up before it reaches absolute superhuman dominance 42/n
The best outside view argument for hard takeoff, IMO, is that the current AI gold rush is going to cause so much talent to enter the field, that the sheer amount of great people working on the problem will increase the odds of an algorithmic breakthrough 43/n
This seems like a reasonable argument to me, though I think not sufficient to try to halt all progress on current AI systems. There are also serious risks to *not* developing more technology, and I think we need to balance those to maximize our odds of survival 44/n
Switching gears a bit, another reason I'm hesitant around these AI doom scenarios is that I think there are weird social things going on, which make it harder for everyone to think clearly and make me more suspicious of any strong conclusions in general 45/n
I feel a little weird talking about this, because it isn't really a hard argument like the above sections, and it basically casts doubt on the rationality of many people I know, but it does feel like an important part of my model and the stakes seem high, so here goes... 46/n
For one thing, this belief in imminent AGI seemed to spread like a social contagion, with the epicenter beginning as a media/PR blitz by DeepMind. This is great for them as a company, but there's no reason to trust corporate PR in particular 47/n
Before the AlphaGo-Lee Sedol match, I had a bet with Eliezer that AlphaGo would be the winner. He took the other side! Think about how different that world used to be! Even the most hardcore safety types were skeptical about progress in AI 48/n
Furthermore, the actual result itself was what EY considered the most unlikely: that the series would be split. His own theory was that either human or AI would sweep the entire series, because the human-superhuman divide is so tiny. This should be evidence against that idea! 49/n
And then the phase transition happened - AI safety people flipped to AGI-imminent, on the backs of DeepMind and related parties directly working on AGI. When they previously had been quite skeptical in our last conversations! 50/n
Of course, I'm not saying people shouldn't update. And it's entirely possible that appropriate updates do look like "social contagion" models. This is part of why this section is so tricky. 51/n
So I spent some time talking to machine learning folks working in the trenches, building the actual systems going into deployment, and at that point basically none of them thought that AGI was imminent or suddenly more likely. 52/n
As I said above, I put a lot of stock in tight feedback loops; the people building are often the ones with the clearest picture, so these conversations did impact me. And again, they could be totally wrong, but it was adding to my suspicion that something weird was going on 53/n
There is an undeniable social incentive for AGI people to hype AGI for more investment, and for safety people to hype danger for more investment. And the arguments themselves are perfect: if hard takeoff is instant death it's crazy not to stop or slow AI progress somewhat! 54/n
This is not even to say that the argument is wrong - just that it distorts the ability to think and converse about the subject. And that makes me across the board a lot more hesitant to endorse any really strong positions in this space, and try to use outside view more 55/n
Especially now that everyone (in my circles and on Twitter) has turned AGI-imminent and certain-doom, it's pretty socially costly to say that maybe AGI is still further away and maybe we won't all die and maybe it's okay we develop AI a little further and see what we learn 56/n
Honestly part of me dreads even writing this thread, and I may or may not reply to comments on it, we'll see. But it does feel right now like we are in an availability cascade and I feel the need to stand up and say "hold on" 57/n
On my final major point, to 99% of people reading this: I highly recommend trying NOT to think more about AI. I don't think it will do you, or the world, much good. (This doesn't apply to AI researchers etc, who are a tiny fraction of the population) 58/n
One idea from stoicism is that if something is not under your control, you should try not to feel strongly about it one way or the other. It is going to happen, and you can't change it. Easier said than done, of course, but I find this perspective helpful 59/n
I think panicking and then changing all of your life plans is highly unlikely to improve your situation or odds of survival, if we are in either the good or bad state of the world 60/n
There is also something to be said for hedging humanity's outcomes. Maybe I am completely wrong and AGI is impossible for some reason. We still need people doing normal people things. We can't *all* go millenarian death cult because of these beliefs 61/n
Personally: I'm still having children. I expect to see them grow up to be adults. I look forward to many years spent with them. I'm saving money for the future. Implicitly and explicitly, I think we have several decades left before the future is out of our hands 62/n
Life is a marathon, not a sprint. (And if it becomes a sprint, you'll definitely notice - it's not this.) Take a deep breath. If you're stressing out about AI, take a step back. Or write some predictions down, come back a year later and see where we are at 63/n
There are other, more esoteric coping strategies, like anthropics, quantum immortality, the simulation hypothesis, etc. My recommendation is just to not go down that path at all, unless you already set foot on it and can't turn back 64/n
FWIW I think there are many more interesting discussions and arguments to be had in this space, but I wanted to try focusing this thread on some higher level outside view considerations that hopefully make sense with less background knowledge in decision theory etc 65/n
Thanks everyone for making it this far.
Send tweet.
Godspeed.
66/66