I’m late to the party again, but a few months ago a letter began circulating requesting that AI development “pause” for at least 6 months. Separately, AI developers like Sam Altman have called for regulation of their own industry. Both moves are supposedly motivated by fears that AI development could get out of control and harm us, or even kill us all in the words of professional insanocrat Eliezer Yudkowsky, who went so far as to suggest we should bomb data centers to prevent the creation of a rogue AI.
To get my thoughts out there: this is nothing more than moat-building and fear-mongering. Computers certainly opened up new avenues for crime and harm, but banning them or pausing semiconductor development in the 80s would have been stupid and harmful. Lives were genuinely saved because computers made it possible for us to discover new drugs and cure diseases. The harm computers caused was overwhelmed by the good they brought, and I have yet to see any genuine argument that AI will be different. Will it be easier to spread misinformation and steal identities? Maybe, but that was true of computers too. On the other hand, the insane ramblings about how robots will kill us all seem to mostly amount to sci-fi nerds having watched a lot of Terminator and The Matrix and being unable to separate reality from fiction.
Instead, these pushes for regulation seem like moat-building of the highest order. The easiest way to maintain a monopoly or oligopoly is to build giant regulatory walls that ensure no one else can enter your market. I think it’s obvious Sam Altman doesn’t actually want any regulation that would threaten his own business; after all, he threatened to leave the EU over new regulation there. Instead he wants the kind of regulation that is expensive to comply with but doesn’t actually prevent his company from doing anything it wants to do. He wants to create huge barriers to entry so that he can continue developing his company without competition from new startups.
The letter to “pause” development also seems nakedly self-serving: one of the signatories was Elon Musk, and immediately after calling for said pause he turned around and bought thousands of graphics cards to improve Twitter’s AI. It seems the pause in research should only apply to other people, so that Elon Musk has a chance to catch up. And I think that’s likely the case with most of the famous signatories of the pause letter: people who realize they’ve been blindsided and are scrambling to catch up.
Finally we have the “bomb data centers” crazies who are worried the Terminator, the Paperclip Maximizer, or Roko’s Basilisk will come to kill them. This viewpoint involves a lot of magical thinking, as it is never explained just how an AI will recursively improve itself to the point that it can escape the confinement of its server farm and kill us all. In fact, at times these folks have explicitly rebuked any such speculation on how an AI could escape, insisting instead that it simply will escape and that speculation about the mechanism is meaningless. This is in contrast to more reasonable end-of-the-world scenarios like climate change or nuclear proliferation, where there is a very clear through-line as to how these things could cause the end of humanity.
Like I said, I take this viewpoint the least seriously, but I want to end with my own speculation about Yudkowsky himself. Other members of his caucus have indeed demanded that AI research be halted, but I think Yudkowsky skipped straight to the “bomb data centers” position both because he’s desperate for attention and because he wants to shift the Overton Window.
Yudkowsky has in fact spent much of his adult life railing about the dangers of AI and how it will kill us all, and in this one moment where the rest of the world is at least amenable to fears of AI harm, they aren’t listening to him but are instead listening (quite reasonably) to the actual experts in the field like Sam Altman and other AI researchers. Yudkowsky wants to stay in the limelight, and the best way to do that is often to make the most over-the-top dramatic pronouncements in the hopes of getting picked up and spread by detractors, supporters, and people who just think he’s crazy.
Secondarily, he would probably agree with AI regulation, but he doesn’t want that to be his public platform because he thinks it’s too reasonable. If some people are pushing to regulate AI and some people are against it, then the compromise from politicians trying to seem “reasonable” would be a bit of light regulation, which for him wouldn’t go far enough. Yudkowsky instead wants to make his platform something wildly outside the bounds of reasonableness, so that in order to “compromise” with him you have to meet him in the middle at a point that includes much more onerous AI regulation. He’s just taking an extreme position so he has something to negotiate away and can still claim victory.
Personally? I don’t want any AI regulation. I can go to the store right now and buy any computer I want. I can go to a cafe and use the internet without giving away any of my real personal information. And I can download and install any program I want, as long as I have the money and/or bandwidth. That’s a good thing. Sure, I could buy a computer and use it to commit crimes, but that’s no reason to regulate who can buy computers or what type they can get, which is exactly what the regulators want to happen with AI. Computers are a net positive to society, and the crimes you can commit on them, like fraud and theft, were already crimes people committed before computers existed. Computers allow some people to be better criminals, so we prosecute those people when they commit crimes. But computers allow other people to cure cancer, so we don’t restrict who can have one or how powerful it can be. The same is true of AI. It’s a tool like any other, so let’s treat it like one.