There was a time when I was slightly embarrassed to bring up concerns about Artificial Intelligence to friends and family who weren’t already AI safety-pilled. It’s all a bit Terminator, isn’t it? It doesn’t really feel like a serious concern. Or at least, it didn’t at the time.
Things have been different since ChatGPT came out. In July of this year, the Daily Star ran with the headline ‘Psycho robot scumbags: we promise NOT to kill off humans’, plus the appropriately sarky comment that the promise ‘sounds 100% legit’.
Admittedly, this sounds like a bit of a joke. But still, AI safety concerns on the front cover of a national newspaper? Even with this being the Daily Star, we can probably assume that quite a few people are genuinely at least slightly worried about the aforementioned psycho robot scumbags.
And the polling supports this. The proportion of people who see AI as the top threat to human survival rose from 7% in February 2022 to 17% in May 2023. To put that number into context, 17% is the same share as the number who think ‘the bees dying out’ is likely to lead to human extinction. But still, a 10-point rise in just over a year is nothing to scoff at, and with 17% of people thinking Artificial Intelligence seriously poses a risk of human extinction, it makes the top five (tied with the bees).
The British public loves banning things. Or, if not banning, regulating the hell out of things so that even if they’re not technically banned, there’s still much less incentive to sell them than there otherwise would be.
The vast majority of the British public want to ban the American Bully XL. They overwhelmingly want to ban Muslim women from wearing the burqa. 67% of Londoners want to ban wood burners. Gambling adverts, cigarettes, and fur imports are all for the chop. (I make no comment on which of these bans I support, although I will say that the answer is not ‘none of them’).
So, what about AI? I haven’t been able to find any polling on whether the public thinks AI should be banned completely. But on basically every question that mentions a ban of some sort, the public seems supportive. Rethink Priorities has done some polling in the US, finding that 51% of the population would support a pause on AI research, while only 25% would oppose one. When it comes to regulating AI in a manner akin to FDA regulations, 70% are supportive, whereas only 21% are opposed.
The majority of people in the UK think governments should try to prevent AI from taking human jobs. 67% of people think it’s more important that kids learn to do things without the help of AI than that they learn how to use it. 63% believe AI should be banned during exams (although only 41% think it should be banned for homework). Only 20% of people think that AI companies should be allowed to train their models on any publicly available text or images.
The public’s predictions about when we’re likely to actually arrive at Artificial General Intelligence are surprisingly reasonable, with most respondents thinking that AGI will be here between 2030 and 2039. (Although the definition of AGI in the polling question, ‘a robot that is as smart as a human’, isn’t as precise as I’d like it to be.)1
If you think that we should slow down AI progress, many people are on your side. Compare the growing fear and awareness around AI to the absolute slog it was to get people to care about climate change. In 2006, the South Park episode ManBearPig mocked Al Gore for making a documentary on global warming, comparing his worries to concerns about a hypothetical creature that is ‘half-man, half-bear, and half-pig’. In 2018, they finally admitted he may have had a point, releasing an episode in which it’s revealed that ManBearPig does in fact exist, and everyone is pretty upset that they made fun of Al Gore for so long.
Perhaps surprisingly, we may be in a situation where people are quicker to believe that Terminator-esque killer robots are a threat than they were to believe that increasing global temperatures are.
I think there are a few implications here. If you support AI regulation, you should hope that we don’t screw this up by polarising the issue along party lines. In 2017, a load of scientists marched in Washington D.C. as part of the ‘March for Science’. While the protestors claimed to be apolitical experts advocating for ‘science-informed public policies around the world’, observers (probably accurately) saw the march as a liberal protest against the policies of Donald Trump, with an emphasis on climate policy. What was the result? Using a quasi-experimental design, Motta (2018) found that liberals became more trusting of scientists, and conservatives became less trusting. Not too unexpected.
Looking at the noisy crosstabs of the YouGov opinion polling in the UK, concerns about AI risk are not polarised along party lines. Conservative supporters, Labour supporters, and Liberal Democrat supporters all agree that AI risk could be a big deal. The story is the same in the US: YouGov found that most Americans support AI regulation, and Republicans are actually more likely to support heavy regulation than Democrats.
If you take the Yudkowsky position that we basically need to shut all this stuff down, or you’re just someone who doesn’t fall into the ‘No Pause’ group outlined in this piece, this should all come as good news to you. People already agree with you on the position; now you just have to go and do stuff to get the salience up! 17% of people believe that AI is an existential threat, but the ceiling on that number is likely very high. If you could reasonably be considered an AI safety expert, go and pitch opinion pieces to newspapers when there are big news stories about AI. If not, hand out copies of Superintelligence at King’s Cross station or something. Pump those numbers up.
The figures here and in the previous paragraph are all taken from the link in the paragraph above.