❌ No, 50% of AI researchers don't believe there is a 10% risk that AI will kill us

The claim is taken from a survey in which the question many refer to had a response rate of 4 percent.

Mathias Sundin

Summary

  • A claim about what AI researchers believe is spreading so vigorously that it is becoming a factoid: something we believe is true, even though it is wrong.
  • The claim: Half of AI researchers believe there is a 10 percent risk or more that AI will exterminate humanity.
  • Fact: The claim comes from a survey in which this question had a response rate of four percent.
  • In another question in the same survey, with a still low but higher response rate, twice as many respondents believed in a good outcome as in a bad one.

The claim: Half of AI researchers believe AI will kill us

A factoid is an invented or assumed statement presented as a fact.

The claim has been spread since the end of last year but has gained momentum since Tristan Harris made it the main argument in his popular talk. This is how it is stated on his website:

"Half of AI researchers believe there's a 10% or greater chance that humans will go extinct from their inability to control AI."

It is repeated, among others, by Max Tegmark in Time:

"A recent survey showed that half of AI researchers give AI at least a 10% chance of causing human extinction."

It is now repeated as a truth at conferences, in podcasts, and in other media.

Response rate of 4 percent

The figure originates from a survey conducted last year: The 2022 Expert Survey on Progress in AI. The researchers sent a questionnaire to 4,271 people and received responses from 738, a total response rate of 17 percent.

But only 162 of these answered the question that Harris, Tegmark, and others refer to. A response rate of 4 percent.

If we calculate the response rate for just this question among the 738 who responded to the survey, the rate is 22 percent. That's still low.

"What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?"

Of these 162, 81 believe there is a 10 percent risk or higher of human extinction or severe disempowerment.
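
The arithmetic behind these figures is easy to check. Below is a minimal Python sketch using only the numbers cited above; the variable names are mine.

```python
# Response-rate arithmetic for the figures cited above
# (The 2022 Expert Survey on Progress in AI).
surveyed = 4271            # people who received the questionnaire
respondents = 738          # people who answered the survey at all
answered_question = 162    # people who answered this particular question
risk_10_or_more = 81       # of those, put the risk at 10 percent or higher

print(f"Overall response rate:           {respondents / surveyed:.0%}")            # ~17%
print(f"Question vs. all surveyed:       {answered_question / surveyed:.0%}")      # ~4%
print(f"Question vs. survey respondents: {answered_question / respondents:.0%}")   # ~22%
print(f"Share putting risk at 10%+:      {risk_10_or_more / answered_question:.0%}")  # 50%
```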

The respondents are not a representative sample of AI researchers; they were people who had a paper accepted at two machine learning conferences in 2021.

So when Yuval Noah Harari claimed in the New York Times that "over 700 top academics" answered the question, neither the description nor the number was correct.

It is unreasonable to draw far-reaching conclusions from a survey with a four percent response rate and a non-representative sample.

Lower risk in another question

They also asked another similar but slightly broader question:

"What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?"

149 people answered this question, and there the risk estimate is five percent, half that of the other question.

These two questions are sub-questions of a larger question that had a higher response rate. They deal only with dangers. What does that say about those who chose to respond?

What Harris, Tegmark, Harari, and others highlight as a main argument is thus one of two sub-questions, with a low response rate and a non-representative sample, and they chose the question where the risk assessment is twice as high. Does that seem rigorous?

Greater opportunities than dangers

The two questions above deal only with risk, but another question addresses both risks and opportunities. Here, respondents are asked to say how probable they find different outcomes.

Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:
- Extremely good (e.g. rapid growth in human flourishing)
- On balance good
- More or less neutral
- On balance bad
- Extremely bad (e.g. human extinction)

HLMI stands for high-level machine intelligence. (*See definition at the end of the article)

Here are the results:

Median:
Extremely good: 10%
On balance good: 20%
More or less neutral: 15%
On balance bad: 10%
Extremely bad: 5%

Average:
Extremely good: 24%
On balance good: 26%
More or less neutral: 18%
On balance bad: 17%
Extremely bad: 14%

To this question, 559 people responded. On average, they put almost twice as high a probability on an extremely good outcome as on an extremely bad one (24 percent versus 14 percent).

Of the respondents, 50 people (9 percent) put the risk of an extremely bad outcome at 50 percent or higher. Slightly more than twice as many, 110 people (20 percent), believed there was a 50 percent or higher chance of an extremely good outcome.
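
These shares, too, are simple to recompute. A small sketch under the same caveat, using only the counts cited above:

```python
# Shares of the 559 respondents who put 50% or more probability
# on the two extreme outcomes, using the counts cited above.
respondents = 559
extremely_bad_50_plus = 50    # put "Extremely bad" at 50% or more
extremely_good_50_plus = 110  # put "Extremely good" at 50% or more

print(f"Extremely bad at 50%+:  {extremely_bad_50_plus / respondents:.0%}")   # ~9%
print(f"Extremely good at 50%+: {extremely_good_50_plus / respondents:.0%}")  # ~20%
print(f"Good-to-bad ratio:      {extremely_good_50_plus / extremely_bad_50_plus:.1f}x")  # 2.2x
```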

If we are to take this survey seriously, we might as well summarize it with:

AI researchers: 50 percent chance that AI leads to paradise!
or
Twice as many AI researchers believe AI leads to a good outcome as a bad one

But as mentioned, such conclusions should not be drawn from a survey with such a low response rate and a non-representative sample. It is one data point among many, to be taken for what it is. It should not be ignored, but nor should it be made a central part of the argument about the dangers (or opportunities) of AI.

Comparing future AI to an airplane is misleading

Would you fly in an airplane where half of the engineers who built it say there is a 10 percent risk of it crashing?

This is how the result of this survey is presented. The comparison with an airplane is shaky.

The idea that AI could lead to our downfall rests on the assumption that we succeed in creating superintelligent AI, an AI much smarter than we humans are. It could then take control of us and, for some reason, exterminate us.

We do not know today how to create such an AI. Nor do we know how to create the step before, AGI, artificial general intelligence. Nobody knows that.

There is therefore no "airplane." We do not even have a blueprint for one.

If we still stick to the airplane metaphor: Asking AI researchers about existential risk with AGI now is like asking the Wright brothers at the time of the first flight about the risks of flying in a jumbo jet.

In 1903, one could surely imagine that it might be possible in the future to build an airplane on par with a jumbo jet, but no one had any idea how. That's where we are with AI now.

Use AI to avoid spreading false information

One concern with AI is that it can create a lot of fake news: in the short term on social media, but also in the long term. Manipulation is in fact a central element in several scenarios of how AI takes control of humanity.

But AI can also be used to counteract fake news.

I use AI daily to write news for Warp News. In one news piece, I referred to a research report, and the AI warned me that it could not find such a report**.

It's a pity that Max Tegmark, Tristan Harris, Yuval Noah Harari, and the rest do not use AI when they write. Then it could have warned them that what they claim isn't true.

Mathias Sundin
The Angry Optimist

See also Professor Melanie Mitchell, who tackles the same issue:

Do half of AI researchers believe that there's a 10% chance AI will kill us all?
Fact-checking a widespread claim

Thanks to James Smith for calculating some of the numbers.

*We define 'high-level machine intelligence' as a situation where unaided machines can accomplish every task better and more cheaply than human workers.

**This is because the research report was from 2022, and ChatGPT only has data up to and including 2021.