Max Tegmark (and many others) refers to a survey with a four percent response rate and claims that "half of AI researchers" believe in a ten percent risk of humanity's extinction.
Maria Gunther was tough in her judgment of Tegmark:
"If one uses such made-up facts to argue their case, one not only sabotages the whole discussion. One also undermines one's own credibility as a researcher and debater."
Max Tegmark has now responded to the criticism - and it is an... interesting response.
I was mainly criticized for three things:
1. The study was so small that the statistical margin of error was 4 percent (50 percent divided by the square root of 162 for readers who like statistics), so I should have written "50 percent ± 4 percent" instead of "half". But DN also does not usually report the margin of error (often over one percent) when they publish opinion polls.
No, the criticism was not that it could have been 54 or 46 percent. The criticism was that he took a survey with a four percent response rate and claimed that it represented "half of AI researchers".
2. Some of the researchers were students and not senior researchers. I regard this criticism as elitist snobbery: everyone had managed to get their AI research published in fierce competition at two of the world's most prestigious conferences.
If one claims that AI researchers think something, it is not snobbery to point out that there may have been students among the respondents. There is nothing wrong with students, and their opinions can be interesting, but they are not AI researchers!
3. It is possible that worried researchers were more inclined to participate in the survey. Sure. But the opposite is also possible. Yet DN hypocritically chose the assertive headline "No, every other AI researcher does NOT believe there is a 10 percent risk that humanity dies", for which there is no evidence either: it could just as well be more as less.
The headline refers to the survey and is not a general statement, but Tegmark does have a point that both DN's and Warp News' headlines could have been more nuanced.
However, he shoots himself in the foot with the last sentence: "...it could just as well be more as less." Yes, exactly: we do not know, because the survey Tegmark refers to cannot serve as a basis for stating how many AI researchers think anything on this matter.
In the rest of the article, Tegmark writes that "many" certainly think AI is dangerous. True, and more – or fewer – than half of them may believe in humanity's extinction. That is not what this is about. This is about a false argument being put forward and made a central point in the AI debate.
I have previously been a full-time politician for twelve years. This debate brings back many memories.
This is exactly where some political debates end up, especially in the heat of an election campaign.
One draws far-reaching conclusions on insufficient evidence, deliberately misunderstands opponents, constructs straw men, and attacks these.
I have done it myself and I understand why. You are so impassioned by what you believe in. In an election campaign, a lot is at stake.
It is easy to deceive oneself that it is okay to exaggerate a bit.
The end justifies the means.
But it doesn't. And this is not an election campaign.
But for Max Tegmark, I think it is. He told Lex Fridman that he believes humanity will perish and his son will die.
He is not so foolish that he fails to understand the criticism. On the contrary, he is very smart.
But Max Tegmark is so worried about AI that he has ended up in an election campaign.
The Angry Optimist