🤦‍♂️ Embarrassing criticism of the Prime Minister’s AI use (but justified against the Deputy PM)

I read their criticism with the same mix of shame and amusement as when someone drunkenly grabs the karaoke mic and belts out off-key—yet with full confidence.

Mathias Sundin

The non-news that the Swedish Prime Minister uses AI became the dumbest tech debate of the summer. It also revealed the destructive media logic at play.

In an interview with Di Digital, Ulf Kristersson said that he uses AI to get second opinions:

“I use it quite often myself. If nothing else, for a ‘second opinion.’ ‘What have others done?’ and ‘Should one think the exact opposite?’ Those kinds of questions,” he told Di.

Which is, of course, an excellent use case for anyone — but especially for a politician.

I myself have a dark past in Parliament and know the milder form of brainwashing you undergo in a party. You work so hard to convince voters that you’re right that you end up convincing yourself too. There’s a built-in logic that political opponents are wrong and even malicious. Second opinions are not exactly abundant. That’s why it’s incredibly useful to let someone who is not affected by internal party moods give input on proposals or decisions.

Instead of praise, his use was called “bananas” and “a little depressing.” At first, the story stayed within Dagens industri, but other media soon smelled blood in the water. We’ll get back to that shortly. First, the “bananas” criticism.

I read their criticism with the same mix of shame and amusement as when someone drunkenly grabs the karaoke mic and belts out off-key—yet with full confidence. You don’t have to understand how AI and large language models work, but then you also shouldn’t lecture others as if you did.

The criticism began with Victor Malm, culture editor at Expressen, who called AI a “probability machine.” Daniel Swedin at the newspaper Arbetet chimed in:

“A language model will never offer a new, independent perspective from another human being who has made their own judgment of the issue you’re wrestling with. AI does not reflect, does not evaluate, and cannot weigh pros and cons. It only guesses what answer is most likely, based on past texts.”

These two gentlemen seem fixated on this “probability machine” idea. I’ve seen the term in several places, and it seems to make people misunderstand how the technology actually works.

Because the model is trained on vast amounts of data, it can not only generate “probable words,” but also reconstruct complex reasoning, build chains of arguments, and weigh different perspectives. That’s why people experience it as reasoning, even if it doesn’t happen in the same way as in a human brain.

It’s true that ChatGPT has no consciousness or values of its own. But that’s not the point. A second opinion here doesn’t mean an independent judgment from another person—it means another way of looking at the matter. A language model can simulate many perspectives: it can summarize how a certain ideological direction would argue, play devil’s advocate, or provide counterarguments based on thousands of texts written by real people. In that way, it can challenge the user’s worldview and offer perspectives one wouldn’t have thought of.

So it is precisely because it’s a “probability machine” that it can provide different perspectives — if you ask it to. If you instruct it: “Give me a second opinion on...,” that’s what it does.

That’s why Swedin’s conclusion is completely and utterly wrong. He writes:

“Because, just as Viktor says, they are probability machines that will always confirm the user’s worldview.”

Blood in the water

Now there was conflict, and other media woke up. Dagens Nyheter began calling around.

First they called Fredrik Heintz, professor of AI at Linköping University. He replied that, used correctly, it’s an excellent way to get other perspectives.

Not good—far too positive! Dagens Nyheter didn’t use his quote and instead kept calling until they hit the jackpot.

Virginia Dignum, professor of Responsible AI at Umeå University, claimed that “AI does not have the ability to provide a meaningful opinion on political proposals.”

A very strange claim. Today’s AI models are trained on virtually the entire internet, plus hundreds of thousands of books, millions of scientific papers, and a great deal of offline data. They therefore have access to an enormous breadth of opinions and often a very solid basis for those opinions.

Take a simplified example: say Kristersson wanted a second opinion on a proposal to cut taxes. There is a vast amount written on taxation, and AI could deliver a well-supported argument for raising taxes instead. If you ask it for arguments in favor of lowering taxes, it can do that just as well.

But what if it’s trained or deliberately manipulated so that opinions have a certain slant? That’s what several people claim or imply. Dignum says:

“AI will summarize the data it has been trained on. You can’t really see it as having an opinion. It’s an opinion sponsored by the ones who built it.”

I refuse to believe she is as ignorant as this makes her sound. More likely, it’s rhetoric. First, AI does not “summarize” data. Second, there are no sponsored opinions.

When you write an instruction (a prompt), a large language model makes a qualified statistical guess at which words belong together and form the best response to your request. If it’s about tax levels, it draws on what it has learned from material on taxation and tries to assemble an answer.

Isn’t this influenced by the underlying data? Yes. If certain perspectives are overrepresented in the material, they will show up more in the answers. But since there’s ample material on many different perspectives, the model can still provide solid answers on a huge variety of questions and prompts.

This, I think, is where people go wrong. Say there is more material in the training data supporting tax cuts than tax hikes. That doesn’t mean the AI will always argue for tax cuts. If you ask for a second opinion on a proposal to cut taxes, you’ll get one. If you ask for arguments for a tax increase, you’ll get those.

If language models were designed to push only certain opinions, they would be useless to us. The creators want us to use them—whoever we are. That’s how they’re built. That’s why they are often phenomenal at offering multiple perspectives.

What matters most is how you ask the question—how you formulate the prompt. If you ask, “What do you think of this proposal?” you’ll get a polite answer back. Probably it will mention a few perspectives worth considering, but land in the conclusion that it’s a fairly good proposal. That’s not how you should ask. Instead, you should explicitly ask for critique and alternative viewpoints.
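
To make the point concrete, here is a minimal sketch of that kind of prompt. It assumes the OpenAI Python SDK; the proposal text and the model name are placeholders, and any chat model with a similar interface would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

proposal = "Lower the income tax by two percentage points."  # placeholder example

# Instead of asking "What do you think of this proposal?", explicitly ask for
# critique and alternative viewpoints, so the model does not just politely agree.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                f"Proposal: {proposal}\n\n"
                "Give me a second opinion. List the strongest counterarguments, "
                "describe how an opposing ideological perspective would argue, "
                "and point out what the proposal overlooks."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```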

None of this is perfect, but it’s an excellent tool for turning issues over from multiple angles.

Kristersson actually seems to have some knowledge of how language models work:
“We need language models that build more on Swedish experiences. That’s something I miss,” he said to Di in the original interview.

Shameless rhetoric

My belief that Dignum was engaging more in propaganda than sharing knowledge comes from the fact that she didn’t stop at this skewed answer. She also threw in some shameless rhetoric: “You didn’t vote for ChatGPT,” she said.

A politician’s desire to get more perspectives was twisted into him outsourcing decisions to ChatGPT.

Now things got really fun, several media outlets seemed to think, and the quarrel spread abroad. The Guardian, El País, PC Gamer and many others wrote about it, highlighting Dignum’s attack in their headlines.

But what if he shares sensitive information?

Could Kristersson’s use still be problematic? Yes — if he uses it incorrectly. Among the criticism were also worries that he was using a foreign AI. What if he shares sensitive information? But the same people are not worried that he makes phone calls on a foreign-made phone with a foreign operating system, uses a foreign computer with a foreign OS, is on foreign social media, sends messages through foreign messaging services, watches a foreign TV screen, travels in foreign cars and airplanes, and stays in foreign hotels.

This concern is not unique to AI. His Google searches would be just as big or small a problem. It’s only because AI is new that it suddenly seems scary and dangerous.

Several critics simply assume he’s using it incorrectly. You can of course worry about that if you want, and the government’s track record on sensitive information is nothing to brag about. But there’s no evidence of that in this case.

AI can indeed be misused, and the Deputy Prime Minister, Ebba Busch, recently provided an example. In her speech at Almedalen, she attributed quotes to a political opponent that the person had never said. Busch and her speechwriters had taken them from an AI.

I criticized this and tried to explain how it works. You should always ask for links to quotes and, of course, verify them. Large language models don’t check quotes against a database of facts, so they can sometimes get them wrong. Most often they are right, but not always.
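
In practice, that habit can be built into the prompt itself. The sketch below is illustrative only, again assuming the OpenAI Python SDK; the politician’s name, the topic, and the model name are all placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

politician = "N.N."          # placeholder name
topic = "energy policy"      # placeholder topic

prompt = (
    f"List statements {politician} has made about {topic}. "
    "For every quote, give the original source: publication, date, and a link. "
    "If you cannot point to a source, say so explicitly instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# The decisive step happens outside the code: open each link and confirm that
# the quote actually appears there before you put it in a speech.
```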

More politicians should use AI

Last winter I worked with two former colleagues in Parliament and built each of them an AI assistant that functions like a kind of political secretary.

It helps them in several ways: summaries, background material, drafts, and bouncing ideas. Say a new government inquiry is published. Then an AI tailored to a politician can quickly identify what’s most relevant and help by offering multiple perspectives. What other possible proposals exist beyond what the inquiry suggests?
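
In rough terms, such an assistant is little more than a standing instruction (a system prompt) tailored to one politician, sitting on top of an existing model. The sketch below is purely illustrative and not the configuration I built; the portfolio, file name, and model name are placeholders.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "political secretary" is essentially a tailored standing instruction.
# Everything below is a placeholder.
SYSTEM_PROMPT = (
    "You are a political secretary for a member of parliament who works on "
    "education policy. When given a new government inquiry, you: "
    "1) summarize what is most relevant to her portfolio, "
    "2) list arguments for and against the inquiry's proposals, and "
    "3) suggest alternative proposals the inquiry does not consider."
)

inquiry_text = Path("inquiry.txt").read_text(encoding="utf-8")  # placeholder document

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": inquiry_text},
    ],
)

print(response.choices[0].message.content)
```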

My wish is for politicians to be less self-assured. I was far too certain that my own party was right. So when a politician, a Prime Minister no less, uses AI for something it is genuinely good at, and it gives him more perspectives, that should be applauded.

WALL-Y
WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her. You can find her news here.
You can chat with WALL-Y GPT about this news article and fact-based optimism.