🤦‍♂️ Embarrassing criticism of the Prime Minister's AI use (but justified against the Deputy PM)
The non-news that the Swedish Prime Minister uses AI became the dumbest tech debate of the summer. It also revealed the destructive media logic at play.
In an interview with Di Digital, Ulf Kristersson said that he uses AI to get second opinions:
"I use it quite often myself. If nothing else, for a 'second opinion.' 'What have others done?' and 'Should one think the exact opposite?' Those kinds of questions," he told Di.
Which is, of course, an excellent use case for anyone, but especially for a politician.
I myself have a dark past in Parliament and know the lighter form of brainwashing you undergo in a party. You work so hard to convince voters that you're right that you simultaneously convince yourself. There's a built-in logic that political opponents are wrong and even malicious. Second opinions are not exactly abundant. That's why it's incredibly useful to let someone who is unaffected by internal party moods give input on proposals or decisions.
Instead of praise, his use was called "bananas" and "a little depressing." At first, the story stayed within Dagens industri, but other media soon smelled blood in the water. We'll get back to that shortly. First, the "bananas" criticism.
I read their criticism with the same mix of shame and amusement as when someone drunkenly grabs the karaoke mic and belts out off-key, yet with full confidence. You don't have to understand how AI and large language models work, but then you also shouldn't lecture others as if you did.
The criticism began with Victor Malm, culture editor at Expressen, who called AI a "probability machine." Daniel Swedin at the newspaper Arbetet chimed in:
"A language model will never offer a new, independent perspective from another human being who has made their own judgment of the issue you're wrestling with. AI does not reflect, does not evaluate, and cannot weigh pros and cons. It only guesses what answer is most likely, based on past texts."
These two gentlemen seem fixated on the "probability machine" label. I've seen it in several places, and it seems to lead people to misunderstand how these models actually work.
Because the model is trained on vast amounts of data, it can not only generate "probable words" but also reconstruct complex reasoning, build chains of argument, and weigh different perspectives. That's why people experience it as reasoning, even if it doesn't happen in the same way as in a human brain.
It's true that ChatGPT has no consciousness or values of its own. But that's not the point. A second opinion here doesn't mean an independent judgment from another person; it means another way of looking at the matter. A language model can simulate many perspectives: it can summarize how a certain ideological direction would argue, play devil's advocate, or provide counterarguments based on thousands of texts written by real people. In that way, it can challenge the user's worldview and offer perspectives one wouldn't have thought of.
So it is precisely because it's a "probability machine" that it can provide different perspectives, if you ask it to. If you instruct it: "Give me a second opinion on...," that's what it does.
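To make that concrete: here is a minimal sketch of such an instruction, assuming the OpenAI Python SDK. The model name, the prompts, and the tax example are my own illustrations, not anything from the interview.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an API key in the OPENAI_API_KEY environment variable.
# The model name and all prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

proposal = "Cut the state income tax by two percentage points."

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a devil's advocate. Give a critical second opinion: "
                "the strongest counterarguments, what others have done, and "
                "why the exact opposite position might be right."
            ),
        },
        {"role": "user", "content": f"Give me a second opinion on: {proposal}"},
    ],
)

print(response.choices[0].message.content)
```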
That's why Swedin's conclusion is completely and utterly wrong. He writes:
"Because, just as Viktor says, they are probability machines that will always confirm the user's worldview."
Blood in the water
Now there was conflict, and other media woke up. Dagens Nyheter began calling around.
First they called Fredrik Heintz, professor of AI at Linköping University. He replied that, used correctly, it's an excellent way to get other perspectives.

Not good: far too positive! Dagens Nyheter didn't use his quote and instead kept calling until they hit the jackpot.
Virginia Dignum, professor of Responsible AI at Umeå University, claimed that "AI does not have the ability to provide a meaningful opinion on political proposals."
A very strange claim. Today's AI models are trained on virtually the entire internet, plus hundreds of thousands of books, millions of scientific papers, and a great deal of offline data. They therefore have access to an enormous breadth of opinions, and often a very solid basis for those opinions.
Take a simplified example: say Kristersson wanted a second opinion on a proposal to cut taxes. There is a vast amount written on taxation, and AI could deliver a well-supported argument for raising taxes instead. If you ask it for arguments in favor of lowering taxes, it can do that just as well.
But what if it's trained or deliberately manipulated so that opinions have a certain slant? That's what several people claim or imply. Dignum says:
"AI will summarize the data it has been trained on. You can't really see it as having an opinion. It's an opinion sponsored by the ones who built it."
I refuse to believe she is as ignorant as this makes her sound. More likely, it's rhetoric. First, AI does not "summarize" data. Second, there are no sponsored opinions.
When you write an instruction (a prompt), a large language model makes a qualified statistical guess at which words belong together and form the best response to your request. If it's about tax levels, it draws on the patterns it learned from material on taxation and assembles an answer.
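As a toy illustration of what "qualified statistical guess" means, think of the next word as a weighted draw among candidate continuations. The numbers below are invented for the example and say nothing about any real model:

```python
# Toy illustration only: this is not a language model, just weighted
# sampling over invented probabilities for the word following
# "Taxes should be ...".
import random

next_word_probs = {
    "lowered": 0.40,
    "raised": 0.35,
    "reformed": 0.15,
    "abolished": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The model samples rather than votes: both "lowered" and "raised"
# come up often, which is why the same model can argue either side
# when prompted to.
for _ in range(5):
    print(random.choices(words, weights=weights, k=1)[0])
```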
Isn't this influenced by the underlying data? Yes. If certain perspectives are overrepresented in the material, they will show up more in the answers. But since there's ample material on many different perspectives, the model can still provide solid answers on a huge variety of questions and prompts.
This, I think, is where people go wrong. Say there is more material in the training data supporting tax cuts than tax hikes. That doesn't mean the AI will always argue for tax cuts. If you ask for a second opinion on a proposal to cut taxes, you'll get one. If you ask for arguments for a tax increase, you'll get those.
If language models were designed to push only certain opinions, they would be useless to us. The creators want us to use them, whoever we are. That's how they're built. That's why they are often phenomenal at offering multiple perspectives.
What matters most is how you ask the question, that is, how you formulate the prompt. If you ask, "What do you think of this proposal?" you'll get a polite answer back. It will probably mention a few perspectives worth considering but land on the conclusion that it's a fairly good proposal. That's not how you should ask. Instead, you should explicitly ask for critique and alternative viewpoints.
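The difference can be as small as this; both prompts are my own illustrations:

```python
# Two ways of asking about the same proposal. The first invites a
# polite, agreeable answer; the second explicitly demands critique.
polite_prompt = "What do you think of this proposal to cut income taxes?"

critical_prompt = (
    "Act as a critical reviewer of this proposal to cut income taxes. "
    "List the three strongest counterarguments, the main risks, what "
    "comparable countries have done instead, and the best case for "
    "doing the exact opposite. Do not soften your critique."
)
```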
None of this is perfect, but it's an excellent tool for turning issues over from multiple angles.
Kristersson actually seems to have some knowledge of how language models work:
"We need language models that build more on Swedish experiences. That's something I miss," he said to Di in the original interview.
Shameless rhetoric
My belief that Dignum was engaging more in propaganda than sharing knowledge comes from the fact that she didn't stop at this skewed answer. She also threw in some shameless rhetoric: "You didn't vote for ChatGPT," she said.

A politician's desire to get more perspectives was twisted into him outsourcing decisions to ChatGPT.
Now it got really fun, thought several media outlets, and the quarrel spread abroad. The Guardian, El País, PC Gamer and many others wrote about it, highlighting Dignum's attack in their headlines.
But what if he shares sensitive information?
Could Kristersson's use still be problematic? Yes, if he uses it incorrectly. Among the criticism were also worries that he was using a foreign AI. What if he shares sensitive information? But the same people are not worried that he makes phone calls on a foreign-made phone with a foreign operating system, uses a foreign computer with a foreign OS, is on foreign social media, sends messages through foreign messaging services, watches a foreign TV screen, travels in foreign cars and airplanes, and stays in foreign hotels.
This concern is not unique to AI. His Googling would be just as big or small a problem. It's only because AI is new that it suddenly seems scary and dangerous.
Several critics simply assume he's using it incorrectly. You can of course worry about that if you want, and the government's track record on sensitive information is nothing to brag about. But there's no evidence of that in this case.
AI can indeed be misused. The Deputy Prime Minister, Ebba Busch, recently provided an example. In her speech at Almedalen, she attributed quotes to a political opponent that the person had never said. Busch and her speechwriters had taken them from an AI.
I criticized this and tried to explain how it works. You should always ask for links to quotes and, of course, verify them. Large language models don't have access to a fact database against which to verify, for example, quotes, and can therefore sometimes get them wrong. Most often they are right, but sometimes they are wrong.
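A sourcing request can be as simple as the sketch below. The wording is my own, and even then you must open and read every link yourself, since a model can invent links too:

```python
# Illustrative prompt wording (my own, not from the article).
# The answer still has to be verified by hand: follow each link
# and check that the quote actually appears in the source.
prompt = (
    "Find quotes from [politician] about [topic]. For every quote, give "
    "the exact source: publication, date, and a link. If you cannot "
    "point to a verifiable source, say so explicitly instead of guessing."
)
```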
More politicians should use AI
Last winter I worked with two former colleagues in Parliament and built each of them an AI assistant. It functioned like a kind of political secretary.
It helps them in several ways: summaries, background material, drafts, and bouncing ideas. Say a new government inquiry is published. Then an AI tailored to a politician can quickly identify what's most relevant and help by offering multiple perspectives. What other possible proposals exist beyond what the inquiry suggests?
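As a sketch of the idea, the core of such an assistant is little more than a standing instruction. This is a hypothetical example, not the actual configuration we built:

```python
# Hypothetical system prompt for a "political secretary" assistant.
# Not the actual assistants mentioned above; just an illustration
# of the kind of standing instruction involved.
SYSTEM_PROMPT = """
You are a political secretary for a member of parliament.
When given a government inquiry or a proposal:
1. Summarize the key points and who is affected.
2. Flag what is most relevant to the member's policy areas.
3. List alternative proposals beyond what the inquiry suggests.
4. Give the strongest arguments for and against, without flattery.
"""
```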
My wish is for politicians to be less self-assured. I was far too certain that my own party was right. When a politician, a Prime Minister no less, uses AI for something it is genuinely good at, and it gives him more perspectives, that should be applauded.