πŸ€– No, an AI drone has not refused to obey orders and killed a human

It was just a thought experiment on what could happen. It hasn't even happened in a simulation.

Mathias Sundin

Over the past 24 hours, news media have been filled with reports that a military drone controlled by an AI refused to obey orders and killed the person giving them:

"During a military exercise in the US, an AI-controlled drone is said to have ignored orders and instead attacked and killed its own controller."

This is what one news outlet wrote:

"The AI-programmed drone decided on its own to do as it wished, instead of taking orders from a human. ...
'When the AI system identified a threat, the human controller could sometimes order the system not to kill the threat. But the system is designed to score points when it kills its threat.
So what did the system do? It killed its controller. It killed its controller because that person was preventing the system from achieving its goal.'
After the incident, the drone is said to have been programmed with a specific directive: Do not kill the controller who controls you. But then something else happened.
'It destroyed the communication tower that the operator uses to communicate with the drone,' says Tucker 'Cinco' Hamilton."

Scary, right?

It's just that it didn't happen.

Most media outlets reporting on the incident write that it happened in a simulation.

But that's not true either.

It was just a thought experiment about what could happen. It hasn't even happened in a simulation.

The original story came from Colonel Tucker "Cinco" Hamilton, speaking at a conference.

The US Air Force has issued a clarification:

β€œThe Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The doomsday atmosphere is to blame

A big reason this story gained traction is the exaggerated, ramped-up doomsday rhetoric about AI that we have heard in recent months.

With the news full of claims that AI might exterminate humanity, few remained critical of this "news." The media jumped on it, and people shared it wildly on social media. "It's starting now!"

Doomsday rhetoric is dangerous. People get scared. We become uncritical and agitated. We don't think clearly. Instincts take over.

But theories about the extermination of humanity are just that: theories. There is no actual evidence that such a thing will happen, and certainly none that the probability of it happening is high.

The only thing we have to fear is fear itself.

Mathias Sundin
The Angry Optimist