πŸ€– Kevin Kelly: Don't regulate AI based on fear of human extinction

"Just being really smart doesn't mean you can trump and overcome the will to survive of eight billion people."

Mathias Sundin


In an interview with Warp News about his new book, Excellent Advice for Living, Kevin Kelly also discussed the current debate about the risks of AI.

What is your take on the risks of AI and AI alignment?

"There are different levels of alignment. Aligning the AI's values is legitimate. Making them better than we are, on average, is a worthy thing to work on."

But what about AI so powerful it could hurt or kill us?

"The risk is greater than zero, but so low it shouldn't form our policy very much."

"Recently, they compared it to a pandemic or climate change. I think it's closer to an asteroid impact. An asteroid impact would be devastating, but there is very little chance of that happening. But because it would be disastrous, we should have people in a program trying to spot them, deflect them, and figure out what to do. But we aren't making policy decisions based on the fact that we might have an asteroid impact. We are not making asteroid-proof buildings."

"The same thing with existential risk from AI. We should have some people working on it, but we shouldn't make policies and regulate based on that low probability."

Why do you think the risk is so low?

"The reason I differ from some of the people, like Elon Musk and Eliezer Yudkowsky, is that they tend to overestimate the role of intelligence, in the role of making things happen. They are guys who like to think, and they think, that thinking is the most important thing. In order to make things happen in the world, intelligence is required but it's not the major thing. It's not the smartest people who are making things happen in the world. It is not necessarily the smartest people in the room who make things go forward. Intelligence alone is insufficient to make change in the world. You need to have persistence, you need to have empathy, ingenuity, and resourcefulness. And all kinds of other things."

Kevin Kelly adds another aspect to the question of AI and existential risk: the will to survive trumps the will to kill.

"What we know from nature, is that the will to survive always trumps the will of predation. Most predators fail in their attempts to kill the prey, because the will to survive is much greater. The will of eight, nine billion people to survive is incredibly stronger than the will to eliminate them. And that will is independent of intelligence. "

"Just being really smart doesn't mean you can trump and overcome the will to survive of eight billion people."

Watch the interview on YouTube

Watch the entire interview with Kevin Kelly about his new book, Excellent Advice for Living, in which he also discusses why optimists create the future, why now is the best time to make something, and why you should focus on the biggest opportunities, not the biggest problems.

Also, don't miss Kevin Kelly's essay, The Case for Optimism, here on Warp News.

πŸ’‘ Kevin Kelly: The Case for Optimism
Kevin Kelly is a co-founder of Wired magazine and the author of several books, among them The Inevitable. For Warp News, he presents his case for optimism.

Mathias Sundin
The Angry Optimist