Is AI Destined to Go Out of Control?

ODSC - Open Data Science
3 min read · Oct 21, 2022


The idea that humanity will lose control of artificial intelligence has been a science fiction staple, from the popular Terminator series to Netflix’s Extinction. So it’s no wonder researchers want to spend time on the subject and its ramifications for mankind. In 2021, the verdict came in: researchers writing in the Journal of Artificial Intelligence Research (JAIR) asked whether humanity would be able to control a high-level super-intelligent AI, and the answer was no.

In their paper, what drives this outcome is the fact that controlling an AI of such power is well beyond the current abilities and comprehension of researchers. They said that even learning how to do so would require a simulation involving a super-intelligent AI, which could then be analyzed and, if possible, controlled.

Popular rules such as Isaac Asimov’s Three Laws of Robotics, which clearly state that harming humans is outside the purview of an AI, don’t account for the complex situations an artificial intelligence program might have to contend with and act on. Once a program surpasses its own programmers, those rules likely become meaningless.

As the researchers wrote, “A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’…This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

The researchers reached this conclusion via the Halting Problem, which Alan Turing put forward back in 1936. It asks whether we can determine if a given computer program will halt with an answer or loop forever. Using mathematics, Turing showed that while we can work out the outcome for certain specific programs, no general method can do so for every possible program. This means that if a super-intelligent AI were to hold every program in its memory, predicting its outcomes would be impossible.
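To get a feel for why no such general method can exist, here is a minimal Python sketch of Turing’s contradiction. The `halts` oracle and the `paradox` function are hypothetical names chosen for illustration, not anything from the paper:

```python
# A sketch of Turing's diagonalization argument, assuming a
# hypothetical oracle halts(program, program_input) that returns
# True if `program` halts on `program_input` and False otherwise.

def halts(program, program_input):
    """Hypothetical decider -- Turing proved no total version of
    this function can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts the program
    # does when fed its own source.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    else:
        return        # oracle says "loops" -> halt immediately

# Feeding paradox to itself forces a contradiction:
# if halts(paradox, paradox) is True, paradox loops forever;
# if it is False, paradox halts. Either way the oracle is wrong,
# so no general halting decider can exist.
```

The researchers’ argument extends this same logic: a containment algorithm that could reliably predict whether a super-intelligent AI will harm humans would have to solve an equally undecidable problem.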

Unlike with humans, there are limits to teaching ethics to an AI once it reaches a certain level of power, because it would be nearly impossible to guarantee that any algorithm always follows them. So the researchers considered limiting the power of AI instead. But there is a catch: doing so would also limit the reach of artificial intelligence.

This brings us back to our original question, which Manuel Cebrian of the Max Planck Institute for Human Development poses: “A super-intelligent machine that controls the world sounds like science fiction…But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
