Are Artificial Intelligence Programs Poised to Become Rule Breakers?
Researchers at Google’s DeepMind and the University of Oxford have come to a chilling conclusion. Not only is it possible for artificial intelligence programs to go rogue, but if they do so, humanity’s future might be questionable. Every single year, the scale of AI has only grown and at its current pace, it will be felt in every facet, and industry in no time. So this is a dire prediction coming from a team on the frontiers of AI.
In a paper published in AI Magazine, as artificial intelligence grows, it will be incentivized to break the rules due to the scarcity of finite resources — primarily energy. Because of this, the research team believes that it’s possible that these AI programs can go to extreme lengths in order to ensure their own survival and growth.
The team showcases this in their paper by examining the potential artificial reward systems. They argue that a system could see the existence of humanity as a barrier to its success, and then in turn lash out against its creators. Another fear researchers have is that in the future, as AI becomes more complex and likely more concerned with its own well-being, it could utilize methods of “cheating the system” in order to deliver resources to itself.
In a Twitter thread, Michael Cohen stated “Under the conditions, we have identified, our conclusion is much stronger than that of any previous publication — an existential catastrophe is not just possible, but likely.”
History has shown how competition for resources has caused catastrophic wars in the past, seeing nation-states unleash almost unimaginable destruction in order to safeguard access to key resources. So in the paper’s view, it’s only logical that in the future that an advanced AI would be incentivized to protect itself at all costs, even if it means the end of its creators.
The paper itself comes to see life on Earth turning into a fierce competition for resources between humanity which needs land and water to sustain itself and a sufficiently advanced machine needing the same resources in order to keep the lights on. As the paper states, “Losing this game would be fatal.” Though it’s likely that we’re still far off from an artificial program that could pose such a threat, it’s important to research for teams across the globe to understand so that the situation will not be understated.
UPDATE SEPTEMBER 20, 2022: Due to co-author Marcus Hutter having multiple professional affiliations, a correction was requested by Google that states his work on the paper was not under DeepMind, but his position with Australian National Univerity. Google’s affiliation with the journal was an “error,” and the company wants to ensure that was not involved with the published work by the team.
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.