Should Artificial General Intelligence Still Be Pursued Despite the Risks?

ODSC - Open Data Science
4 min read · May 17, 2023


There are three main types of artificial intelligence (AI). The first is artificial narrow intelligence (ANI), which is very limited in what it can do. Then, there’s artificial general intelligence (AGI), which can perform comparably to humans. Finally, there’s artificial superintelligence (ASI), which surpasses human capabilities.

Many AI researchers agree that AGI is becoming increasingly feasible. However, some worry about the associated risks. Should work in this area continue regardless of the potential dangers?

Now Is the Time for Preparedness

Even though we have yet to achieve AGI, some researchers say it could happen sooner than some people realize. In one case, Microsoft researchers noticed “sparks of artificial general intelligence” when performing early experiments with GPT-4, a large language model (LLM).

They clarified that GPT-4 could handle new and difficult tasks related to math, medicine, law, and more, all without special prompts. Additionally, they said GPT-4's performance was strikingly close to human-level. The researchers believed their experience showed an early, albeit incomplete, example of AGI.

However, one risk of many artificial intelligence applications is that you cannot necessarily work backward to determine how a system made a decision, a limitation often called the "black-box problem." In other cases, the details of why an AI algorithm reached a certain outcome are not publicly available because the companies that developed it consider the information proprietary. Critics warn such opaque algorithms could erode democracy, especially when they are used in applications that can change people's lives.

Improvements in AGI do not automatically mean bad things for society, but they could if people don’t act now to plan how to mitigate those risks. Some companies have collaborated to make AGI progress happen faster via hardware improvements. However, it’s also important that people work together to determine what they’ll do as AGI becomes an ever-closer reality.

For example, the Council of the European Union has adopted a position on the Artificial Intelligence Act that preserves safety and fundamental rights. It applies to any products on the market in areas the Council oversees. Taking stances like that one now is an excellent way to set parameters that reduce the risk of AGI getting out of control or being used for purposes that could harm people.

Understand the Most Likely Risks

Some people get so overwhelmed by the pace of technological progress that they start worrying about scenarios that aren’t likely to come true at all, or at least not anytime soon. However, instead of fixating on those possibilities, they should remember that technology often helps instead of hurts.

A high-tech solution used across an organization can create a single source of truth and eliminate data silos. Google has also said AI lets it map new areas up to 10 times faster than before, resulting in better user experiences.

However, technology is like almost anything else in that it comes with risks. The main thing people in the tech industry must do now is to learn about the most likely adverse outcomes and plan to avoid them. Then, they won’t be surprised if those things happen.

Sam Altman is a co-founder and the CEO of OpenAI, which developed ChatGPT. He has said that although his company limits the tool for safety reasons, others in the field won't take those precautions, and he believes there's only a limited amount of time to figure out how to react when that happens.

Altman acknowledged that people could use large language models like ChatGPT to create or spread misinformation, orchestrate cyberattacks, and more. It's impossible to stop people from willfully misusing an advanced tool, but using AGI safely means building safeguards into the system that reduce the chances of those things happening.

Problems could arise if people turn a blind eye to the potential risks. However, if they remain aware of them and behave proactively, artificial general intelligence is more likely to stay safe for users and society.

Proceed With Caution

Artificial general intelligence has risks, but many people are confident it will be beneficial, too, and the possible threats are not reason enough to stop pursuing it. One upside of artificial intelligence being in the mainstream news is that more individuals are aware of it and weigh both the negative and positive aspects of this progress. Many will also rightfully urge caution as the technology improves.

We still have time to take steps that make AGI as safe and effective as possible. Let's keep exploring both the opportunities and the safeguards, and make sure we don't waste that time.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
