OpenAI CEO Sam Altman Warns of Potential Risks in Unregulated AI Development

ODSC - Open Data Science
3 min read · Mar 31, 2023


Last week, in an interview with ABC News, OpenAI CEO Sam Altman expressed concerns about artificial intelligence technology falling into the hands of bad actors. He warned of the risks posed by other AI developers who may not put safety limits on their tools. OpenAI, the company behind ChatGPT and GPT-4, has helped usher in an AI revolution, both in data science and in the public's imagination at large, thanks in part to the chatbot's ease of use.

Although OpenAI has the advantage of Microsoft as a major investor, Altman worries that competitors may not be as concerned with safety. Other companies are racing to offer ChatGPT-like tools, and the field is maturing quickly. OpenAI's decision to reveal little about GPT-4's inner workings has raised questions about the name "OpenAI," but the company maintains that safety is its priority.

Sam Altman believes that society has a limited amount of time to figure out how to respond to bad actors who may use AI technology to spread disinformation or launch offensive cyberattacks. To address these concerns, OpenAI shared a "system card" document that outlines how it tested GPT-4 for dangerous outputs, such as instructions for producing harmful substances from basic chemicals and kitchen supplies, and how the team fixed those issues before the product's launch.

The OpenAI CEO has been forthcoming about the dangers posed by AI technology, despite leading a company that sells AI tools. OpenAI was established as a nonprofit in 2015 with a focus on the safe and transparent development of AI. It switched to a hybrid "capped-profit" model in 2019, with Microsoft becoming a major investor. Based on these comments, it seems clear that concerns about ethics and responsible AI are on the minds of many within the organization.

During the interview, he also warned the public about the dangers of AI technology even as he presses ahead with OpenAI's work. He believes that regulation will be critical to ensuring the safe development of AI and that society needs time to understand how people want to use these tools and how the two can co-evolve. This was echoed earlier this year when a member of the U.S. House of Representatives delivered a speech generated entirely by ChatGPT.

The hope was to draw attention to the need for the legislative body to become educated on AI and its possible societal effects. So far, the EU, the United States, and China have all made moves to either regulate AI or explore how it might be regulated. This comes as the technology has seen an explosion in scale, entering almost every industry and most facets of human life.

Watch the full interview here:

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.

