OpenAI Releases AI Detection Tool to Curb ChatGPT Abuse

ODSC - Open Data Science
Feb 9, 2023

OpenAI has released an AI detection tool in a bid to curb ChatGPT abuse in academia. The new tool is designed to help teachers and professors detect whether students are using AI to complete assignments. It comes after months of worry from academics about the disruptive nature of AI in education, and weeks after a computer science student created his own AI detection tool. Launched on Tuesday, the classifier is one of the first attempts to curb AI abuse in education, though OpenAI cautions that it isn’t foolproof.

According to CNN, the new AI-detecting tool is powered by machine learning. It takes the text a user (in this case, a concerned educator) submits and categorizes it into one of five ranks, ranging from “likely generated by AI” to “very unlikely.” Like ChatGPT itself, the tool is meant to assist rather than decide: it gives educators some idea of a text’s origins, not a definitive verdict. As Lama Ahmad, policy research director at OpenAI, said of the new feature, its output should be “taken with a grain of salt.”
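OpenAI has not published the classifier’s internals, so the sketch below is purely illustrative: it assumes the underlying model returns a single probability that a passage is AI-written and simply buckets that score into five verdicts. The thresholds, function name, and most of the label wording are assumptions, not OpenAI’s actual implementation.

```python
# Minimal sketch of how a five-level verdict could be derived from a
# classifier's probability score. Thresholds and most label names are
# illustrative assumptions, not OpenAI's published internals.

def rank_text(ai_probability: float) -> str:
    """Map a model's 'probability the text is AI-written' to one of five ranks."""
    if ai_probability >= 0.90:
        return "likely generated by AI"
    if ai_probability >= 0.70:
        return "possibly generated by AI"
    if ai_probability >= 0.45:
        return "unclear"
    if ai_probability >= 0.20:
        return "unlikely generated by AI"
    return "very unlikely generated by AI"


if __name__ == "__main__":
    # A concerned educator pastes a student's essay; a (hypothetical)
    # upstream model returns a score, which is then bucketed for display.
    for score in (0.95, 0.55, 0.05):
        print(score, "->", rank_text(score))
```

The bucketed output, rather than a raw score, is what makes the tool readable for non-specialists, but it also explains why OpenAI stresses that the verdicts are signals rather than proof.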

Ahmad went on to state, “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times — much like using AI for any kind of assessment purposes.” Ahmad also pointed out that the tool is meant to complement educators, not replace them in the detection process: “We are emphasizing how important it is to keep a human in the loop … and that it’s just one data point among many others.”

Given the severity of the punishments associated with academic dishonesty, Ahmad also cautioned that the tool’s output shouldn’t be treated as certain; an accusation of plagiarism is not a light matter. “Teachers need to be really careful in how they include it in academic dishonesty decisions.” Jan Leike, a lead on the OpenAI alignment team, also spoke about why proving AI-generated plagiarism through ChatGPT will be difficult: most people don’t simply copy and paste directly from ChatGPT, which makes AI-generated work harder to identify. Leike added that the classifier will “be best at identifying text that is very similar to the kind of text that we’ve trained it on.”

CNN also reported that during a demo of the new feature, the classifier correctly labeled several pieces of literature. In one example, it took an excerpt from the book “Peter Pan” and identified it as “unlikely” to be the result of AI. In a company blog post, however, OpenAI stated that the feature still incorrectly labels human-written text as AI-generated about 5% of the time.
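To make that error rate concrete, the back-of-the-envelope calculation below works through what a 5% false-positive rate would mean for a single class. The class size and the assumption that every essay is human-written are illustrative, not figures from the article.

```python
# Back-of-the-envelope illustration of the reported 5% false-positive figure.
# Assumes a hypothetical batch of 200 entirely human-written essays.
false_positive_rate = 0.05
essays = 200

expected_false_flags = false_positive_rate * essays
print(f"Essays wrongly flagged as AI-written: {expected_false_flags:.0f} of {essays}")
# Roughly 10 honest students could be flagged, which is why OpenAI warns
# against treating the tool's output as proof of misconduct on its own.
```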

Finally, Lama Ahmad described OpenAI’s role in the overall conversation as being “an educator to the educators,” ensuring that they aren’t left behind by fast-evolving technology. OpenAI wants teachers to be “aware about the technologies and what they can be used for and what they should not be used for… That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.

