How Do AI Content Detectors Work?

ODSC - Open Data Science
4 min read · Dec 4, 2023

AI content is everywhere. ChatGPT’s explosion in popularity has created a surge in AI-generated blogs, articles, emails, resumes, and academic papers. Naturally, AI content detectors have grown in response.

Many schools and publications have used AI-driven plagiarism checkers for years. Now that it’s easier for people to try to pass off AI-created content as their own, these tools have also evolved to flag AI-generated text. If you haven’t encountered these detectors yourself, you’ve likely heard of them. But how exactly do they work?

How AI Content Detectors Work

Human experts can sometimes tell the difference between AI and human-written content, but not consistently. One survey found that over 63% of people can’t accurately identify text written by ChatGPT. The solution? Fight fire with fire — or, more specifically, AI with AI.

AI content detectors use machine learning models to look for patterns common in AI-generated text. To build one, data scientists train a model on both human-written and AI-created content. By analyzing each category, the model learns the characteristic differences between them and can then spot those subtle differences in new text to judge whether it’s original.
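To make this concrete, here’s a minimal sketch of that training step in Python with scikit-learn. The two-item corpus and the model choice are placeholders for illustration; real detectors train far more sophisticated models on much larger labeled datasets, but the principle is the same: fit a classifier on examples of both kinds of writing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labeled corpus: 0 = human-written, 1 = AI-generated.
# A real detector would train on many thousands of examples.
texts = ["a sample of human writing", "a sample of AI writing"]
labels = [0, 1]

# Simple detector: word n-gram features feeding a linear classifier
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
    LogisticRegression(),
)
detector.fit(texts, labels)

# Estimated probability that a new passage is AI-generated
print(detector.predict_proba(["some new passage to check"])[0][1])
```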

Differences in AI and human-written content fall into two main categories — perplexity and burstiness. AI content detectors focus on these characteristics to make their decisions.

Perplexity

Perplexity is a measure of a text’s predictability. AI models don’t produce original thought; they work by repeating patterns and trends from their training data. As a result, their word choice is usually more predictable than a human’s.

Natural language processing (NLP), the set of techniques AI uses to work with language, estimates which words are most likely to appear in which order. That helps a model produce readable, grammatically correct sentences, but it also means its word choice doesn’t vary much.

If an AI content detector can accurately predict a text’s word choice and order, the text has low perplexity, suggesting it’s AI-generated. If it can’t, the text has high perplexity and is more likely to be human-written.
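In practice, a detector scores perplexity with a language model of its own: the model measures how surprised it is by each successive word, and the average surprise is exponentiated into a single number. Here’s a minimal sketch using the openly available GPT-2 model from Hugging Face’s transformers library. Commercial detectors use their own scoring models and thresholds; GPT-2 here is just a stand-in.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower values mean the model found the text more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the tokens as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()  # perplexity = e^loss

print(perplexity("The quick brown fox jumps over the lazy dog."))
```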

Burstiness

While perplexity looks at word choice, burstiness focuses on sentence structure. Because NLP operates on patterns and predictability, generative AI favors simple sentence structures and average lengths. Human writing, by contrast, has higher burstiness — more variation in sentence length and structure.
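There is no single standard burstiness formula, but a common proxy is how much sentence lengths vary. The sketch below uses the coefficient of variation of sentence length; real detectors look at richer structural features, so treat this as illustrative only.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough burstiness proxy: relative variation in sentence length."""
    # Naive sentence split on terminal punctuation
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: std dev relative to mean length.
    # Uniform sentence lengths (typical of AI text) score near 0.
    return statistics.stdev(lengths) / statistics.mean(lengths)
```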

If AI content detectors find both low burstiness and low perplexity, they’ll confidently mark the text as AI-generated. Low burstiness with high perplexity, or vice versa, may still trigger an AI warning, depending on how pronounced the imbalance is and on the detector in question.
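Putting the two signals together, the decision logic might look like the toy rule below, which assumes the perplexity and burstiness helpers sketched above are in scope. The cutoff values are invented for illustration; every detector tunes its own thresholds on its own training data.

```python
def classify(text: str) -> str:
    """Toy decision rule combining both signals."""
    low_perplexity = perplexity(text) < 40.0  # hypothetical cutoff
    low_burstiness = burstiness(text) < 0.4   # hypothetical cutoff
    if low_perplexity and low_burstiness:
        return "likely AI-generated"
    if low_perplexity or low_burstiness:
        return "possibly AI-generated"
    return "likely human-written"
```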

How Accurate Are AI Content Detectors?

While the way AI content detectors work seems highly precise, they’re not as accurate as you may expect. OpenAI, the company behind ChatGPT, has pointed out that AI detectors produce false positives, especially on text by people writing in their second language.

Repetitive sentence structure and predictable word choice may be common in AI-generated text, but plenty of human writers share those tendencies. Top-tier writers may use more varied sentences and distinctive word choices, but many people don’t. Detection models also err on the side of caution, making these false positives even more likely.

Perplexity and burstiness don’t always catch AI content, either. As generative AI improves, it’s moving past these limitations, and users can tweak AI content to make it sound more natural. Even the best AI detectors fail to exceed 80% accuracy, and most can’t reach 70%.

Why Is AI Content Detection Important?

Despite these shortcomings, it’s becoming increasingly important to detect AI content. The issue goes beyond people cheating in school or taking shortcuts at work. Cybercriminals already use ChatGPT to craft phishing emails, so better detection tools could catch attacks that human readers might miss.

AI-generated content also has serious plagiarism implications. Because machine learning can only reword and summarize existing content, its output’s originality is questionable at best. In many cases, it also trains on creators’ work without their knowledge or permission. Consequently, generative AI usage in academic or professional circles could lead to rampant copyright infringement.

Thankfully, AI detection tools are improving. Many developers are now working on “watermarks” for generative AI models that humans can’t see, but other AI systems can detect. Researchers found this practice can reveal AI content with near certainty in early tests. If this technology becomes standard, it will make detection much easier and more reliable.
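To give a sense of how those watermarks work, here’s a hedged sketch of the detection statistic from one published approach, the “green list” scheme of Kirchenbauer et al. (2023). The generator pseudorandomly splits the vocabulary and biases output toward green-listed tokens; a detector that knows the seed simply counts green tokens and tests whether they appear more often than chance. The generation side is omitted here.

```python
import math

def watermark_z_score(green_count: int, total_tokens: int,
                      gamma: float = 0.5) -> float:
    """z-score for observing `green_count` green tokens out of
    `total_tokens` when a fraction `gamma` is expected by chance."""
    expected = gamma * total_tokens
    variance = total_tokens * gamma * (1 - gamma)
    return (green_count - expected) / math.sqrt(variance)

# 380 green tokens out of 500 is more than 11 standard deviations
# above chance at gamma=0.5: overwhelming evidence of a watermark.
print(watermark_z_score(380, 500))
```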

As AI Grows, So Will AI Content Detectors

AI content detection may not be as accurate as you’d hope today, but it’s still impressive. It’ll also keep improving as technology advances and new best practices emerge. In the meantime, it’s important to keep its shortcomings in mind.

AI-generated content will continue growing from here, and AI detection will grow alongside it. While not perfect, these tools are a critical part of protecting people’s security and intellectual property.

Originally posted on OpenDataScience.com

