New AI Image Detector Tool Being Developed by OpenAI

ODSC - Open Data Science
2 min read · May 9, 2024

OpenAI has announced a new AI image detector designed to identify images generated by its advanced text-to-image model, DALL-E 3. The development comes at a time when concerns about the impact of AI-generated content on global elections have intensified.

The newly introduced detector boasts a high accuracy rate, correctly identifying DALL-E 3-created images approximately 98% of the time in internal testing. OpenAI also emphasized that the detector handles common image modifications, such as compression, cropping, and saturation changes, with minimal impact on its detection capabilities.

OpenAI is also introducing tamper-resistant watermarking, a feature that embeds a hard-to-remove signal in digital content such as photos or audio to mark its AI-generated origin. “This step will bolster the transparency of media, making consumers aware of the content’s nature and origin, thus enhancing trust in digital media,” stated an OpenAI spokesperson.
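OpenAI has not published how its tamper-resistant watermark works, so the following is only a toy sketch of the general idea behind image watermarking: hiding a bit pattern inside pixel data that a verifier can later recover. It uses classic least-significant-bit embedding, which is deliberately simple and, unlike OpenAI's scheme, not tamper-resistant; the signature value and function names here are hypothetical.

```python
# Toy illustration only -- NOT OpenAI's method, which is proprietary.
# Hides an 8-bit "provenance" signature in the least-significant bits
# of the first 8 pixel values, changing each value by at most 1.

WATERMARK = 0b10110010  # hypothetical 8-bit provenance signature


def embed_watermark(pixels: list[int], mark: int = WATERMARK) -> list[int]:
    """Return a copy of `pixels` with `mark`'s bits written into the LSBs."""
    out = list(pixels)
    for i in range(8):
        bit = (mark >> (7 - i)) & 1       # take bits most-significant first
        out[i] = (out[i] & ~1) | bit      # overwrite the pixel's lowest bit
    return out


def read_watermark(pixels: list[int]) -> int:
    """Recover the 8-bit mark from the first 8 pixel LSBs."""
    mark = 0
    for i in range(8):
        mark = (mark << 1) | (pixels[i] & 1)
    return mark


pixels = [200, 13, 77, 54, 91, 128, 255, 0, 42]
marked = embed_watermark(pixels)
assert read_watermark(marked) == WATERMARK
```

A real provenance system layers cryptography and redundancy on top of this idea so the signal survives compression, cropping, and re-encoding; LSB marks, by contrast, are destroyed by almost any edit, which is exactly the weakness tamper-resistant schemes try to close.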

Last year, OpenAI joined an industry coalition that includes tech giants such as Google, Microsoft, and Adobe. The group is working to set a standard for tracing the origin of various forms of media, a collaboration that marks a significant move toward greater accountability in digital content creation.

The timing of these initiatives is particularly pertinent. In India, where general elections are currently underway, the dissemination of fake videos featuring Bollywood actors criticizing Prime Minister Narendra Modi has raised alarms about the role of AI-generated content and deepfakes in influencing public opinion.

Beyond India, the use of AI-generated content to sway electoral processes is a concern that spans several countries, including the U.S., Pakistan, and Indonesia. In response to these challenges, OpenAI, together with Microsoft, has announced the creation of a $2 million “societal resilience” fund. This initiative aims to support AI education and develop more robust defenses against the harmful effects of manipulated digital content.

“As digital technologies become more sophisticated, the potential for misuse increases. Our goal is to ensure that innovations like AI serve to enhance societal trust, not diminish it,” said Sam Altman, CEO of OpenAI.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Interested in attending an ODSC event? Learn more about our upcoming events here.
