China Targets AI-Generated Media With New Watermark Requirement

ODSC - Open Data Science
3 min read · Dec 27, 2022


China has taken its first steps toward ensuring that AI-generated media is distinguishable from the real thing. According to a report from The Register, China’s Cyberspace Administration has issued new regulations prohibiting the creation of AI-generated media without clear distinguishing marks and labels. The rules arrive at the close of 2022, a year in which AI-generated art, music, and other media took the internet by storm. The Cyberspace Administration, roughly analogous to the FCC in the United States, is tasked with regulating and overseeing the internet, though censorship is also part of its purview.

The new regulations aim to better oversee what the agency calls “deep synthesis” technology: think deepfakes, which have become quite popular online. The agency’s official website outlined its reasons for issuing the regulation, taking aim at the recently popular AI-generated images, text, video, and other media. Below is a translation via Google Translate:

“In recent years, deep synthesis technology has developed rapidly. While serving user needs and improving user experience, it has also been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputations and honor, to counterfeit others’ identities, to commit fraud, and so on, which disrupts the order of communication and social order, damages the legitimate rights and interests of the people, and endangers national security and social stability.

The introduction of the “Regulations” addresses the need to prevent and resolve security risks, as well as the need to promote the healthy development of deep synthesis services and improve the level of supervision capabilities.”

This is where distinguishable markings, such as watermarks, come into play. Under the new regulations, AI-generated media must undergo a government security assessment and, if approved, must carry obvious markings that make it clear the content is AI-generated. Outside observers may not fully share the Chinese government’s concern, but examples of deepfakes online show that the technology, though interesting and novel, is becoming so realistic that it is difficult, and at times impossible, to distinguish from real video.
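The regulations do not prescribe any particular implementation, but for readers curious what a visible marking could look like in practice, here is a minimal sketch using the Pillow imaging library. The file names and label text are hypothetical, purely for illustration.

```python
# Minimal illustration (not the regulation's specified method) of stamping a
# visible "AI-generated" label onto an image with Pillow.
from PIL import Image, ImageDraw, ImageFont

def add_visible_label(in_path: str, out_path: str, label: str = "AI-generated") -> None:
    """Overlay a semi-transparent text label near the image's bottom-left corner."""
    image = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    margin = 10
    x, y = margin, image.height - margin - 20  # rough placement above the bottom edge
    # Dark backing box so the label stays readable on light images
    draw.rectangle([x - 4, y - 4, x + 150, y + 18], fill=(0, 0, 0, 160))
    draw.text((x, y), label, fill=(255, 255, 255, 255), font=font)

    Image.alpha_composite(image, overlay).convert("RGB").save(out_path)

# Hypothetical file names for demonstration
add_visible_label("generated_art.png", "generated_art_labeled.png")
```

A visible overlay like this is only one approach; invisible watermarks embedded in the pixel data or in file metadata are another, and the rules leave room for either.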

One example from last year is a widely shared deepfake of acting legend Morgan Freeman.

But it’s not just watermarks and a security assessment. Companies that provide these services, along with users who generate content, must register accounts under their real names. The express purpose, of course, is to create a trail of who made what. China is clearly quite concerned about the future exploitative possibilities of AI-generated content. Fraud, identity theft, and even the possibility of faking historical speeches have all been on the minds of AI professionals.

Curious about this subject and responsible AI? Follow this link to learn more.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
