Start-Up DuckDuckGoose is Using Artificial Intelligence to Spot Deepfakes
Concern about the effectiveness of AI-generated deepfakes has been growing for some time, so much so that the government of China recently laid out guidelines requiring deepfakes to carry watermarks that identify their creators. Now, a new start-up called DuckDuckGoose is using AI to identify AI-generated deepfakes, and if it works, it could lay the groundwork for easing fears about deepfake technology.
In a sit-down interview with Innovationorigins.com, DuckDuckGoose co-founder Mark Evenblij spoke about the development of the technology, how it works, and how the AI detector justifies its decisions: “Our detector is an AI system. If you don’t trust certain visual content, you can send a picture to our system. We have trained the system on thousands of videos so that it can tell the difference between real and fake. The system highlights the part of the video that may not be genuine. But that’s not all: we also pay close attention to why the system makes certain choices, so it has to be able to justify its conclusions. All of that is then reviewed by human experts, who pinpoint the problem and figure out why it is there. This is reported back to our customers.”
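To make the idea concrete, here is a minimal sketch of the kind of pipeline Evenblij describes: a convolutional classifier scores an image (or video frame) as real or fake, and a Grad-CAM-style heatmap highlights the regions that drove the decision. DuckDuckGoose has not published its architecture, so the ResNet backbone, the two-class head, and the checkpoint name below are hypothetical stand-ins.

```python
# Minimal sketch: CNN classifier scores an image as real vs. fake, and a
# Grad-CAM heatmap "highlights" the regions behind the decision. The model,
# head, and weights file are assumptions, not DuckDuckGoose's actual system.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # classes: [real, fake]
# model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical fine-tuned weights
model.eval()

acts = {}

def save_activation(module, inputs, output):
    # Keep the last conv block's feature maps and their gradient for Grad-CAM.
    output.retain_grad()
    acts["v"] = output

model.layer4.register_forward_hook(save_activation)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def score_and_explain(path):
    """Return P(fake) and a 224x224 saliency map for one image or video frame."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    logits = model(x)
    fake_prob = F.softmax(logits, dim=1)[0, 1].item()
    model.zero_grad()
    logits[0, 1].backward()                                  # gradient of the "fake" logit
    weights = acts["v"].grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
    heatmap = (cam / (cam.max() + 1e-8)).squeeze().detach()
    return fake_prob, heatmap

# prob, heatmap = score_and_explain("suspect_frame.jpg")
# print(f"P(fake) = {prob:.2f}")  # heatmap marks the regions flagged as suspicious
```

In a workflow like the one Evenblij outlines, a saliency map of this kind would be what the human experts review in order to pinpoint the problem before reporting back to customers.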
During the interview, Evenblij also spoke about another growing problem tied to deepfake technology: fraud. According to Cybercrime Magazine, cybercrime will cost the world $10.5 trillion annually by 2025. Since the pandemic, more and more financial institutions have digitized their offerings, and deepfake technology can circumvent even robust identity-verification safeguards. As Evenblij put it, “For instance, since the coronavirus pandemic, more and more banks want you to be able to do everything digitally, but then you are required to identify yourself digitally as well… So that can be exploited by deepfakes and that’s how financial fraud can eventually be committed. That’s what we want to prevent.”
This issue will only grow as more of our lives move into the digital world. Deepfakes not only endanger the average person; they are also getting cheaper to produce. Yesterday’s problem was the phishing scam, in which fraudsters pose as family members or government agencies to target elderly citizens. Tomorrow’s could easily be a deepfake scam that mimics the voices and faces of your own family, making it seem as though they are the ones reaching out.
Unfortunately, according to Evenblij, one of the biggest obstacles is convincing prospective clients of the severity of the problem: “Because deepfake fraud is a relatively ‘new’ problem, sometimes it’s still a challenge to convince prospective customers of the importance of our tool.” The complete interview is wide-ranging and worth reading for anyone interested in AI-powered tools used in cybersecurity. If you’re interested in this subject and responsible AI, ODSC East has the track for you. Check it out and register today!
Originally posted on OpenDataScience.com
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.