ODSC West 2019 Keynote Dawn Song on AI and Security

ODSC - Open Data Science
3 min read · Nov 26, 2019


The stakes are higher than ever now for AI and security.

Following Sepideh Seifzadeh’s keynote on managing the AI lifecycle, Dawn Song of UC Berkeley’s BAIR Lab took the stage to discuss an important but often overlooked component of the AI lifecycle: security, specifically in deep learning, and how the stakes rise as AI systems become more capable.

Security concerns make the news seemingly every day, often in the form of a hack affecting our personal information. Now, though, both the attackers and the AI systems themselves are becoming significantly more sophisticated, and data scientists are tasked with finding new ways to mitigate, prevent, and remedy these threats. “Our current framework is insufficient for protecting data rights and privacy,” she said.

Dr. Song brought up a fascinating example of a modern technology and its susceptibility to being hacked. Self-driving cars are all but an inevitability in the near future, and while they will be proof of how far we’ve come with technology, they will also bring with them new avenues for attack and interference.

Computer vision models are complex and advanced, but they generally rely on training sets that are unambiguous; when teaching a model to recognize a dog, you provide pictures that are clearly of a dog, not of a dog in a costume or a cartoon dog. In the case of self-driving cars, Dr. Song showed how a car driving past a STOP sign correctly recognizes a clean sign, but misinterprets a sign with markings on it as a 45 MPH speed limit sign.
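To make the mechanics concrete, here is a minimal sketch (in PyTorch) of the kind of input perturbation behind results like this. Everything in it is a stand-in invented for illustration: the tiny untrained classifier, the random “sign” image, and the epsilon budget. It is not the physical sticker attack from the keynote; it only shows that a single gradient step on the input, kept small enough to be hard to see, can push a model toward a different answer.

```python
# Minimal FGSM-style sketch. The model, image, and class indices are
# hypothetical stand-ins; this is not the physical sticker attack from
# the keynote, just the underlying "small perturbation" idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinySignClassifier(nn.Module):
    """Toy classifier over three made-up classes: stop, speed-45, yield."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

model = TinySignClassifier().eval()
image = torch.rand(1, 3, 32, 32)   # stand-in for a photo of a stop sign
label = torch.tensor([0])          # pretend class 0 is "stop"

# FGSM: one gradient step on the *input* (not the weights), capped by epsilon.
image.requires_grad_(True)
F.cross_entropy(model(image), label).backward()
epsilon = 0.05                     # small enough to be hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# With this untrained toy the prediction may or may not flip; against a
# trained classifier, perturbations of this scale are known to change outputs.
with torch.no_grad():
    print("clean prediction:      ", model(image).argmax(1).item())
    print("adversarial prediction:", model(adversarial).argmax(1).item())
    print("max pixel change:      ", (adversarial - image).abs().max().item())
```

The key design point is that the attacker optimizes over the input rather than the model, which is why even a deployed, frozen model remains exposed.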

Granted, this example doesn’t represent a hack, but it shows how the landscape is changing. If a small physical change can completely alter what a CV model sees, how easy would it be for an attacker to tamper with a training set?
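One way to answer that question, at least in miniature: an attacker who can touch even a fraction of the training labels never needs to touch the model at all. The toy dataset, linear model, and 20% flip rate below are all hypothetical, and how much accuracy actually degrades depends on the task and on how the flipped points are chosen.

```python
# Minimal sketch of training-set (label-flipping) poisoning.
# The dataset and model are hypothetical stand-ins for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 2-class dataset: 200 points, 2 features.
features = torch.randn(200, 2)
labels = (features[:, 0] > 0).long()       # true rule: sign of the first feature

def train(x, y, epochs: int = 200):
    model = nn.Linear(2, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

clean_model = train(features, labels)

# The "attack": flip the labels of 20% of the training points.
poisoned = labels.clone()
flip = torch.randperm(200)[:40]
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = train(features, poisoned)

print("accuracy trained on clean labels:   ", accuracy(clean_model, features, labels))
print("accuracy trained on poisoned labels:", accuracy(poisoned_model, features, labels))
```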

This is why the stakes are higher. Attackers can now target the integrity of the AI itself, meaning a model can be forced to produce incorrect results or even be turned against other AI systems. That, in turn, opens the door for attackers to access and leak confidential information such as credit card numbers, social security numbers, and so on.

To illustrate how easily some deep learning systems can be infiltrated and altered, Dr. Song brought up the example of using deep reinforcement learning to play the video game Pong. When the DRL model works as it should, it will likely never lose a game of Pong. But when an attacker perturbs just a tiny patch of pixels, that’s enough to throw off the algorithm and cause it to lose.
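A rough sketch of what such an observation-space attack can look like follows. The keynote did not share an implementation, so the policy network, the random frame, and the 3x3 pixel patch here are invented for illustration; the sketch only shows the mechanics of nudging a handful of pixels in the direction that makes the agent’s preferred action less likely.

```python
# Minimal sketch of an observation-space attack on a toy Pong policy.
# The untrained policy and random frame are hypothetical stand-ins for
# the trained DRL agent shown in the keynote.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinyPongPolicy(nn.Module):
    """Maps an 84x84 grayscale frame to 3 actions: stay, up, down."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=8, stride=4)
        self.fc = nn.Linear(4 * 20 * 20, 3)

    def forward(self, frame):
        return self.fc(F.relu(self.conv(frame)).flatten(1))  # action logits

policy = TinyPongPolicy().eval()
frame = torch.rand(1, 1, 84, 84)           # stand-in game frame

# Attack: take a gradient step on the frame that lowers the logit of the
# action the agent would normally pick, then keep only a tiny patch of it.
frame.requires_grad_(True)
logits = policy(frame)
best_action = logits.argmax(1).item()
loss = -logits[0, best_action]             # push the preferred action down
loss.backward()

perturbation = 0.1 * frame.grad.sign()
mask = torch.zeros_like(frame)
mask[..., :3, :3] = 1.0                    # only a 3x3 patch of pixels changes
attacked = (frame + mask * perturbation).clamp(0, 1).detach()

# With this toy, untrained policy the chosen action may or may not flip;
# the keynote's point is that against a real trained agent, changes this
# small are enough to make it lose.
with torch.no_grad():
    print("action before attack:", policy(frame).argmax(1).item())
    print("action after attack: ", policy(attacked).argmax(1).item())
```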

It may sound like I’m saying that AI is unsophisticated and easily altered, but that’s not the moral of the story. What these vulnerabilities show is that, when things work well, we have advanced DL algorithms that can complete impressive tasks. But when an attacker comes in with poisoned data or another method of attack, we know it’s time for stronger security measures. We have the technology; now it’s time to bolster our defenses against invasive agents. This is what Dr. Song and her team are working on: using DL to create sophisticated countermeasures against attackers and keep these powerful algorithms from being tampered with.

To learn more about her research, check out her various resources here and see how she and her team are protecting the future of AI.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.

Written by ODSC - Open Data Science

Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
