
Adversarial Attacks on Deep Neural Networks

ODSC - Open Data Science
5 min read · Jul 12, 2019


Our deep neural networks are powerful machines, but what we don’t understand can hurt us. As sophisticated as they are, they’re highly vulnerable to small attacks that can radically change their outputs. As we go deeper into the capabilities of our networks, we must examine how these networks really work to build more robust security.

At ODSC East 2019, Sihem Romdhani of Veeva Systems outlined how these networks remain highly vulnerable despite their power, and how it is precisely the opacity of their inner workings that makes building safer networks so challenging. We can't keep rushing toward bigger, deeper models without sufficient security, or we will pay the price.

What is an Adversarial Attack?

Humans are great at filtering out noise and perturbations. Deep neural networks, however, are extremely literal: very little noise is needed to fool even a well-trained network. While we would agree that two nearly identical pictures both show a pig, a small amount of noise imperceptible to the human eye can cause the network to classify one as a pig and the other as an airliner.
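The effect is easy to reproduce in miniature. The sketch below uses a hypothetical toy "network" (a single linear layer with fixed weights standing in for a trained classifier, not any model from the talk) and a fast-gradient-sign-style step: nudge each input feature by a tiny amount in the direction that increases the loss, and the predicted class flips even though the input barely changed.

```python
import numpy as np

# Hypothetical toy "network": one linear layer + sigmoid, standing in
# for a trained image classifier (illustration only, not a real model).
w = np.array([1.0, -1.0])   # frozen "trained" weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Class 1 if the sigmoid output exceeds 0.5, else class 0.
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([0.1, -0.1])   # clean input, classified as class 1

# Gradient of the cross-entropy loss (true label y = 1) w.r.t. the input:
# dL/dx = -(1 - sigmoid(w.x + b)) * w
grad = -(1.0 - sigmoid(w @ x + b)) * w

# FGSM-style step: move every feature by at most eps, in the direction
# that increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))   # prints: 1 0  (the class flips)
```

The perturbation is bounded by eps in every coordinate, which is the analogue of noise too small for a human to notice in an image; yet it is chosen using the model's own gradient, so it lands exactly where the classifier is most sensitive.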

The most common type of neural network for image tasks is the convolutional neural network, which passes an image through stacked layers of learned filters to produce a classification. We can manipulate images by using our knowledge of the trained model and the…
