How To Make Your Deep Learning Process More Secure

ODSC - Open Data Science
Apr 24, 2019


Threats to security evolve with each new technology. History shows us this. Now that deep learning is on the rise, unique threats that both use and exploit deep learning paradigms are gaining traction.

If your organization is involved in deep learning, the threats are going to change. Here’s how you can begin to minimize those threats for a more secure deep learning framework.

How Is Deep Learning Different?

Deep learning mimics the human brain’s neural networks, creating its own logic and rules as it learns. Where machine learning in the past required meticulous, time-consuming training that still produced unreliable results, deep learning is faster and more accurate.

That’s great, right? It is, but as with all good things, there’s a dark side. What you gain in accuracy, you lose in transparency, a problem known as the “AI Black Box.”

[Related article: Deep Learning for Business: 5 Use Cases]

Much like we know but don’t understand how our own brains work, even deep learning developers struggle to explain how their models work. It’s difficult, if not impossible, to trace the billions of calculations a deep network performs and follow how the system reaches its conclusions.

This creates a unique environment for security threats.

  • Deep learning is heavily reliant on data — With machine learning, the analyst tightly controls the training data, but with deep learning the data pool widens dramatically. A deep learning model is only as good (or bad) as its data sets.
  • Deep learning has no transparency — AI creates its own behavioral rules based on that training. When something goes awry, it’s challenging to identify the cause (much like human behavior).

Why Should I Consider Security?

Aside from the assumption that security is always paramount with any new tech, in deep learning security is the difference between a reliable, consistent model and one that has gone rogue.

Deep learning does mimic the brain, but it’s still capable of making seemingly illogical mistakes that no human would. Humans have their own kind of irrationality, but our perceptual skills are still unmatched by typical AI. That mismatch opens AI up to unique forms of attack.


What Kind Of Attacks?

Let’s take a look at a few attack types that may become common as more organizations adopt deep learning.

1. Adversarial Attacks

Small, deliberate tweaks to a model’s inputs can cause AI to misidentify objects entirely. Typical examples include software designed to flag child pornography identifying sand dunes as a human body, and a study in which researchers caused vision systems to misread stop signs using a handful of black and white stickers. A human would never have fallen for either.

We haven’t seen real adversarial attacks in the wild yet, but that doesn’t mean the threat isn’t real. Researchers are already studying ways to blunt them, including “explainable AI,” which could make deep learning more transparent (and more easily fixed), and generative adversarial networks (GANs), which pit AI frameworks against each other to identify and patch weaknesses.
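To make the idea concrete, here’s a minimal sketch of an FGSM-style (Fast Gradient Sign Method) adversarial perturbation. It uses a toy logistic-regression “model” with made-up weights rather than a real deep network, so every number is purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression scorer.

    For log-loss, the gradient of the loss w.r.t. the input x is
    (p - y) * w, so the attack nudges every feature in the sign of
    that gradient -- a tiny but maximally harmful change.
    """
    p = sigmoid(np.dot(w, x) + b)      # model's current confidence
    grad_x = (p - y) * w               # d(loss)/d(input)
    return x + eps * np.sign(grad_x)

# Toy, made-up model and input (illustrative only).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.5, -0.3, 0.2])         # correctly classified as y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
print("clean score:      ", sigmoid(np.dot(w, x) + b))      # ~0.83
print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))  # ~0.49, flipped
```

The same principle scales to deep networks: the stop-sign stickers are, in effect, a physical-world version of this gradient-guided nudge.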

[Related article: Deep Learning for Text Classification]

2. Data Poisoning

Remember the infamous Microsoft Twitter bot? The one that took just 24 hours in the dregs of the internet to begin spewing hate speech? AI is only as good as its data, and that was a high-profile, embarrassing case of data poisoning.

AI often uses publicly available data, a boon for speed but terrible for quality control. Public data is often unlabeled, leaving the AI to sort through both the good and the bad, creating behavioral rules from the data itself with no oversight. Feed it enough bad data, and the effect is disastrous.

Users could exploit this weakness by slowly attuning a deep learning algorithm to a target behavior, eventually hiding in plain sight. Fake news is an excellent example: as users flock to its sensationalized content, recommendation algorithms learn to promote it, prompting sites like Reddit to search for ways to minimize the damage without blocking free speech.
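Here’s a minimal sketch of label-flip poisoning on synthetic data (scikit-learn, with the dataset and poisoning rates made up for illustration): as an attacker corrupts a growing fraction of training labels, test accuracy quietly degrades.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "publicly scraped" training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for poison_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # the attacker's flipped labels

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"{poison_rate:.0%} poisoned -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

A real poisoning campaign is subtler, drip-feeding biased examples over time rather than flipping labels outright, but the mechanism is the same: corrupt the data, and the rules the model learns corrupt with it.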

3. Malware

Potential threats also include better, more adaptive malware. Phishing schemes, for example, get a boost from deep learning, which can identify communication patterns and exploit them. Hackers can embed algorithms within a company’s neural networks, where they lie undetected, gathering massive amounts of information. The result? Hyper-realistic emails embedded with malware, or similar attacks.

So What Do I Do?

Deep learning security attacks aren’t limited to these examples, but the lesson is the same: put security at the front of your deep learning process. Many attacks succeed only because users skip basic precautions. Let’s start here:

  • Perform an audit: Find out what assets you have, what steps you need to take to secure them, and how you’ll maintain that security. Small companies are often guilty of believing that because they hold no sensitive financial data, they don’t need to worry about hacks, but many attacks target behavioral data. Hackers want your columns, not your rows.
  • Attack yourself: Techniques like GANs can pit systems against each other to expose weaknesses, and ethical hacking can surface potential breaches before attackers do.
  • Use real-world data: We know. You did use real-world data. Your facial recognition software was trained under a variety of lighting conditions, but what about a variety of skin tones? Tools such as IBM’s Adversarial Robustness Toolbox (ART) can help you assess how well your model holds up against realistic inputs; a minimal subgroup audit is sketched after this list.
  • Consider explainability: Explainability could conceivably make some attacks easier, but other security protocols can offset that risk. Explainable AI lets your program tell you how it reached its conclusions, opening the door to retraining and patching breaches.
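On the “use real-world data” point, here’s a minimal, library-free sketch of the kind of subgroup audit that bullet describes. The predictions are simulated (one group is deliberately made worse) purely to show the shape of the check:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy separately for each subgroup of the test set.

    A large gap between groups suggests the training data didn't
    reflect real-world variety (lighting conditions, skin tones, ...).
    """
    for g in np.unique(groups):
        mask = groups == g
        acc = np.mean(y_true[mask] == y_pred[mask])
        print(f"{g}: n={mask.sum():4d}  accuracy={acc:.3f}")

# Simulated model output: group_b is deliberately handled worse.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
groups = rng.choice(np.array(["group_a", "group_b"]), size=500)
correct = np.where(groups == "group_a",
                   rng.random(500) < 0.95,
                   rng.random(500) < 0.70)
y_pred = np.where(correct, y_true, 1 - y_true)

accuracy_by_group(y_true, y_pred, groups)
```

If the per-group numbers diverge the way they do here, that’s your cue to retrain on more representative data before an attacker (or a journalist) finds the gap first.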

A New Wave of Cyber Attacks

New tech and security threats are locked in an eternal battle for dominance, and with each security improvement comes a new threat. Keeping ahead of those breaches means acknowledging that your organization does have the potential for attacks regardless of your size or fame. Don’t wait until you’ve had an issue. Build security into your AI culture from the very beginning.

Editor’s note: Attend ODSC East 2019 in Boston this April 30 to May 3 to learn more about deep learning, security, and business applications!

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
