Neural Network Models Can Hide Malware, Research Shows

ODSC - Open Data Science
4 min read · Jan 13, 2022

Neural networks are one of the most exciting developments in data science. These advanced AI models could revolutionize forecasting and fraud detection processes, and likely will before long. That makes the recent discovery that malware can hide within them all the more concerning.

A group of researchers recently revealed that it’s possible to hide at least 36.9 megabytes of malware in neural network models. Despite the considerable size of this malicious data, the network showed only a 1% drop in accuracy, helping it slip by unnoticed. This revelation should raise alarms as businesses continue to rely on these technologies more heavily.


How Malware Hides in Neural Networks

The researchers found that malware can hide in neural networks by blending into their structure. Large neural networks can contain hundreds of millions of parameters spread across multiple layers of artificial neurons. Malware embedded in those parameters looks like just another block of numeric values, making it effectively invisible to malware scanners.

These attacks work by breaking the malware into 3-byte pieces, since a change that small won’t meaningfully alter a 32-bit parameter’s value. Attackers can then publish the infected model on a public repository such as GitHub, where an unsuspecting target may download it. Alternatively, they could take a more targeted approach and deliver it to a specific model through software updates.
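To make the mechanism concrete, here is a minimal NumPy sketch of how that kind of byte-level embedding could work. It is not the researchers’ actual code; the function name `embed_chunks`, the zero-padding of the final piece, and the assumption of a little-endian machine are all illustrative choices.

```python
import numpy as np

def embed_chunks(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide a byte payload in the low-order bytes of float32 parameters."""
    flat = np.ascontiguousarray(weights, dtype=np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)      # 4 raw bytes per float32 parameter
    chunks = [payload[i:i + 3] for i in range(0, len(payload), 3)]
    if len(chunks) > raw.shape[0]:
        raise ValueError("payload too large for this tensor")
    for i, chunk in enumerate(chunks):
        piece = np.frombuffer(chunk.ljust(3, b"\x00"), dtype=np.uint8)
        # On little-endian systems, bytes 0-2 are the low mantissa bytes, so
        # overwriting them nudges the parameter's value only slightly.
        raw[i, :3] = piece
    return flat.reshape(weights.shape)
```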

In the experiment, malware scanners flagged all eight malware samples when they were scanned on their own. Once embedded in neural network models, however, the same samples slipped through entirely: the researchers found that 58 common antivirus solutions could not detect the hidden malware.

It’s important to note that this method only delivers the malware undetected; it doesn’t activate it. Attackers still need separate software on the target system to extract the payload from the infected parameters and execute it. That extra step makes these attacks harder to pull off, but they remain entirely feasible.
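The extraction step is the mirror image: read the low-order bytes back out of each parameter and reassemble them. A counterpart sketch to `embed_chunks` above (again an illustrative assumption, including the idea that the extractor knows the payload length) might look like this:

```python
def extract_chunks(weights: np.ndarray, payload_len: int) -> bytes:
    """Recover a payload hidden by embed_chunks; payload_len must be known."""
    flat = np.ascontiguousarray(weights, dtype=np.float32).ravel()
    raw = flat.view(np.uint8).reshape(-1, 4)
    return raw[:, :3].tobytes()[:payload_len]     # low 3 bytes of each parameter, in order
```

If `w` came from `embed_chunks(w0, data)`, then `extract_chunks(w, len(data))` returns `data`. A real attack would also need code on the target machine to write that payload to disk and run it, which is exactly the activation step described above.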

Potential Impact of Infected Neural Networks

This attack method could have devastating implications if it goes unnoticed by companies creating and using neural network models. AlexNet, the network the researchers used in this experiment, is a popular model for machine vision applications. If attackers could infect machine vision systems, they could jeopardize biometric security, product safety checks, and even self-driving cars.

Many businesses today rely on AI processes like neural networks, especially for forecasting, and an infected model could quietly skew its outputs. Inaccurate forecasts can mean excessive labor costs, reduced customer satisfaction, and even security issues. As companies place more trust in AI forecasting, they could follow compromised results into financial ruin.

These concerns become more pressing as neural network adoption grows. Already, 63% of small companies and 51% of artificial intelligence business leaders say AI adoption is growing faster than it should in their industry. If adoption continues to outpace knowledge and experience with these technologies, businesses may overlook critical vulnerabilities like this malware-hiding technique.

How Data Scientists Can Protect Their AI Models

In light of these risks, data scientists must take steps to protect their neural network models. As this experiment shows, today’s malware scanners cannot detect these attacks. Scanner vendors will likely adapt in response to this research, but that may take time, and the risks are too high to rely on them alone in the meantime.

Since the skills gap is one of the leading barriers to AI adoption, many companies use pretrained neural networks. That leaves them exposed to these attacks, but they can mitigate the threat by fine-tuning or retraining any model they download. As long as the infected parameters aren’t frozen during retraining, their values will change, corrupting the embedded malware.
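A rough sketch of that mitigation in PyTorch, assuming you have a small labeled dataset of your own; the model, the data loader, and the hyperparameters here are placeholders, not a recipe from the research:

```python
from torch import nn, optim

def sanitize_by_finetuning(model: nn.Module, loader, epochs: int = 1) -> nn.Module:
    """Briefly retrain a downloaded model with no parameters frozen, so the
    optimizer rewrites every weight's low-order bytes and corrupts any
    embedded payload."""
    for p in model.parameters():
        p.requires_grad_(True)                    # make sure nothing stays frozen
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=1e-4)  # a small LR is enough to shift values
    model.train()
    for _ in range(epochs):
        for inputs, labels in loader:             # loader assumed to yield (inputs, labels)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```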

It’s also crucial to secure the neural network development pipeline. This delivery method only works if attackers can slip an infected model in through public repositories or software updates, so companies that tighten controls around those channels sharply reduce their exposure.

Developers should only use neural networks from trusted sources, and even then, consider fine-tuning them before deployment. It’s also best to verify updates before installing them to prevent infection through a supply chain attack.
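For the update-verification point, even a basic integrity check helps: compare a downloaded model file’s hash against a digest published by the trusted source before loading it. A minimal sketch, assuming the expected hash comes from the vendor’s release notes or a signed manifest:

```python
import hashlib

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the published one."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB blocks
            digest.update(block)
    return digest.hexdigest() == expected_sha256.lower()
```

A hash check won’t catch a model that was malicious at its source, but it does block tampering in transit and unauthorized “updates.”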

Cybersecurity Must Continually Evolve

As technologies like neural networks grow and adapt, so must cybersecurity measures. Any innovation with as much disruptive potential as neural networks is an attractive target for cybercrime. Developers must stay on top of emerging threats like this one to ensure they can protect against them.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform.
