Hiding malware inside a neural network model

Researchers demonstrated how to hide malware inside a neural network image classifier in order to bypass security solutions.

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui presented a technique to deliver malware through neural network models, evading detection without noticeably impacting the performance of the network.

Tests conducted by the experts demonstrated how to embed 36.9MB of malware into a 178MB AlexNet model within a 1% accuracy loss, a change small enough that the threat goes undetected by antivirus engines.

Experts believe that with the massive adoption of artificial intelligence, malware authors will look with increasing interest at the use of neural networks. "We hope this work could provide a referenceable scenario for the defense on neural network-assisted attacks," the researchers wrote.


The experts were able to select a layer within an already-trained model (e.g., an image classifier) and then embed the malware into that layer.
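
To illustrate the general idea (a minimal sketch, not the authors' exact implementation), malware bytes can be packed into the low-order bytes of a layer's 32-bit float parameters, leaving the high byte intact so each weight stays within a bounded range. The function name and the 3-bytes-per-weight layout below are assumptions for illustration:

```python
import struct

import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the low-order 3 bytes of each float32 weight.

    The high byte (sign and most of the exponent) is preserved, which
    bounds how far each parameter value can drift.
    """
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > 3 * flat.size:
        raise ValueError("layer too small for this payload")
    for i in range(0, len(payload), 3):
        chunk = payload[i:i + 3].ljust(3, b"\x00")  # pad the final chunk
        orig = struct.pack("<f", flat[i // 3])      # little-endian float32
        # bytes 0-2 carry the payload; byte 3 keeps the sign/exponent bits
        flat[i // 3] = struct.unpack("<f", chunk + orig[3:])[0]
    return flat.reshape(weights.shape)
```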

If the model doesn't have enough neurons to embed the malware, the attacker may opt for an untrained model, which has extra neurons. The attacker would then train the model on the same data set used for the original model in order to produce a model with equivalent performance.
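
Once the payload is in place, one way to recover any lost accuracy (a sketch of the general approach, not necessarily the authors' exact procedure) is to freeze the payload-carrying layer and fine-tune the remaining parameters. Here `model`, `fc1`, and `train_loader` are hypothetical placeholders:

```python
import torch
import torch.nn.functional as F

# `model` is a torch.nn.Module; `fc1` (hypothetical name) carries the payload
for name, param in model.named_parameters():
    if name.startswith("fc1"):
        param.requires_grad = False  # keep the embedded bytes untouched

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

for inputs, labels in train_loader:  # the same data set as the original model
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    optimizer.step()
```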

Experts pointed out that the technique only hides the malware; it does not execute it. To run the malware, it must be extracted from the model by a dedicated application, which could itself be hidden inside the model if the model is large enough to contain it.
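
The extraction step is the mirror image of the embedding: the receiving application reads the payload bytes back out of the same parameter positions. A minimal sketch, matching the hypothetical 3-bytes-per-weight layout above and reusing its imports:

```python
def extract_payload(weights: np.ndarray, payload_len: int) -> bytes:
    """Recover payload bytes from the low-order 3 bytes of each float32 weight."""
    flat = weights.astype(np.float32).ravel()
    out = bytearray()
    n_weights = -(-payload_len // 3)  # ceiling division: weights needed
    for i in range(n_weights):
        out += struct.pack("<f", flat[i])[:3]
    return bytes(out[:payload_len])
```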

“We uploaded some of the malware-embedded models to VirusTotal to check whether the malware can be detected. The models were recognized as zip files by VirusTotal. 58 antivirus engines were involved in the detection works, and no suspicious was detected. It means that this method can evade the security scan by common antivirus engines.” states the paper.

As a possible countermeasure, the experts recommend deploying security software on end-user devices that can detect the extraction of malware from a model, as well as its assembly and execution. They also warned of supply-chain pollution targeting the providers of the original models.
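
On the supply-chain side, one basic defender-side mitigation is to verify a downloaded model against a digest published by its provider before loading it. The file name and `PUBLISHED_DIGEST` below are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

PUBLISHED_DIGEST = "..."  # hypothetical: the hash the model provider publishes

if sha256_of("alexnet_model.pth") != PUBLISHED_DIGEST:
    raise RuntimeError("model file does not match the provider's published hash")
```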

“The model’s structure remains unchanged when the parameters are replaced with malware bytes, and the malware is disassembled in the neurons. As the characteristics of the malware are no longer available, it can evade detection by common antivirus engines. As neural network models are robust to changes, there are no obvious losses on the performances when it’s well configured.” concludes the paper. “This paper proves that neural networks can also be used maliciously. With the popularity of AI, AI-assisted attacks will emerge and bring new challenges for computer security”

Pierluigi Paganini
