While artificial intelligence (AI) is widely applied in various areas, it is also being used maliciously. It is necessary to study and predict AI-powered attacks in order to prevent them in advance. Turning neural network models into stegomalware is one such malicious use of AI: it exploits the characteristics of neural network models to hide malware while maintaining the models' performance. However, existing methods have a low malware embedding rate and a high impact on model performance, making them impractical. Therefore, by analyzing the composition of neural network models, this paper proposes new methods to embed malware in models with high capacity and no degradation in service quality. We used 19 malware samples and 10 mainstream models to build 550 malware-embedded models and evaluated the models' performance on the ImageNet dataset. We also propose a new evaluation method, combining the embedding rate, the impact on model performance, and the embedding effort, to assess existing methods. This paper additionally designs a trigger and proposes an attack scenario that combines EvilModel with WannaCry, and further studies the relationship between a neural network model's embedding capacity and its structure, layers, and size. With the widespread application of artificial intelligence, using neural networks for attacks is becoming an emerging trend. We hope this work can provide a reference scenario for defending against neural network-assisted attacks.
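The core embedding idea can be illustrated with a minimal Python sketch. This is a simplified least-significant-byte substitution, not the exact procedure from the paper; the function names embed_payload and extract_payload are hypothetical. The point it demonstrates is why accuracy survives: overwriting the lowest mantissa byte of a float32 weight perturbs its value by a relative factor on the order of 1e-5.

    import numpy as np

    def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
        # Hide one payload byte in the least-significant mantissa byte of each
        # float32 weight (assumes a little-endian host). The perturbation per
        # parameter is tiny, so the model's predictions are largely unaffected.
        flat = weights.astype(np.float32).ravel()
        if len(payload) > flat.size:
            raise ValueError("payload exceeds embedding capacity")
        raw = bytearray(flat.tobytes())
        for i, b in enumerate(payload):
            raw[4 * i] = b  # byte 0 of a little-endian float32 is the LSB
        return np.frombuffer(bytes(raw), dtype=np.float32).reshape(weights.shape)

    def extract_payload(weights: np.ndarray, length: int) -> bytes:
        # Recover the hidden bytes from the same byte positions.
        raw = weights.astype(np.float32).ravel().tobytes()
        return bytes(raw[4 * i] for i in range(length))

In a real attack the payload location (layer, offset, length) would be agreed upon out of band and extraction gated behind a trigger; for defenders, the same observation suggests inspecting low-order weight bytes for statistical anomalies.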
Modern neural networks often contain significantly more parameters than the size of their training data. We show that this excess capacity provides an opportunity for embedding secret machine learning models within a trained neural network. Our novel
Recently, cyber-attacks have been seen extensively due to the ever-increasing amount of malware in the cyber world. These attacks cause irreversible damage not only to end-users but also to corporate computer systems. Ransomware attacks such as WannaCry
Balancing the needs of data privacy and predictive utility is a central challenge for machine learning in healthcare. In particular, privacy concerns have led to a dearth of public datasets, complicated the construction of multi-hospital cohorts and
Deep learning has been applied in malware analysis research. Most classification methods use either static analysis features or dynamic analysis features for malware family classification, and rarely combine them as classification features and al
Data hiding refers to the art of hiding secret data in a digital cover for covert communication. In this letter, we propose a novel method to disguise data hiding tools, including a data embedding tool and a data extraction tool, as a deep neural network