
Distillation for run-time malware process detection and automated process killing

 Added by Matilda Rhode
Publication date: 2019
Language: English





Adversaries are increasingly motivated to spend energy trying to evade automatic malware detection tools. Dynamic analysis examines the behavioural trace of malware, which is difficult to obfuscate, but the time it requires means it is not typically used in practice for endpoint protection, serving instead as an analysis tool. This paper presents a run-time model that detects malicious processes and automatically kills them as they run on a real endpoint in use. This approach enables dynamic analysis to be used to prevent harm to the endpoint, rather than to analyse the cause of damage after the event. Run-time detection introduces the risk of malicious damage to the endpoint and necessitates that malicious processes are detected and killed as early as possible to minimise the opportunity for damage to take place. A distilled machine learning model is used to improve inference speed whilst benefiting from the parameters learned by a larger, more computationally intensive model. This paper is the first to focus on the tangible benefits of process killing to the user, showing that the distilled model is able to prevent 86.34% of files from being corrupted by ransomware whilst maintaining a low false positive rate of 4.72% on unseen benignware.
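The distillation step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of training a small "student" network on the softened outputs of a larger "teacher", in the Hinton-style knowledge-distillation sense; the layer sizes, temperature, and loss weighting are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal knowledge-distillation sketch (hypothetical sizes and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 2))  # large, slow model
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))    # small, fast model

T, alpha = 4.0, 0.5  # softmax temperature and soft/hard loss mix (assumed values)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(x, y):
    """One training step: match the teacher's soft labels and the true labels."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    logits = student(x)
    loss_soft = F.kl_div(F.log_softmax(logits / T, dim=1), soft_targets,
                         reduction="batchmean") * (T * T)
    loss_hard = F.cross_entropy(logits, y)
    loss = alpha * loss_soft + (1 - alpha) * loss_hard
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call on a random batch standing in for behavioural feature vectors.
x = torch.randn(64, 128)
y = torch.randint(0, 2, (64,))
print(distill_step(x, y))
```

At inference time only the student is deployed, which is what makes early, per-process decisions cheap enough to drive automated process killing.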

Related research

We present BPFroid -- a novel dynamic analysis framework for Android that uses the eBPF technology of the Linux kernel to continuously monitor events of user applications running on a real device. The monitored events are collected from different components of the Android software stack: internal kernel functions, system calls, native library functions, and the Java API framework. Because BPFroid hooks these events in the kernel, malware is unable to trivially bypass monitoring. Moreover, using eBPF doesn't require any change to the Android system or the monitored applications. We also present an analytical comparison of BPFroid to other malware detection methods and demonstrate its usage by developing novel signatures, built on BPFroid, that detect suspicious behavior. These signatures are then evaluated using real apps. We also demonstrate how BPFroid can be used to capture forensic artifacts for further investigation. Our results show that BPFroid successfully alerts in real time when a suspicious behavioral signature is detected, without incurring a significant runtime performance overhead.
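As a rough illustration of this style of kernel-level event hooking (on a generic Linux host via the BCC toolkit, not BPFroid's actual Android implementation), the sketch below attaches a kprobe to the execve syscall entry point and logs each process execution from inside the kernel.

```python
# Minimal eBPF event-monitoring sketch using BCC (Linux, requires root).
# Illustrates kernel-side hooking in the spirit of BPFroid; not its actual code.
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

int trace_exec(struct pt_regs *ctx) {
    char comm[16];
    bpf_get_current_comm(&comm, sizeof(comm));
    bpf_trace_printk("exec by %s\n", comm);
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the architecture-specific execve entry point.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve calls... Ctrl-C to stop")
b.trace_print()  # stream bpf_trace_printk output from the kernel
```

Because the hook runs inside the kernel, a monitored process cannot simply unload or bypass it the way it might with a user-space instrumentation library.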
The evolution of mobile malware poses a serious threat to smartphone security. Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers via polluting training data, rendering most recent machine learning-based malware detection tools (such as Drebin, DroidAPIMiner, and MaMaDroid) ineffective. In this paper, we explore the feasibility of constructing crafted malware samples, examine how machine-learning classifiers can be misled under three different threat models, and conclude that injecting carefully crafted data into training data can significantly reduce detection accuracy. To tackle the problem, we propose KuafuDet, a two-phase learning-enhancing approach that learns mobile malware by adversarial detection. KuafuDet includes an offline training phase that selects and extracts features from the training set, and an online detection phase that utilizes the classifier trained in the first phase. To further address the adversarial environment, these two phases are intertwined through a self-adaptive learning scheme, wherein an automated camouflage detector is introduced to filter suspicious false negatives and feed them back into the training phase. We finally show that KuafuDet can significantly reduce false negatives and boost detection accuracy by at least 15%. Experiments on more than 250,000 mobile applications demonstrate that KuafuDet is scalable and can be highly effective as a standalone system.
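The two-phase, self-adaptive loop described here can be sketched with standard tooling. The example below is an illustrative simplification, not KuafuDet's code: the "camouflage detector" is approximated by a low-confidence heuristic, and the data and labels are synthetic placeholders.

```python
# Sketch of a two-phase detect-and-retrain loop with a "camouflage" filter
# feeding suspicious samples back into training (illustrative assumptions throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))
y_train = rng.integers(0, 2, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                      # offline training phase

def camouflage_suspicious(proba, low=0.4, high=0.6):
    """Flag samples the classifier is least certain about (assumed heuristic)."""
    return (proba[:, 1] > low) & (proba[:, 1] < high)

# Online detection phase with self-adaptive feedback.
X_stream = rng.normal(size=(200, 20))
proba = clf.predict_proba(X_stream)
suspects = camouflage_suspicious(proba)

# Suspicious samples would be verified (e.g. by heavier analysis) and their
# confirmed labels appended to the training set before retraining.
verified_labels = rng.integers(0, 2, size=suspects.sum())   # placeholder labels
X_train = np.vstack([X_train, X_stream[suspects]])
y_train = np.concatenate([y_train, verified_labels])
clf.fit(X_train, y_train)                      # retrain with the feedback
```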
We present and evaluate a large-scale malware detection system integrating machine learning with expert reviewers, treating reviewers as a limited labeling resource. We demonstrate that even in small numbers, reviewers can vastly improve the system's ability to keep pace with evolving threats. We conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years and containing 1.1 million binaries with 778GB of raw feature data. Without reviewer assistance, we achieve 72% detection at a 0.5% false positive rate, performing comparably to the best vendors on VirusTotal. Given a budget of 80 accurate reviews daily, we improve detection to 89% and are able to detect 42% of malicious binaries undetected upon initial submission to VirusTotal. Additionally, we identify a previously unnoticed temporal inconsistency in the labeling of training datasets. We compare the impact of training labels obtained at the time training data is first seen with training labels obtained months later. We find that using training labels obtained well after samples appear, and thus unavailable in practice for current training data, inflates measured detection by almost 20 percentage points. We release our cluster-based implementation, as well as a list of all hashes in our evaluation and 3% of our entire dataset.
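A reviewer-in-the-loop pipeline of this kind can be sketched as an uncertainty-based selection step under a fixed daily budget. The snippet below is an assumed simplification: the selection heuristic, model, and synthetic data are placeholders, not the paper's actual system.

```python
# Sketch of integrating a daily budget of expert reviews into a detector
# (numbers, heuristics, and data are illustrative assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

DAILY_REVIEW_BUDGET = 80

rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 50))
y_train = rng.integers(0, 2, size=5000)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def select_for_review(X_day, model, budget=DAILY_REVIEW_BUDGET):
    """Pick the day's samples closest to the decision boundary for human review."""
    margin = np.abs(model.predict_proba(X_day)[:, 1] - 0.5)
    return np.argsort(margin)[:budget]

X_day = rng.normal(size=(10000, 50))
review_idx = select_for_review(X_day, model)

# Reviewer verdicts (simulated here) become high-quality labels for retraining.
reviewer_labels = rng.integers(0, 2, size=len(review_idx))
X_train = np.vstack([X_train, X_day[review_idx]])
y_train = np.concatenate([y_train, reviewer_labels])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```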
Large software platforms (e.g., mobile app stores, social media, email service providers) must ensure that files on their platform do not contain malicious code. Platform hosts use security tools to analyze those files for potential malware. However, given the expensive runtimes of tools coupled with the large number of exchanged files, platforms are not able to run all tools on every incoming file. Moreover, malicious parties look to find gaps in the coverage of the analysis tools, and exchange files containing malware that exploits these vulnerabilities. To address this problem, we present a novel approach that models the relationship between malicious parties and the security analyst as a leader-follower Stackelberg security game. To estimate the parameters of our model, we have combined the information from the VirusTotal dataset with the more detailed reports from the National Vulnerability Database. Compared to a set of natural baselines, we show that our model computes an optimal randomization over sets of available security analysis tools.
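One common way to compute a defender's randomization in security games is as a linear program. The sketch below solves a maximin (zero-sum) simplification rather than the full leader-follower Stackelberg model described above, and the payoff matrix values are purely illustrative assumptions.

```python
# Sketch: defender's maximin randomization over security-tool sets as a
# linear program (a zero-sum simplification of the Stackelberg setting).
import numpy as np
from scipy.optimize import linprog

# Assumed payoff matrix U[i, j]: defender's utility when tool set i is
# scheduled and the attacker uses attack type j (values are illustrative).
U = np.array([
    [0.9, 0.2, 0.5],
    [0.3, 0.8, 0.4],
    [0.6, 0.5, 0.7],
])
n_tools, n_attacks = U.shape

# Variables x = [p_1..p_n, v]; maximize v  <=>  minimize -v.
c = np.zeros(n_tools + 1)
c[-1] = -1.0

# For every attack j: v - sum_i p_i * U[i, j] <= 0.
A_ub = np.hstack([-U.T, np.ones((n_attacks, 1))])
b_ub = np.zeros(n_attacks)

# Schedule probabilities must sum to one.
A_eq = np.hstack([np.ones((1, n_tools)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, 1)] * n_tools + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("schedule:", res.x[:n_tools], "guaranteed utility:", res.x[-1])
```

The resulting mixed schedule spreads tool usage so that no single attack type reliably slips through the uncovered gaps.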
Paul Maxwell, David Niblick, 2020
Cybersecurity continues to be a difficult issue for society, especially as the number of networked systems grows. Techniques to protect these systems range from rules-based to artificial intelligence-based intrusion detection systems and anti-virus tools. These systems rely upon the information contained in network packets and downloaded executables to function. Side-channel information leaked from hardware has been shown to reveal secret information, such as encryption keys. This work demonstrates that side-channel information can be used to detect malware running on a computing platform without access to the code involved.
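Detection from side channels typically reduces to classifying features extracted from hardware traces. The sketch below is an assumed, self-contained illustration using synthetic traces and simple summary statistics; the trace source, features, and class separation are placeholders, not the paper's measurements.

```python
# Sketch: classifying processes as malicious or benign from side-channel traces
# (feature choices and synthetic data are illustrative assumptions).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def trace_features(trace):
    """Summarise a raw side-channel trace (e.g. power samples) as statistics."""
    return np.array([trace.mean(), trace.std(), trace.max(),
                     np.abs(np.diff(trace)).mean()])

rng = np.random.default_rng(2)
benign = [rng.normal(0.0, 1.0, 1024) for _ in range(200)]
malicious = [rng.normal(0.3, 1.5, 1024) for _ in range(200)]   # synthetic shift

X = np.array([trace_features(t) for t in benign + malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```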
