
SIGL: Securing Software Installations Through Deep Graph Learning

Added by Xueyuan Han
Publication date: 2020
Language: English





Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions.
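The autoencoder architecture described in the abstract can be illustrated with a small sketch. The following is a simplified stand-in, not SIGL's actual implementation: a graph LSTM encoder summarizes each provenance-graph node from its already-encoded children, an MLP decoder reconstructs the node's features, and per-node reconstruction error serves as the anomaly score. The feature dimensions, traversal order, and toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLSTMEncoder(nn.Module):
    def __init__(self, feat_dim, hid_dim):
        super().__init__()
        self.hid_dim = hid_dim
        self.cell = nn.LSTMCell(feat_dim, hid_dim)

    def forward(self, node_feats, children):
        # node_feats: (num_nodes, feat_dim); children[v] lists v's child indices,
        # ordered so that children are always encoded before their parents.
        hidden = []
        for v, kids in enumerate(children):
            # Sum the children's hidden states (zeros for a leaf node).
            h_kids = (torch.stack([hidden[k] for k in kids]).sum(dim=0)
                      if kids else node_feats.new_zeros(self.hid_dim))
            h_v, _ = self.cell(node_feats[v].unsqueeze(0),
                               (h_kids.unsqueeze(0),
                                torch.zeros_like(h_kids).unsqueeze(0)))
            hidden.append(h_v.squeeze(0))
        return torch.stack(hidden)           # one embedding per graph node

class ProvenanceAutoencoder(nn.Module):
    def __init__(self, feat_dim=32, hid_dim=64):
        super().__init__()
        self.encoder = GraphLSTMEncoder(feat_dim, hid_dim)
        self.decoder = nn.Sequential(         # standard multilayer perceptron
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, feat_dim))

    def forward(self, node_feats, children):
        return self.decoder(self.encoder(node_feats, children))

# Toy usage: per-node reconstruction error is the anomaly score, so the
# highest-scoring nodes correspond to the most suspicious processes.
feats = torch.randn(5, 32)                    # 5 provenance-graph nodes
children = [[], [], [0, 1], [2], [3]]         # leaves first, root last
model = ProvenanceAutoencoder()
recon = model(feats, children)
node_scores = F.mse_loss(recon, feats, reduction="none").mean(dim=1)
print(node_scores)
```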



Related research

Android's security model severely limits the capabilities of anti-malware software. Unlike commodity anti-malware solutions on desktop systems, their Android counterparts run as sandboxed applications without root privileges and are limited by Android's permission system. As such, potentially harmful apps (PHAs) on Android are usually willingly installed by victims, as they come disguised as useful applications with hidden malicious functionality, and are encountered on mobile app stores as suggestions based on the apps that a user previously installed. Users with similar interests and app installation history are likely to be exposed and to decide to install the same PHA. This observation gives us the opportunity to develop predictive approaches that can warn the user about which PHAs they will encounter and potentially be tempted to install in the near future. These approaches could then be used to complement commodity anti-malware solutions, which are focused on post-fact detection, closing the window of opportunity that existing solutions suffer from. In this paper, we develop Andruspex, a system based on graph representation learning, allowing us to learn latent relationships between user devices and PHAs and leverage them for prediction. We test Andruspex on a real-world dataset of PHA installations collected by a security company, and show that our approach achieves very high prediction results (up to 0.994 TPR at 0.0001 FPR), while at the same time outperforming alternative baseline methods. We also demonstrate that Andruspex is robust and that its runtime performance is acceptable for a real-world deployment.
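As a rough illustration of this abstract's core idea (learning latent relationships between user devices and PHAs and using them for prediction), the sketch below trains embeddings for both node types of the bipartite installation graph and ranks unseen device-PHA pairs by embedding similarity. This is not Andruspex itself; the embedding dimension, dot-product scorer, random negative sampling, and toy data are assumptions made for the example.

```python
import torch
import torch.nn as nn

class InstallPredictor(nn.Module):
    def __init__(self, n_devices, n_phas, dim=32):
        super().__init__()
        self.dev = nn.Embedding(n_devices, dim)
        self.pha = nn.Embedding(n_phas, dim)

    def score(self, d, p):
        # Higher score = device d is more likely to install PHA p.
        return (self.dev(d) * self.pha(p)).sum(dim=-1)

# Toy training loop over observed (device, PHA) installation pairs.
torch.manual_seed(0)
n_devices, n_phas = 100, 50
obs_dev = torch.randint(0, n_devices, (500,))
obs_pha = torch.randint(0, n_phas, (500,))
model = InstallPredictor(n_devices, n_phas)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(100):
    neg_pha = torch.randint(0, n_phas, obs_pha.shape)   # random negatives
    logits = torch.cat([model.score(obs_dev, obs_pha),
                        model.score(obs_dev, neg_pha)])
    labels = torch.cat([torch.ones(len(obs_pha)), torch.zeros(len(obs_pha))])
    opt.zero_grad()
    loss_fn(logits, labels).backward()
    opt.step()

# Prediction: rank all PHAs for device 0 and warn about the top-scoring ones.
scores = model.score(torch.tensor([0]).expand(n_phas), torch.arange(n_phas))
print(scores.topk(5).indices)
```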
With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that an adversary will take when performing an attack. However, this is still an open research problem: previous research on predicting malicious events only looked at binary outcomes (e.g., whether an attack would happen or not), not at the specific steps that an attacker would undertake. To fill this gap, we present Tiresias, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias are reasonably stable over time, and we provide a mechanism that can identify sudden drops in precision and trigger retraining of the system. Finally, we show that the long-term memory typical of RNNs is key to event prediction, rendering simpler methods not up to the task.
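The next-event prediction setup described above can be sketched with a plain LSTM over event-ID sequences. This is a simplified stand-in, not Tiresias itself; the event vocabulary size, model dimensions, and synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, n_events=1000, emb=64, hid=128):
        super().__init__()
        self.emb = nn.Embedding(n_events, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, n_events)

    def forward(self, seqs):
        # seqs: (batch, seq_len) of event IDs -> logits over the next event ID.
        h, _ = self.lstm(self.emb(seqs))
        return self.out(h[:, -1])                 # use the last hidden state

model = NextEventLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seqs = torch.randint(0, 1000, (32, 20))           # 32 sequences of 20 past events
targets = torch.randint(0, 1000, (32,))           # the event that followed each
loss = nn.functional.cross_entropy(model(seqs), targets)
opt.zero_grad()
loss.backward()
opt.step()
predicted_next = model(seqs).argmax(dim=-1)       # most likely next event per machine
```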
5G network systems are evolving and have complex network infrastructures. There is a great deal of work in this area focused on meeting the stringent service requirements of 5G networks. Within this context, security requirements play a critical role, as 5G networks can support a range of services such as healthcare, financial services, and critical infrastructures. 3GPP and ETSI have been developing security frameworks for 5G networks. Our work in 5G security has focused on the design of a security architecture and mechanisms enabling the dynamic establishment of secure and trusted end-to-end services, as well as the development of mechanisms to proactively detect and mitigate security attacks in virtualised network infrastructures. The focus of this paper is on the latter, namely the design of a security architecture providing facilities and mechanisms to detect and mitigate specific security attacks. We have developed and implemented a simplified version of the security architecture using Software Defined Networks (SDN) and Network Function Virtualisation (NFV) technologies. The specific security functions developed in this architecture can be directly integrated into the 5G core network facilities, enhancing its security. We describe the design and implementation of the security architecture and demonstrate how it can efficiently mitigate specific types of attacks.
Intuitively, a backdoor attack against Deep Neural Networks (DNNs) is to inject hidden malicious behaviors into DNNs such that the backdoor model behaves legitimately for benign inputs, yet invokes a predefined malicious behavior when its input contains a malicious trigger. The trigger can take a plethora of forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors) or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters). These filters can be applied to the original image by replacing or perturbing a set of image pixels.
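The last sentence of this abstract, applying a trigger by replacing or perturbing a set of image pixels, can be illustrated with a short sketch. The trigger's position, size, color, and blend ratio below are arbitrary choices made for the example.

```python
import numpy as np

def apply_trigger(image, trigger, x=0, y=0, alpha=1.0):
    """Return a copy of `image` with `trigger` stamped at position (y, x).

    alpha=1.0 replaces the pixels outright; alpha<1.0 perturbs (blends) them.
    image: (H, W, 3) uint8 array; trigger: (h, w, 3) uint8 array.
    """
    out = image.astype(np.float32).copy()
    h, w = trigger.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - alpha) * region + alpha * trigger
    return out.clip(0, 255).astype(np.uint8)

# Toy usage: a small yellow square in the bottom-right corner acts as the trigger.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
trigger = np.zeros((16, 16, 3), dtype=np.uint8)
trigger[..., 0:2] = 255                           # yellow = max red + green
poisoned = apply_trigger(image, trigger, x=208, y=208, alpha=1.0)
```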
The proliferation of digitization and complexity of connectivity in Cyber-Physical Systems (CPSs) calls for a mechanism that can evaluate the functionality and security of critical infrastructures. In this regard, Digital Twins (DTs) are revolutionizing CPSs. Driven by asset-centric data, DTs are virtual replicas of physical systems that mirror every facet of a product or process and can provide actionable insights through monitoring, optimization, and prediction. Furthermore, replication and simulation modes in DTs can prevent and detect security flaws in the CPS without obstructing the ongoing operations of the live system. However, such benefits of DTs rest on assumptions about data trust, integrity, and security. Data trustworthiness is considered even more critical when it comes to the integration and interoperability of multiple components or sub-components among different DTs owned by multiple stakeholders to provide an aggregated view of the complex physical system. Moreover, analyzing the huge volume of data to create actionable insights in real time is another critical requirement that demands automation. This article focuses on securing CPSs by integrating Artificial Intelligence (AI) and blockchain for intelligent and trusted DTs. We envision an AI-aided blockchain-based DT framework that can ensure anomaly prevention and detection, in addition to responding to novel attack vectors, in parallel with the normal ongoing operations of the live systems. We discuss the applicability of the proposed framework for the automotive industry as a CPS use case. Finally, we identify challenges that impede the implementation of intelligence-driven architectures in CPSs.
