Over the last decade, Programmable Logic Controllers (PLCs) have been increasingly targeted by attackers seeking control over the industrial processes that support critical services. Such targeted attacks typically require detailed knowledge of system-specific attributes, including hardware configurations, adopted protocols, and PLC control logic, i.e., process comprehension. The consensus among both academics and practitioners is that stealthily obtaining the process comprehension needed for a targeted attack from a PLC alone is impractical. In contrast, we assert that current PLC programming practices open the door to a new class of vulnerability based on control-logic constructs. To support this, we propose the concept of Process Comprehension at a Distance (PCaaD), a novel, automatable methodology for the system-agnostic exploitation of PLC library functions, leading to the targeted exfiltration of operational data, manipulation of control-logic behavior, and establishment of covert command-and-control channels through unused memory. We validate PCaaD on widely used PLCs by identifying practical attacks.
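To make the exploitation path concrete, the following is a minimal sketch of the kind of probing PCaaD enables, assuming a Siemens S7 PLC reachable over S7comm and the python-snap7 client library; the data-block number and byte offset are hypothetical placeholders standing in for the predictable instance-data layout of a vendor library function, and are not taken from the paper.

```python
# Hypothetical PCaaD-style probe: read operational data from the
# instance data block of a standard library function. The DB number
# and offset below are illustrative assumptions, not real values.
import snap7
from snap7.util import get_real

PLC_IP = "192.168.0.10"  # assumed address of the target PLC
DB_NUMBER = 101          # hypothetical instance DB of a library block
SETPOINT_OFFSET = 0      # hypothetical byte offset of a REAL setpoint

client = snap7.client.Client()
client.connect(PLC_IP, 0, 1)  # rack 0, slot 1 (a typical configuration)

# If the library block's memory layout is known in advance, a single
# network read exfiltrates live process values from the PLC alone,
# without access to the engineering workstation or project files.
raw = client.db_read(DB_NUMBER, SETPOINT_OFFSET, 4)
print("Observed setpoint:", get_real(raw, 0))

client.disconnect()
```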
Penetration testing is a well-established practical concept for identifying potentially exploitable security weaknesses and an important component of a security audit. Providing a holistic security assessment for networks consisting of several hundred hosts is, however, hardly feasible without some form of mechanization. Mitigation, i.e., prioritizing counter-measures subject to a given budget, currently lacks a solid theoretical understanding and is hence more art than science. In this work, we propose the first approach for conducting comprehensive what-if analyses in order to reason about mitigation in a conceptually well-founded manner. To evaluate and compare mitigation strategies, we use simulated penetration testing, i.e., automated attack-finding, based on a network model to which a subset of a given set of mitigation actions, e.g., changes to the network topology, system updates, or configuration changes, is applied. Using Stackelberg planning, we determine optimal combinations that minimize the maximal attacker success (analogous to a Stackelberg game), thus providing a well-founded basis for a holistic mitigation strategy. We show that these Stackelberg planning models can largely be derived from network scans, public vulnerability databases, and manual inspection, with varying degrees of automation and detail, and we simulate mitigation analysis on networks of varying size and vulnerability.
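As a concrete illustration of the min-max structure, the following Python sketch brute-forces the Stackelberg optimum on a toy model; the attacks, mitigation actions, costs, and budget are invented for illustration and do not reflect the paper's planning encoding.

```python
from itertools import combinations

# Toy model: an attack succeeds unless one of its blocking
# mitigations is applied; each attack has a success value.
attacks = {
    "exploit-web-srv": {"value": 10, "blocked_by": {"patch-web"}},
    "pivot-to-db":     {"value": 8,  "blocked_by": {"segment-net", "patch-db"}},
    "phish-admin":     {"value": 6,  "blocked_by": {"mfa"}},
}
mitigation_cost = {"patch-web": 2, "patch-db": 3, "segment-net": 4, "mfa": 1}
BUDGET = 5

def attacker_best(applied: set) -> int:
    """Attacker's best response: the highest-value unblocked attack."""
    return max((a["value"] for a in attacks.values()
                if not (a["blocked_by"] & applied)), default=0)

def optimal_mitigation():
    """Defender leads: choose the affordable mitigation subset that
    minimizes the attacker's best-response value (min-max)."""
    best_val, best_set = float("inf"), frozenset()
    actions = list(mitigation_cost)
    for r in range(len(actions) + 1):
        for subset in combinations(actions, r):
            if sum(mitigation_cost[m] for m in subset) > BUDGET:
                continue
            val = attacker_best(set(subset))
            if val < best_val:
                best_val, best_set = val, frozenset(subset)
    return best_val, best_set

print(optimal_mitigation())  # -> (6, frozenset({'patch-web', 'patch-db'}))
```

A real instance replaces the brute-force loop with Stackelberg planning over attack plans, but the leader-follower objective is the same.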
Case-hindering, multi-year digital forensic evidence backlogs have become commonplace in law enforcement agencies throughout the world. This is due to an ever-growing number of cases requiring digital forensic investigation, coupled with the growing volume of data to be processed per case. Leveraging previously processed digital forensic cases and their component artefact relevancy classifications provides an opportunity for training automated, artificial-intelligence-based evidence processing systems, which can significantly aid investigators in the discovery and prioritisation of evidence. This paper presents one approach to file artefact relevancy determination, building on the growing trend towards a centralised Digital Forensics as a Service (DFaaS) paradigm. The approach enables the use of previously encountered pertinent files to classify newly discovered files in an investigation. Trained models can aid in the detection of these files during the acquisition stage, i.e., during their upload to a DFaaS system. The technique generates a relevancy score for file similarity using each artefact's filesystem metadata and associated timeline events. The approach presented is validated against three experimental usage scenarios.
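A minimal sketch of such a metadata-based relevancy score, in Python; the chosen features, weights, and the one-hour timeline window are illustrative assumptions, not the trained model described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class FileArtefact:
    """Filesystem metadata plus timeline events for one file artefact."""
    name: str
    extension: str
    size: int            # bytes
    mtime: float         # last-modified time, Unix epoch
    events: set = field(default_factory=set)  # e.g. {"usb-mount", "browser-dl"}

def relevancy_score(candidate: FileArtefact, pertinent: FileArtefact) -> float:
    """Hypothetical weighted similarity between a newly uploaded file
    and a file classified as relevant in a previous case."""
    ext = 1.0 if candidate.extension == pertinent.extension else 0.0
    size = 1.0 - abs(candidate.size - pertinent.size) / max(
        candidate.size, pertinent.size, 1)
    # Timeline proximity: modifications within one hour count as correlated.
    time = 1.0 if abs(candidate.mtime - pertinent.mtime) < 3600 else 0.0
    shared = candidate.events & pertinent.events
    union = candidate.events | pertinent.events
    events = len(shared) / max(len(union), 1)
    # Illustrative weights; a DFaaS deployment would learn these from
    # previously processed cases rather than hard-coding them.
    return 0.2 * ext + 0.2 * size + 0.2 * time + 0.4 * events

new = FileArtefact("inv.pdf", "pdf", 52_000, 1_700_001_200, {"browser-dl"})
old = FileArtefact("invoice.pdf", "pdf", 50_000, 1_700_000_000, {"browser-dl"})
print(f"{relevancy_score(new, old):.2f}")  # high score: flag for an investigator
```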
Substantial efforts are invested in improving network security, but the threat landscape is rapidly evolving, particularly with the recent interest in programmable network hardware. We explore a new security threat: an attacker who has gained control of such devices. While it should be obvious that such attackers can trivially cause substantial damage, the challenge and novelty lie in doing so while preventing quick diagnosis by the operator. We find that compromised programmable devices can easily degrade networked applications by orders of magnitude while evading diagnosis by even the most sophisticated network diagnosis methods in deployment. Two key observations yield this result: (a) targeting a small number of packets is often enough to cause disproportionate performance degradation; and (b) new programmable hardware is an effective enabler of careful, selective targeting of packets. Our results also point to recommendations for minimizing the damage from such attacks, ranging from known, easy-to-implement techniques such as encryption and redundant requests, to more complex considerations that would potentially limit some intended uses of programmable hardware. For data center contexts, we also discuss application-aware monitoring and response as a potential mitigation.
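Observation (a) is easy to reproduce in simulation. The sketch below, in Python, models a fan-out request whose completion time is the slowest of many sub-responses, a common data center pattern; the fan-out size, baseline latency, and injected delay are invented for illustration. Delaying roughly 1% of packets inflates end-to-end latency several-fold, while per-packet aggregate statistics barely move.

```python
import random
from statistics import mean

random.seed(7)

FANOUT = 100             # sub-requests per user request (assumed)
BASE_MS = 10.0           # mean sub-response latency (assumed)
ATTACK_DELAY_MS = 200.0  # delay added by a compromised device (assumed)
TARGET_FRACTION = 0.01   # attacker touches only ~1% of packets

def request_latency(attacked: bool) -> float:
    """Completion time = max over all sub-responses (tail-sensitive)."""
    worst = 0.0
    for _ in range(FANOUT):
        lat = random.expovariate(1 / BASE_MS)
        if attacked and random.random() < TARGET_FRACTION:
            lat += ATTACK_DELAY_MS  # selective, low-volume interference
        worst = max(worst, lat)
    return worst

baseline = mean(request_latency(False) for _ in range(10_000))
attacked = mean(request_latency(True) for _ in range(10_000))
print(f"baseline: {baseline:.0f} ms, under attack: {attacked:.0f} ms")
# Typical output: ~50 ms baseline vs. ~150 ms under attack, i.e. a
# ~3x slowdown from touching about one packet per hundred.
```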
Industrial cyber-physical systems (ICPSs) manage critical infrastructures by controlling processes based on physics data gathered by edge sensor networks. Recent innovations in ubiquitous computing and communication technologies have prompted the rapid integration of highly interconnected systems into ICPSs, so the security-by-obscurity principle afforded by air-gapping no longer holds. As interconnectivity in ICPSs increases, so does the attack surface. Industrial vulnerability assessment reports have shown that a variety of new vulnerabilities have emerged as a result of this transition, the most common of which relate to weak boundary protection. Although there are existing surveys in this context, very little is said about these reports. This paper bridges this gap by defining and reviewing ICPSs from a cybersecurity perspective. In particular, a multi-dimensional, adaptive attack taxonomy is presented and used to evaluate real-life ICPS cyber incidents. We also identify general shortcomings, highlight the gaps in the existing literature, and define future research directions.
In this work, the orbital dynamics of Earth satellites about the geosynchronous altitude are explored, with the primary goal of assessing current mitigation guidelines and discussing the future exploitation of the region. A thorough dynamical mapping was conducted on a high-definition grid of orbital elements, enabled by a fast and accurate semi-analytical propagator that considers all the relevant perturbations. The results are presented in appropriately selected stability maps to highlight the underlying mechanisms and their interplay, which can lead to stable graveyard orbits or fast re-entry pathways. The natural separation of the long-term evolution between equatorial and inclined satellites is discussed in terms of post-mission disposal strategies. Moreover, we confirm the existence of an effective cleansing mechanism for inclined geosynchronous satellites and discuss its implications for current guidelines, as well as alternative mission designs that could lead to a sustainable use of the geosynchronous orbital region.