Insider threats are among today's most challenging cybersecurity issues and are not well addressed by commonly employed security solutions. Despite several scientific works published in this domain, we argue that the field can benefit from the proposed structural taxonomy and novel categorization of research, which contribute to the organization and disambiguation of insider threat incidents and the defense solutions used against them. The objective of our categorization is to systematize knowledge in insider threat research, while leveraging an existing grounded-theory method for rigorous literature review. The proposed categorization depicts the workflow among particular categories, which include: 1) incidents and datasets, 2) analysis of attackers, 3) simulations, and 4) defense solutions. Special attention is paid to the definitions and taxonomies of the insider threat; we present a structural taxonomy of insider threat incidents, based on existing taxonomies and the 5W1H questions of the information-gathering problem. Our survey will enhance researchers' efforts in the insider threat domain because it provides: a) a novel structural taxonomy that contributes to the orthogonal classification of incidents and to defining the scope of defense solutions employed against them; b) an updated overview of publicly available datasets that can be used to test new detection solutions against other works; c) references to existing case studies and frameworks modeling insiders' behaviors, for the purpose of reviewing defense solutions or extending their coverage; and d) a discussion of existing trends and further research directions that can inform reasoning in the insider threat domain.
Insider threats entail major security issues in geopolitics, cyber risk management, and business organization. The game-theoretic models proposed so far do not take into account important factors such as the organisational culture and whether or not the attacker was detected. They also fail to model the defensive mechanisms an organisation has already put in place to mitigate an insider attack. We propose two new models that incorporate these settings and hence are more realistic. We use the adversarial risk analysis (ARA) approach to find the solution to our models. ARA does not assume common knowledge and solves the problem from the point of view of one of the players, taking into account their knowledge and uncertainties regarding the choices available to them and to their adversaries, the possible outcomes, their utilities, and their opponents' utilities. Our models and the ARA solutions are general and can be applied to most insider threat scenarios. A data security example illustrates the discussion.
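The ARA idea described above can be sketched numerically: the defender maximizes expected utility while expressing uncertainty about the insider's action as a subjective probability distribution, rather than assuming a common-knowledge equilibrium. The following is a minimal illustration, not the paper's actual model; all defense names, probabilities, and utilities are hypothetical.

```python
# Hypothetical ARA-style calculation: the defender picks the defense with the
# highest expected utility, given subjective beliefs about the insider's action
# under each defense. All numbers below are illustrative assumptions.

defenses = ["monitoring", "access_limits"]
attacks = ["exfiltrate", "sabotage", "none"]

# Defender's subjective beliefs about the insider's action under each defense.
attack_prob = {
    "monitoring":    {"exfiltrate": 0.2, "sabotage": 0.1, "none": 0.7},
    "access_limits": {"exfiltrate": 0.3, "sabotage": 0.2, "none": 0.5},
}

# Defender's utility for each (defense, attack) outcome.
utility = {
    ("monitoring", "exfiltrate"): -50, ("monitoring", "sabotage"): -80,
    ("monitoring", "none"): -5,
    ("access_limits", "exfiltrate"): -60, ("access_limits", "sabotage"): -90,
    ("access_limits", "none"): -2,
}

def expected_utility(defense):
    """Expected utility of a defense under the defender's beliefs."""
    return sum(attack_prob[defense][a] * utility[(defense, a)] for a in attacks)

best = max(defenses, key=expected_utility)
print(best, expected_utility(best))   # -> monitoring -21.5
```

In a full ARA treatment the attack probabilities would themselves be derived by modeling the insider's own decision problem under uncertainty; this sketch only shows the defender-side expected-utility step.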
Insider threat detection has been a challenging task for decades. Existing approaches generally employ traditional generative unsupervised learning methods to build a model of normal user behavior and detect significant deviations as anomalies. However, such approaches fall short in precision and computational complexity. In this paper, we propose a novel insider threat detection method, the Image-based Insider Threat Detector via Geometric Transformation (IGT), which converts unsupervised anomaly detection into a supervised image classification task, so that performance can be boosted via computer vision techniques. Specifically, IGT uses a novel image-based feature representation of user behavior, transforming audit logs into grayscale images. By applying multiple geometric transformations to these behavior images, IGT constructs a self-labelled dataset and then trains a behavior classifier to detect anomalies in a self-supervised manner. The motivation behind the proposed method is that images converted from normal behavior data contain unique latent features that remain unchanged under geometric transformation, whereas malicious ones do not. Experimental results on the CERT dataset show that IGT outperforms classical autoencoder-based unsupervised insider threat detection approaches, improving the instance-based and user-based Area Under the Receiver Operating Characteristic Curve (AUROC) by 4% and 2%, respectively.
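The self-labelled dataset construction described above can be sketched as follows: each behavior grayscale image is expanded into its geometrically transformed copies, labelled by the index of the transformation applied, so a classifier can be trained to predict which transformation was used. This is an illustrative sketch, not the paper's implementation; the choice of the eight dihedral transformations and the toy image sizes are assumptions.

```python
import numpy as np

def geometric_transforms(img):
    """Return the 8 dihedral transformations (rotations + mirror flips) of a 2-D image."""
    out = []
    for k in range(4):                 # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        out.append(rot)
        out.append(np.fliplr(rot))     # each rotation plus its horizontal mirror
    return out

def build_self_labelled_dataset(images):
    """Expand N images into 8N (image, transformation-label) pairs."""
    xs, ys = [], []
    for img in images:
        for label, transformed in enumerate(geometric_transforms(img)):
            xs.append(transformed)
            ys.append(label)
    return np.stack(xs), np.array(ys)

# Toy "behavior images" standing in for audit logs rendered as grayscale pixels.
rng = np.random.default_rng(0)
imgs = [rng.random((16, 16)) for _ in range(4)]
X, y = build_self_labelled_dataset(imgs)
print(X.shape, y.shape)   # -> (32, 16, 16) (32,)
```

At detection time, the trained transformation classifier's confidence on an unseen image's transformed copies serves as the normality score: normal behavior yields confident, correct predictions while anomalous behavior does not.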
Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most research and development focused on delivering the promise of MR, only a few efforts address the privacy and security implications of this technology. This survey paper aims to bring these risks to light and to examine the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as related work from the larger area of mobile devices, wearables, and the Internet of Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions for MR security and privacy are also discussed.
Cyber-physical systems (CPS) are interconnected architectures that employ analog, digital, and communication resources for their interaction with the physical environment. CPS are the backbone of enterprise, industrial, and critical infrastructure. Their vital importance thus makes them prominent targets for malicious attacks aiming to disrupt their operations. Attacks targeting cyber-physical energy systems (CPES), given their mission-critical nature, can have disastrous consequences. The security of CPES can be enhanced by leveraging testbed capabilities to replicate power system operations, discover vulnerabilities, develop security countermeasures, and evaluate grid operation under fault-induced or maliciously constructed scenarios. In this paper, we provide a comprehensive overview of the CPS security landscape with emphasis on CPES. Specifically, we demonstrate a threat modeling methodology to accurately represent the CPS elements, their interdependencies, and the possible attack entry points and system vulnerabilities. Leveraging the threat model formulation, we present a CPS framework designed to delineate the hardware, software, and modeling resources required to simulate the CPS and construct high-fidelity models that can be used to evaluate the system's performance under adverse scenarios. The system performance is assessed using scenario-specific metrics, while risk assessment enables system-vulnerability prioritization by factoring in the impact on system operation. The overarching framework for modeling, simulating, assessing, and mitigating attacks in a CPS is illustrated using four representative attack scenarios targeting CPES. The key objective of this paper is to demonstrate a step-by-step process that can be used to enact in-depth cybersecurity analyses, thus leading to more resilient and secure CPS.
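The risk-assessment step mentioned above, ranking vulnerabilities by their impact on system operation, is commonly reduced to a likelihood-times-impact score. The following minimal sketch illustrates that generic scoring pattern; the vulnerability names, likelihoods, and impact values are entirely hypothetical and not taken from the paper.

```python
# Illustrative likelihood x impact risk scoring for vulnerability prioritization.
# All entries below are hypothetical examples, not results from the paper.

vulns = [
    {"name": "unauthenticated control-protocol write", "likelihood": 0.7, "impact": 9},
    {"name": "outdated HMI firmware",                  "likelihood": 0.4, "impact": 6},
    {"name": "weak remote-access credentials",         "likelihood": 0.6, "impact": 8},
]

for v in vulns:
    v["risk"] = v["likelihood"] * v["impact"]   # simple composite risk score

# Highest-risk vulnerabilities first, for mitigation prioritization.
ranked = sorted(vulns, key=lambda v: v["risk"], reverse=True)
for v in ranked:
    print(f'{v["name"]}: {v["risk"]:.1f}')
```

Real CPES risk assessments weight impact by operational consequences (e.g., load shed or equipment damage), but the prioritization step follows this same ranking structure.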
Increasing numbers of mobile computing devices, user-portable or embedded in vehicles, cargo containers, or the physical space, need to be aware of their location in order to provide a wide range of commercial services. Most often, mobile devices obtain their own location with the help of Global Navigation Satellite Systems (GNSS), integrating, for example, a Global Positioning System (GPS) receiver. Nonetheless, an adversary can compromise location-aware applications by attacking GNSS-based positioning: it can forge navigation messages and mislead the receiver into calculating a fake location. In this paper, we analyze this vulnerability, then propose and evaluate the effectiveness of countermeasures. First, we consider replay attacks, which can be effective even in the presence of future cryptographic GNSS protection mechanisms. Then, we propose and analyze methods that allow GNSS receivers to detect the reception of signals generated by an adversary and reject fake locations calculated because of the attack. We consider three diverse defense mechanisms, all based on knowledge, in particular own location, time, and Doppler shift, that receivers can obtain prior to the onset of an attack. We find that inertial mechanisms that estimate location can be defeated relatively easily. The same is true for the mechanism that relies on clock readings from off-the-shelf devices; as a result, highly stable clocks could be needed. On the other hand, our Doppler Shift Test can be effective without any specialized hardware, and it can be applied to existing devices.
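The Doppler-based check described above can be sketched as a consistency test: the receiver predicts the Doppler shift it should observe for each satellite (from the known orbit and its approximate position) and flags signals whose measured shift deviates beyond a tolerance. This is a hedged illustration of the idea, not the paper's actual Doppler Shift Test; the tolerance and range-rate values are assumptions.

```python
# Sketch of a Doppler-shift consistency check in the spirit of the paper's
# Doppler Shift Test. Numeric values below are illustrative assumptions.

C = 299_792_458.0     # speed of light, m/s
F_L1 = 1_575.42e6     # GPS L1 carrier frequency, Hz

def predicted_doppler(range_rate_mps):
    """Doppler shift (Hz) for a given satellite-receiver range rate (m/s).

    Negative range rate (satellite closing in) gives a positive shift.
    """
    return -range_rate_mps * F_L1 / C

def doppler_test(measured_hz, range_rate_mps, tolerance_hz=50.0):
    """Return True if the measured shift is consistent with the prediction."""
    return abs(measured_hz - predicted_doppler(range_rate_mps)) <= tolerance_hz

# A satellite closing at 600 m/s should show roughly +3.15 kHz of Doppler shift;
# a spoofed signal transmitted from a static nearby source shows nearly none.
print(doppler_test(3150.0, -600.0))   # genuine signal -> True
print(doppler_test(0.0, -600.0))      # inconsistent (spoofed) -> False
```

The practical appeal, as the abstract notes, is that this prediction needs only information the receiver already has before the attack begins, with no specialized hardware.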