The prominence and use of the concept of cyber risk has been rising in recent years. This paper presents empirical investigations focused on two important and distinct groups within the broad community of cyber-defense professionals and researchers: (1) cyber practitioners and (2) developers of cyber ontologies. The key finding of this work is that the way the concept of cyber risk is treated by cybersecurity practitioners is largely inconsistent with the definitions of cyber risk commonly offered in the literature. Contrary to commonly cited definitions of cyber risk, concepts such as the likelihood of an event and the extent of its impact are not used by cybersecurity practitioners. The same holds for the use of these concepts in the current generation of cybersecurity ontologies. Instead, terms and concepts reflecting the adversarial nature of cyber defense appear to take the most prominent roles. This research offers the first quantitative empirical evidence that cybersecurity professionals' rejection of traditional concepts of cyber risk is indeed observed in real-world practice.
802.11 device fingerprinting is the action of characterizing a target device through its wireless traffic. This results in a signature that may be used for identification, network monitoring or intrusion detection. The fingerprinting method can be active, by sending traffic to the target device, or passive, by merely observing the traffic sent by the target device. Many passive fingerprinting methods rely on the observation of one particular network feature, such as the rate-switching behavior or the transmission pattern of probe requests. In this work, we evaluate a set of global wireless network parameters with respect to their ability to identify 802.11 devices. We restrict ourselves to parameters that can be observed passively using a standard wireless card. We evaluate these parameters for two different tests: i) the identification test, which returns a single result that is the closest match for the target device, and ii) the similarity test, which returns a set of devices that are close to the target device. We find that the network parameters transmission time and frame inter-arrival time perform best in comparison to the other network parameters considered. Finally, we focus on inter-arrival times, the most promising parameter for device identification, and show their dependence on several device characteristics such as the wireless card and driver, but also on running applications.
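To illustrate how an inter-arrival-time signature could be built and compared, the following Python sketch (not taken from the paper; the histogram bins, the total-variation distance and the data layout are illustrative assumptions) performs an identification test over passively collected frame timestamps grouped by source device.

# Illustrative sketch: device identification via frame inter-arrival times.
# Assumes frames were captured passively and grouped by source MAC address,
# each device being represented by a list of frame arrival timestamps (seconds).
import numpy as np

def inter_arrival_signature(timestamps, bins):
    """Normalized histogram of inter-arrival times for one device."""
    deltas = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    hist, _ = np.histogram(deltas, bins=bins)
    return hist / max(hist.sum(), 1)

def identification_test(target_timestamps, known_devices, bins=None):
    """Return the known device whose signature is closest to the target's,
    using total variation distance between histograms (an assumed metric)."""
    if bins is None:
        bins = np.logspace(-5, 1, 50)   # 10 microseconds to 10 seconds
    target = inter_arrival_signature(target_timestamps, bins)
    def distance(ts):
        return 0.5 * np.abs(inter_arrival_signature(ts, bins) - target).sum()
    return min(known_devices, key=lambda dev: distance(known_devices[dev]))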
The increasing value of data held in enterprises makes it an attractive target to attackers. The increasing likelihood and impact of a cyber attack have highlighted the importance of effective cyber risk estimation. We propose two methods for modelling Value-at-Risk (VaR) which can be used for any time-series data. The first approach is based on Quantile Autoregression (QAR), which can estimate VaR for different quantiles, i.e. confidence levels. The second method, which we term Competitive Quantile Autoregression (CQAR), dynamically re-estimates cyber risk as soon as new data becomes available. This method provides a theoretical guarantee that it asymptotically performs as well as any QAR at any time point in the future. We show that these methods can predict the size and inter-arrival time of cyber hacking breaches by running coverage tests. The proposed approaches model a separate stochastic process for each significance level and therefore provide more flexibility than previously proposed techniques. We provide fully reproducible code for the experiments.
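A minimal sketch of the quantile-autoregression idea for VaR is given below, assuming statsmodels' QuantReg and a first-order autoregression; the lag order, quantile and the interpretation of the input series (e.g. daily breach sizes) are illustrative assumptions rather than the paper's exact model.

# Illustrative sketch: one-step-ahead Value-at-Risk via quantile autoregression.
# Regresses the current observation on its lagged value at a chosen quantile and
# uses the fitted conditional quantile as the VaR forecast for the next step.
import numpy as np
import statsmodels.api as sm

def qar_var(series, quantile=0.95, lag=1):
    """Fit y_t = a + b * y_{t-lag} at the given quantile, forecast the next value."""
    y = np.asarray(series, dtype=float)
    X = sm.add_constant(y[:-lag])                        # lagged regressor
    fit = sm.QuantReg(y[lag:], X).fit(q=quantile)
    x_next = sm.add_constant(np.array([y[-lag]]), has_constant='add')
    return float(fit.predict(x_next)[0])                 # VaR estimate at this quantile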
In the proof-of-stake (PoS) paradigm for maintaining decentralized, permissionless cryptocurrencies, Sybil attacks are prevented by basing the distribution of roles in the protocol execution on the stake distribution recorded in the ledger itself. However, for various reasons this distribution cannot be completely up-to-date, introducing a gap between the present stake distribution, which determines the parties' current incentives, and the one used by the protocol. In this paper, we investigate this issue and empirically quantify its effects. We survey existing provably secure PoS proposals and observe that the above time gap between the two stake distributions, which we call stake distribution lag, amounts to several days for each of these protocols. Based on this, we investigate the ledgers of four major cryptocurrencies (Bitcoin, Bitcoin Cash, Litecoin and Zcash) and compute the average stake shift (the statistical distance between the two distributions) for each value of stake distribution lag between 1 and 14 days, as well as related statistics. We also empirically quantify the sublinear growth of stake shift with the length of the considered lag interval. Finally, we turn our attention to unusual stake-shift spikes in these currencies: we observe that hard forks trigger major stake shifts and that single real-world actors, mostly exchanges, account for major stake shifts in established cryptocurrency ecosystems.
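Since stake shift is described as the statistical distance between two stake distributions taken a fixed lag apart, a small sketch of that computation may help; representing the two ledger snapshots as address-to-balance dictionaries is an assumption made purely for illustration.

# Illustrative sketch: "stake shift" as the statistical (total variation) distance
# between two stake distributions taken some lag apart.
# Snapshots are assumed to be dicts mapping address -> balance.
def stake_shift(snapshot_old, snapshot_new):
    total_old = sum(snapshot_old.values()) or 1
    total_new = sum(snapshot_new.values()) or 1
    addresses = set(snapshot_old) | set(snapshot_new)
    # Total variation distance: half the L1 distance between the two
    # normalized stake distributions.
    return 0.5 * sum(
        abs(snapshot_old.get(a, 0) / total_old - snapshot_new.get(a, 0) / total_new)
        for a in addresses
    )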
Some recent incidents have shown that the vulnerability of IT systems in railway automation may have been underestimated. Fortunately, so far almost all successful attacks have been denial-of-service attacks, but due to several trends, such as the use of commercial IT and communication systems or privatization, the threat potential could increase in the near future. However, up to now no harmonized IT security risk assessment framework for railway automation exists. This paper defines an IT security risk assessment framework which aims to separate IT security and safety requirements, as well as certification processes, as far as possible. It builds on the well-known safety and approval processes from IEC 62425 and integrates IT security requirements based on the ISA99/IEC62443 standard series. While the detailed results relate to railway automation, the general concepts are also applicable to other safety-critical application areas.
Within the future Global Information Grid, complex massively interconnected systems, isolated defense vehicles, sensors and effectors, and infrastructures and systems demanding extremely low failure rates, to which human security operators cannot have easy access and for which they cannot deliver reactions to cyber-attacks fast enough, need an active, autonomous and intelligent cyber defense. Multi-Agent Systems for Cyber Defense may provide an answer to this requirement. This paper presents the concept and architecture of an Autonomous Intelligent Cyber defense Agent (AICA). First, we describe the rationale of the AICA concept. Secondly, we explain the methodology and purpose that drive the definition of the AICA Reference Architecture (AICARA) by NATO's IST-152 Research and Technology Group. Thirdly, we review some of the main features and challenges of Multi Autonomous Intelligent Cyber defense Agents (MAICA). Fourthly, we depict the initially assumed AICA Reference Architecture. Then we present one of our preliminary research issues, assumptions and ideas. Finally, we present the future lines of research that will help develop and test the AICA / MAICA concept.