
A Theory of Hypergames on Graphs for Synthesizing Dynamic Cyber Defense with Deception

Published by: Abhishek Kulkarni
Publication date: 2020
Research field: Informatics Engineering
Language: English





In this chapter, we present an approach using formal methods to synthesize a reactive defense strategy in a cyber network equipped with a set of decoy systems. We first generalize formal graphical security models (attack graphs) to incorporate the defender's countermeasures in a game-theoretic model, called an attack-defend game on a graph. This game captures the dynamic interactions between the defender and the attacker, with their defense/attack objectives expressed in formal logic. Then, we introduce a class of hypergames to model the asymmetric information created by decoys in the attacker-defender interaction. Given qualitative security specifications in formal logic, we show that solution concepts from hypergames and reactive synthesis in formal methods can be extended to synthesize an effective dynamic defense strategy using cyber deception. The strategy takes advantage of the attacker's misperception to ensure the security specification is satisfied, which may not be possible when the information is symmetric.
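To make the game-on-graph model concrete, the sketch below (an illustration written for this summary, not the authors' implementation) encodes a small turn-based attack-defend game and computes the defender's winning region for a reachability objective using the standard attractor fixpoint from reactive synthesis. The toy network and state names are hypothetical; the hypergame layer of the paper would additionally track the attacker's misperceived copy of this game.

```python
from collections import defaultdict

class AttackDefendGame:
    """Turn-based game on a graph: the defender moves from defender states,
    the attacker moves from attacker states."""

    def __init__(self, defender_states, attacker_states, edges):
        self.defender_states = set(defender_states)
        self.attacker_states = set(attacker_states)
        self.succ = defaultdict(set)
        for u, v in edges:
            self.succ[u].add(v)

    def defender_attractor(self, target):
        """States from which the defender can force the play into `target`,
        no matter how the attacker moves (standard fixpoint computation)."""
        win = set(target)
        changed = True
        while changed:
            changed = False
            for s in self.defender_states | self.attacker_states:
                if s in win:
                    continue
                succ = self.succ[s]
                if s in self.defender_states and succ & win:
                    win.add(s)      # defender has some move into the winning set
                    changed = True
                elif s in self.attacker_states and succ and succ <= win:
                    win.add(s)      # every attacker move stays in the winning set
                    changed = True
        return win


# Hypothetical toy network: the defender's objective is to lure the attacker
# into a decoy. In the hypergame view, the attacker reasons about a different
# game in which the decoy looks like a genuine target.
game = AttackDefendGame(
    defender_states={"d0", "d1"},
    attacker_states={"a0", "a1"},
    edges=[("d0", "a0"), ("a0", "d1"), ("a0", "a1"),
           ("d1", "decoy"), ("a1", "decoy")],
)
print(game.defender_attractor({"decoy"}))  # every state is winning in this toy graph
```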



Read also

We introduce a deceptive signaling framework as a new defense measure against advanced adversaries in cyber-physical systems. In general, adversaries look for system-related information, e.g., the underlying state of the system, in order to learn the system dynamics and to receive useful feedback regarding the success or failure of their actions so as to carry out their malicious task. To this end, we craft the information that is accessible to adversaries strategically in order to control their actions in a way that benefits the system, indirectly and without any explicit enforcement. Under the solution concept of game-theoretic hierarchical equilibrium, we arrive at a semi-definite programming problem equivalent to the infinite-dimensional optimization problem faced by the defender while selecting the best strategy when the information of interest is Gaussian and both sides have quadratic cost functions. The equivalence result also holds for scenarios where the defender has partial or noisy measurements or where the objective of the adversary is not known. We show the optimality of the linear signaling rule within the general class of measurable policies in communication scenarios and also compute the optimal linear signaling rule in control scenarios.
Cyber-security is an important societal concern. Cyber-attacks have increased in number as well as in the extent of damage caused by each attack. Large organizations operate a Cyber Security Operation Center (CSOC), which forms the first line of cyber-defense. The inspection of cyber-alerts is a critical part of CSOC operations. A recent work, in collaboration with the Army Research Lab, USA, proposed a reinforcement learning (RL) based approach to prevent the cyber-alert queue length from growing large and overwhelming the defender. Given the potential deployment of this approach to CSOCs run by US defense agencies, we perform a red team (adversarial) evaluation of it. Further, with the recent attacks on learning systems, it is even more important to test the limits of this RL approach. Towards that end, we learn an adversarial alert generation policy that is a best response to the defender inspection policy. Surprisingly, we find the defender policy to be quite robust to the best response of the attacker. In order to explain this observation, we extend the earlier RL model to a game model and show that there exist defender policies that can be robust against any adversarial policy. We also derive a competitive baseline from the game theory model and compare it to the RL approach. However, we go further to exploit assumptions made in the MDP of the RL model and discover an attacker policy that overwhelms the defender. We use a double oracle approach to retrain the defender with episodes from this discovered attacker policy, which made the defender robust to it, and no further harmful attacker policies were discovered. Overall, adversarial RL and the double oracle approach are general techniques that are applicable to other uses of RL in adversarial environments.
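As a concrete illustration of the double-oracle step mentioned above, the following is a minimal sketch (not the authors' code, and on an abstract zero-sum matrix game rather than their alert-inspection model): keep restricted strategy sets, solve the restricted game, add each player's best response from the full strategy space, and stop once neither side can gain. The payoff function and sizes are hypothetical.

```python
# Minimal double-oracle sketch on a zero-sum matrix game (illustrative only).
import numpy as np
from scipy.optimize import linprog

def maxmin_mixture(A):
    """Equilibrium mixture and value for the row player (maximizer) of matrix A."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize the game value v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - x^T A[:, j] <= 0 for each column j
    A_eq = np.hstack([np.ones((1, m)), [[0.0]]])   # mixture sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

def double_oracle(payoff, n_def, n_att, tol=1e-6):
    """payoff(i, j) = defender utility when the defender plays i, the attacker plays j."""
    D, A = [0], [0]                                # restricted strategy index sets
    while True:
        M = np.array([[payoff(i, j) for j in A] for i in D])
        x, v = maxmin_mixture(M)                   # defender mixture over D, restricted value
        y, _ = maxmin_mixture(-M.T)                # attacker mixture over A
        # Best responses computed over the FULL strategy spaces.
        def_val = lambda i: sum(y[k] * payoff(i, A[k]) for k in range(len(A)))
        att_val = lambda j: sum(x[k] * payoff(D[k], j) for k in range(len(D)))
        br_d = max(range(n_def), key=def_val)
        br_a = min(range(n_att), key=att_val)
        converged = def_val(br_d) - v <= tol and v - att_val(br_a) <= tol
        if converged or (br_d in D and br_a in A):
            return D, x, A, y, v
        if br_d not in D:
            D.append(br_d)
        if br_a not in A:
            A.append(br_a)

# Toy run with a hypothetical payoff function.
D, x, A, y, v = double_oracle(lambda i, j: -abs(i - j), n_def=5, n_att=5)
print("restricted defender strategies:", D, "value:", round(v, 3))
```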
Cyber deception has recently received increasing attention as a promising mechanism for proactive cyber defense. Cyber deception strategies aim at injecting intentionally falsified information to sabotage the early stages of attack reconnaissance and planning in order to render the final attack action harmless or ineffective. Motivated by recent advances in cyber deception research, in this paper we provide a formal view of cyber deception and review high-level deception schemes and actions. We also summarize and classify recent research results on cyber defense techniques built upon the concept of cyber deception, including game-theoretic modeling at the strategic level, network-level deception, in-host-system deception, and cryptography-based deception. Finally, we lay out and discuss in detail the research challenges towards developing full-fledged cyber deception frameworks and mechanisms.
The increasing value of data held in enterprises makes it an attractive target to attackers. The increasing likelihood and impact of a cyber attack have highlighted the importance of effective cyber risk estimation. We propose two methods for modelling Value-at-Risk (VaR) which can be used for any time-series data. The first approach is based on Quantile Autoregression (QAR), which can estimate VaR for different quantiles, i.e., confidence levels. The second method, which we term Competitive Quantile Autoregression (CQAR), dynamically re-estimates cyber risk as soon as new data becomes available. This method provides a theoretical guarantee that it asymptotically performs as well as any QAR at any time point in the future. We show that these methods can predict the size and inter-arrival time of cyber hacking breaches by running coverage tests. The proposed approaches allow modelling a separate stochastic process for each significance level and therefore provide more flexibility compared to previously proposed techniques. We provide fully reproducible code used for conducting the experiments.
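To illustrate the QAR idea in the abstract above, here is a minimal sketch (assuming a univariate breach-size series and using statsmodels' quantile regression, not the authors' released code): regress the series on its own lags at the desired confidence level and read the fitted conditional quantile as the one-step-ahead VaR estimate. All variable names and the synthetic data are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def qar_var(series, n_lags=1, q=0.95):
    """One-step-ahead VaR estimate at level q via quantile autoregression."""
    n = len(series)
    y = series[n_lags:]
    # Design matrix of lagged values: column k holds lag (k + 1).
    X = np.column_stack([series[n_lags - k - 1: n - k - 1] for k in range(n_lags)])
    model = QuantReg(y, sm.add_constant(X)).fit(q=q)
    # Most recent lags, used to predict the conditional quantile for the next step.
    latest = np.column_stack([series[n - k - 1: n - k] for k in range(n_lags)])
    latest = sm.add_constant(latest, has_constant="add")
    return float(model.predict(latest)[0])

# Toy usage on synthetic heavy-tailed "breach size" data.
rng = np.random.default_rng(0)
sizes = rng.pareto(2.5, size=500) + 1.0
print(qar_var(sizes, n_lags=2, q=0.95))
```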
Cybersecurity tools are increasingly automated with artificial intelligence (AI) capabilities to match the exponential scale of attacks, compensate for the relatively slower rate of training new cybersecurity talent, and improve the accuracy and performance of both tools and users. However, the safe and appropriate usage of autonomous cyber attack tools, especially at the development stage of these tools, is still largely an unaddressed gap. Our survey of current literature and tools showed that most existing cyber range designs rely on manual tools and have not considered accommodating automated tools or the potential security issues caused by them. In other words, there is still room for a novel cyber range design which allows security researchers to safely deploy autonomous tools and perform automated tool testing if needed. In this paper, we introduce Pandora, a safe testing environment which allows security researchers and cyber range users to perform experiments on automated cyber attack tools that may have strong potential for usage and, at the same time, strong potential for risk. Unlike existing testbeds and cyber ranges which have direct compatibility with enterprise computer systems and the potential for risk propagation across the enterprise network, our test system is intentionally designed to be incompatible with real-world enterprise computing systems to reduce the risk of attack propagation into actual infrastructure. Our design also provides a tool to convert in-development automated cyber attack tools into executable test binaries for validation and usage in realistic enterprise system environments if required. Our experiments tested automated attack tools on the proposed system to validate the usability of the proposed environment. Our experiments also demonstrated the safety of the environment through compatibility testing using simple malicious code.
