
Two Can Play That Game: An Adversarial Evaluation of a Cyber-alert Inspection System

 Added by Arunesh Sinha
 Publication date 2018
Research language: English





Cyber-security is an important societal concern. Cyber-attacks have grown both in number and in the extent of damage caused per attack. Large organizations operate a Cyber Security Operation Center (CSOC), which forms the first line of cyber-defense, and the inspection of cyber-alerts is a critical part of CSOC operations. A recent work, in collaboration with the US Army Research Lab, proposed a reinforcement learning (RL) based approach to prevent the cyber-alert queue from growing so long that it overwhelms the defender. Given the potential deployment of this approach to CSOCs run by US defense agencies, we perform a red team (adversarial) evaluation of it. In light of recent attacks on learning systems, it is all the more important to test the limits of this RL approach. Towards that end, we learn an adversarial alert generation policy that is a best response to the defender's inspection policy. Surprisingly, we find the defender policy to be quite robust to the attacker's best response. To explain this observation, we extend the earlier RL model to a game model and show that there exist defender policies that are robust against any adversarial policy. We also derive a competitive baseline from the game-theoretic model and compare it to the RL approach. We then go further, exploiting assumptions made in the MDP underlying the RL model to discover an attacker policy that overwhelms the defender. Using a double oracle approach, we retrain the defender with episodes from this discovered attacker policy; the retrained defender is robust to it, and no further harmful attacker policies were discovered. Overall, adversarial RL and the double oracle approach are general techniques applicable to other uses of RL in adversarial environments.
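The double oracle procedure mentioned in the abstract can be sketched on a toy zero-sum game. Everything here is illustrative: the payoff matrix, the fictitious-play subgame solver, and the tiny strategy spaces are stand-ins, not the paper's alert-queue MDP or its neural policies.

```python
# Toy zero-sum game: PAYOFF[a][d] is the ATTACKER's utility when the
# attacker plays pure strategy a and the defender plays pure strategy d.
# Matching pennies: the value is 0 and both players mix 50/50.
PAYOFF = [[1, -1],
          [-1, 1]]

def att_value(a, def_mix, def_set):
    """Attacker's expected payoff for pure strategy a vs. defender mix."""
    return sum(p * PAYOFF[a][def_set[j]] for j, p in enumerate(def_mix))

def def_value(d, att_mix, att_set):
    """Defender's expected payoff (negative of attacker's) for pure d."""
    return -sum(p * PAYOFF[att_set[i]][d] for i, p in enumerate(att_mix))

def solve_restricted(att_set, def_set, iters=4000):
    """Approximate equilibrium of the restricted game by fictitious play."""
    ac = [1] + [0] * (len(att_set) - 1)
    dc = [1] + [0] * (len(def_set) - 1)
    for _ in range(iters):
        am = [c / sum(ac) for c in ac]
        dm = [c / sum(dc) for c in dc]
        ac[max(range(len(att_set)),
               key=lambda i: att_value(att_set[i], dm, def_set))] += 1
        dc[max(range(len(def_set)),
               key=lambda j: def_value(def_set[j], am, att_set))] += 1
    return [c / sum(ac) for c in ac], [c / sum(dc) for c in dc]

def double_oracle():
    """Grow each player's strategy set with best responses until no
    new best response exists outside the restricted game."""
    att_set, def_set = [0], [0]
    while True:
        att_mix, def_mix = solve_restricted(att_set, def_set)
        # Best responses over the FULL strategy spaces.
        a_br = max(range(len(PAYOFF)), key=lambda a: att_value(a, def_mix, def_set))
        d_br = max(range(len(PAYOFF[0])), key=lambda d: def_value(d, att_mix, att_set))
        if a_br in att_set and d_br in def_set:
            return att_set, att_mix, def_set, def_mix
        if a_br not in att_set:
            att_set.append(a_br)
        if d_br not in def_set:
            def_set.append(d_br)
```

The loop mirrors the paper's workflow at matrix-game scale: solve the current restricted game, compute each side's best response against the other's equilibrium mix, and retrain (here, re-solve) after adding any new harmful strategy, terminating when no best response lies outside the restricted sets.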




In this chapter, we present an approach that uses formal methods to synthesize a reactive defense strategy in a cyber network equipped with a set of decoy systems. We first generalize formal graphical security models--attack graphs--to incorporate the defender's countermeasures in a game-theoretic model, called an attack-defend game on a graph. This game captures the dynamic interactions between the defender and the attacker, and their defense/attack objectives, in formal logic. Then, we introduce a class of hypergames to model the asymmetric information created by decoys in the attacker-defender interaction. Given qualitative security specifications in formal logic, we show that solution concepts from hypergames and reactive synthesis in formal methods can be extended to synthesize an effective dynamic defense strategy using cyber deception. The strategy takes advantage of the attacker's misperception to ensure the security specification is satisfied, something that may not be achievable when information is symmetric.
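The reachability core of such synthesis can be sketched under strong simplifying assumptions (a finite turn-based game graph, a plain attacker reachability objective, none of the chapter's logic or hypergame machinery): the classic attractor computation. The graph, node names, and ownership labels below are invented for illustration.

```python
def attacker_attractor(nodes, edges, owner, goal):
    """Least fixpoint: states from which the attacker can FORCE reaching `goal`.
    owner[n] == 'A': attacker moves (one edge into the set suffices);
    owner[n] == 'D': defender moves (every edge must lead into the set)."""
    attr = set(goal)
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in attr:
                continue
            succs = edges.get(n, [])
            if owner[n] == 'A' and any(s in attr for s in succs):
                attr.add(n)
                changed = True
            elif owner[n] == 'D' and succs and all(s in attr for s in succs):
                attr.add(n)
                changed = True
    return attr

# Hypothetical toy network: at the firewall node the defender may route
# traffic into a decoy, from which the attacker makes no further progress.
NODES = ['start', 'firewall', 'decoy', 'host', 'root']
OWNER = {'start': 'A', 'firewall': 'D', 'decoy': 'A', 'host': 'A', 'root': 'A'}
EDGES = {'start': ['firewall', 'decoy'],
         'firewall': ['host', 'decoy'],   # defender chooses where to route
         'decoy': ['decoy'],              # decoy traps the attacker
         'host': ['root']}

WIN = attacker_attractor(NODES, EDGES, OWNER, {'root'})
```

Here `start` falls outside the attacker's winning region: routing to the decoy is a winning defender strategy at `firewall`, which is the deception effect the chapter formalizes. A reactive defense strategy can then be read off the complement of the attractor.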
Detection of malicious behavior is a fundamental problem in security. One of the major challenges in using detection systems in practice is in dealing with an overwhelming number of alerts that are triggered by normal behavior (the so-called false positives), obscuring alerts resulting from actual malicious activity. While numerous methods for reducing the scope of this issue have been proposed, ultimately one must still decide how to prioritize which alerts to investigate, and most existing prioritization methods are heuristic, for example, based on suspiciousness or priority scores. We introduce a novel approach for computing a policy for prioritizing alerts using adversarial reinforcement learning. Our approach assumes that the attackers know the full state of the detection system and dynamically choose an optimal attack as a function of this state, as well as of the alert prioritization policy. The first step of our approach is to capture the interaction between the defender and attacker in a game theoretic model. To tackle the computational complexity of solving this game to obtain a dynamic stochastic alert prioritization policy, we propose an adversarial reinforcement learning framework. In this framework, we use neural reinforcement learning to compute best response policies for both the defender and the adversary to an arbitrary stochastic policy of the other. We then use these in a double-oracle framework to obtain an approximate equilibrium of the game, which in turn yields a robust stochastic policy for the defender. Extensive experiments using case studies in fraud and intrusion detection demonstrate that our approach is effective in creating robust alert prioritization policies.
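As a foil to the dynamic stochastic policies this work computes, the kind of static score-based heuristic it criticizes can be sketched in a few lines; all names and numbers are hypothetical. The game-theoretic point is that a fixed ranking like this is exactly what a knowledgeable attacker can exploit, for instance by hiding attacks inside the lowest-ranked alert type once the inspection budget is exhausted.

```python
def prioritize(alert_types, budget):
    """Static score-based triage: inspect alerts in descending order of
    estimated maliciousness probability until the budget runs out.
    alert_types: list of (name, count, p_malicious)."""
    plan = {}
    for name, count, p in sorted(alert_types, key=lambda t: -t[2]):
        if budget == 0:
            break
        take = min(count, budget)  # inspect as many of this type as we can
        plan[name] = take
        budget -= take
    return plan

# Hypothetical daily alert volumes and per-alert maliciousness estimates.
ALERTS = [('port_scan', 50, 0.01),
          ('priv_esc', 5, 0.60),
          ('c2_beacon', 10, 0.30)]

PLAN = prioritize(ALERTS, budget=12)
# Inspects all 5 priv_esc alerts and 7 of the c2_beacon alerts; port_scan
# alerts are never inspected -- a gap a strategic attacker can target.
```

The paper's adversarial RL and double-oracle machinery replaces this fixed ranking with a stochastic policy that also conditions on the detection system's state, precisely so that no such predictable gap exists.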
Machine learning techniques are currently used extensively for automating various cybersecurity tasks. Most of these techniques utilize supervised learning algorithms that rely on training the algorithm to classify incoming data into different categories, using data encountered in the relevant domain. A critical vulnerability of these algorithms is that they are susceptible to adversarial attacks where a malicious entity called an adversary deliberately alters the training data to misguide the learning algorithm into making classification errors. Adversarial attacks could render the learning algorithm unsuitable to use and leave critical systems vulnerable to cybersecurity attacks. Our paper provides a detailed survey of the state-of-the-art techniques that are used to make a machine learning algorithm robust against adversarial attacks using the computational framework of game theory. We also discuss open problems and challenges and possible directions for further research that would make deep machine learning-based systems more robust and reliable for cybersecurity tasks.
Gamification and Serious Games are progressively being used across a host of fields, particularly to support education. Such games provide a new way to engage students with content and can complement more traditional approaches to learning. This article proposes SherLOCKED, a new serious game created in the style of a 2D top-down puzzle adventure. The game is situated in the context of an undergraduate cyber security course, and is used to consolidate students' knowledge of foundational security concepts (e.g., the CIA triad, security threats and attacks, and risk management). SherLOCKED was built based on a review of existing serious games and a study of common gamification principles. It was subsequently implemented within an undergraduate course and evaluated with 112 students. We found the game to be an effective, attractive and fun solution for allowing further engagement with content that students were introduced to during lectures. This research lends additional evidence to the use of serious games in supporting learning about cyber security.
From the masses of planets orbiting our Sun, and relative elemental abundances, it is estimated that at birth our Solar System required a minimum disk mass of ~0.01 solar masses within ~100 AU of the star. The main constituent, gaseous molecular hydrogen, does not emit from the disk mass reservoir, so the most common measures of the disk mass are dust thermal emission and lines of gaseous carbon monoxide. Carbon monoxide emission generally probes the disk surface, while the conversion from dust emission to gas mass requires knowledge of the grain properties and the gas-to-dust mass ratio, which likely differ from their interstellar values. Thus, mass estimates vary by orders of magnitude, as exemplified by the relatively old (3--10 Myr) star TW Hya, with estimates ranging from 0.0005 to 0.06 solar masses. Here we report the detection of the fundamental rotational transition of hydrogen deuteride, HD, toward TW Hya. HD is a good tracer of disk gas because it follows the distribution of molecular hydrogen and its emission is sensitive to the total mass. The HD detection, combined with existing observations and detailed models, implies a disk mass >0.05 solar masses, enough to form a planetary system like our own.
