
Protecting shared information in networks: a network security game with strategic attacks

Added by Paolo Frasca
Publication date: 2019
Language: English





A digital security breach, by which confidential information is leaked, not only affects the agent whose system is infiltrated but is also detrimental to other agents socially connected to the infiltrated system. Although it has been argued that these externalities create incentives to under-invest in security, this presumption is challenged by the possibility of strategic adversaries that attack the least protected agents. In this paper we study a new model of security games in which agents share tokens of sensitive information in a network of contacts. The agents have the opportunity to invest in security to protect against an attack that can be either strategically or randomly targeted. We show that, in the presence of random attacks, under-investments always prevail at the Nash equilibrium in comparison with the social optimum. Instead, when the attack is strategic, either under-investments or over-investments are possible, depending on the network topology and on the characteristics of the information-spreading process. Specifically, agents invest more in security than is socially optimal when dependencies among agents are low (which can happen because the information network is sparsely connected or because the probability that information tokens are shared is small). These over-investments turn into under-investments when information sharing is more likely (and therefore when the risk brought by the attack is higher).
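
To make the comparison between equilibrium and socially optimal investments concrete, the following minimal Python sketch simulates a game of this general kind on a small network. The one-hop token-sharing rule, the unit loss per leaked token, the all-or-nothing protection at cost c, and the single attacked node are illustrative assumptions, not the authors' exact model.

```python
# Toy security game: agents on a graph decide whether to invest in protection;
# an attacker breaches one node (randomly or strategically) and obtains the
# tokens present there. All numbers and rules are illustrative assumptions.
import itertools

def expected_losses(adj, protected, q, strategic):
    """Expected loss of every agent (each token worth 1) for a protection profile.
    A token of agent i sits at i and, with probability q, at each neighbour of i."""
    n = len(adj)
    unprotected = [t for t in range(n) if not protected[t]]
    if not unprotected:
        return [0.0] * n

    def loss_if_attacked(i, t):
        # probability that i's token is exposed when node t is breached
        if i == t:
            return 1.0
        return q if t in adj[i] else 0.0

    if strategic:
        # attacker targets the unprotected node holding the most tokens in expectation
        target = max(unprotected, key=lambda t: 1.0 + q * len(adj[t]))
        return [loss_if_attacked(i, target) for i in range(n)]
    # random attack: uniform over all nodes; an attack on a protected node fails
    return [sum(loss_if_attacked(i, t) for t in unprotected) / n for i in range(n)]

def best_response_profile(adj, q, c, strategic, rounds=50):
    """Iterate myopic best responses; the fixed point is a Nash equilibrium."""
    n = len(adj)
    protected = [False] * n
    for _ in range(rounds):
        changed = False
        for i in range(n):
            loss_unprot = expected_losses(adj, protected[:i] + [False] + protected[i + 1:], q, strategic)[i]
            loss_prot = expected_losses(adj, protected[:i] + [True] + protected[i + 1:], q, strategic)[i]
            invest = loss_unprot - loss_prot > c
            if invest != protected[i]:
                protected[i], changed = invest, True
        if not changed:
            break
    return protected

def social_optimum(adj, q, c, strategic):
    """Brute-force welfare-maximising profile (small n only)."""
    n = len(adj)
    best, best_cost = None, float("inf")
    for profile in itertools.product([False, True], repeat=n):
        cost = c * sum(profile) + sum(expected_losses(adj, list(profile), q, strategic))
        if cost < best_cost:
            best, best_cost = list(profile), cost
    return best

if __name__ == "__main__":
    # 6-node ring with a strategic attacker, low vs. high sharing probability q
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    for q in (0.1, 0.9):
        nash = best_response_profile(adj, q, c=0.3, strategic=True)
        opt = social_optimum(adj, q, c=0.3, strategic=True)
        print(f"q={q}: Nash investments={sum(nash)}, socially optimal investments={sum(opt)}")
```

On this toy ring, sweeping q from small to large is one way to illustrate the qualitative switch from over- to under-investment under a strategic attacker that the abstract describes; the specific numbers carry no meaning beyond the sketch.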



Related research

Transmission of disease, spread of information and rumors, adoption of new products, and many other network phenomena can be fruitfully modeled as cascading processes, where actions chosen by nodes influence the subsequent behavior of neighbors in the network graph. The current literature on cascades tends to assume that nodes choose myopically, based on the choices already made by other nodes. We examine the possibility of strategic choice, where agents representing nodes anticipate the choices of others who have not yet decided and take into account their own influence on such choices. Our study employs the framework of Chierichetti et al. [2012], who (under the assumption of myopic node behavior) investigate the scheduling of node decisions to promote cascades of product adoptions preferred by the scheduler. We show that when nodes behave strategically, outcomes can be extremely different. We exhibit cases where in the strategic setting 100% of agents adopt, but in the myopic setting only an arbitrarily small epsilon% do. Conversely, we present cases where in the strategic setting 0% of agents adopt, but in the myopic setting (100-epsilon)% do, for any constant epsilon > 0. Additionally, we prove some properties of cascade processes with strategic agents, both in general and for particular classes of graphs.
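
As a point of reference for the myopic baseline that the strategic analysis is contrasted with, here is a small Python sketch of a scheduled, myopic cascade. The majority rule, the tie-break toward product "A", and the example graph are illustrative assumptions rather than the paper's exact model.

```python
# Myopic cascade under a given schedule: each node, when its turn comes, copies
# the majority choice among neighbours that have already decided.
def myopic_cascade(adj, schedule, seed_choices):
    choices = dict(seed_choices)              # node -> "A" or "B" for pre-seeded nodes
    for node in schedule:
        if node in choices:
            continue
        decided = [choices[v] for v in adj[node] if v in choices]
        a, b = decided.count("A"), decided.count("B")
        choices[node] = "A" if a >= b else "B"   # ties broken toward "A"
    return choices

# usage: a 4-node path 0-1-2-3, with node 3 pre-seeded to "B"
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(myopic_cascade(adj, schedule=[0, 1, 2], seed_choices={3: "B"}))
```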
Internet of Things (IoT) devices and applications can have significant vulnerabilities, which may be exploited by adversaries to cause considerable harm. An important approach for mitigating this threat is remote attestation, which enables the defender to remotely verify the integrity of devices and their software. There are a number of approaches for remote attestation, and each has its unique advantages and disadvantages in terms of detection accuracy and computational cost. Further, an attestation method may be applied in multiple ways, such as various levels of software coverage. Therefore, to minimize both security risks and computational overhead, defenders need to decide strategically which attestation methods to apply and how to apply them, depending on the characteristic of the devices and the potential losses. To answer these questions, we first develop a testbed for remote attestation of IoT devices, which enables us to measure the detection accuracy and performance overhead of various attestation methods. Our testbed integrates two example IoT applications, memory-checksum based attestation, and a variety of software vulnerabilities that allow adversaries to inject arbitrary code into running applications. Second, we model the problem of finding an optimal strategy for applying remote attestation as a Stackelberg security game between a defender and an adversary. We characterize the defenders optimal attestation strategy in a variety of special cases. Finally, building on experimental results from our testbed, we evaluate our model and show that optimal strategic attestation can lead to significantly lower losses than naive baseline strategies.
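
The leader-commits-first structure of such a game can be illustrated with a very small Python sketch: the defender commits to an attestation configuration, the attacker best-responds, and the defender picks the commitment with the lowest resulting cost. The method names, detection rates, overheads, and loss values below are made-up numbers for illustration, not measurements from the paper's testbed.

```python
# Simplified Stackelberg attestation game with pure-strategy commitment.
METHODS = {                       # method -> (detection probability, overhead cost)
    "none":             (0.00, 0.0),
    "checksum-partial": (0.60, 1.0),
    "checksum-full":    (0.95, 3.0),
}
ATTACK_GAIN, ATTACK_COST, LOSS_IF_UNDETECTED = 10.0, 2.0, 10.0

def stackelberg_attestation():
    best = None
    for name, (p_detect, overhead) in METHODS.items():
        # follower (attacker) best-responds to the committed attestation choice
        attacker_payoff = (1 - p_detect) * ATTACK_GAIN - ATTACK_COST
        attacks = attacker_payoff > 0
        defender_cost = overhead + (attacks * (1 - p_detect) * LOSS_IF_UNDETECTED)
        if best is None or defender_cost < best[1]:
            best = (name, defender_cost, attacks)
    return best

# prints the committed method, its expected cost, and whether the attacker attacks
print(stackelberg_attestation())
```

With these toy numbers, committing to the stronger attestation deters the attack outright, which is the kind of effect that makes strategic attestation cheaper than a naive baseline.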
One prominent security threat that targets unmanned aerial vehicles (UAVs) is capture via GPS spoofing, in which an attacker manipulates a UAV's global positioning system (GPS) signals in order to capture it. Given the anticipated widespread deployment of UAVs for various purposes, it is imperative to develop new security solutions against such attacks. In this paper, a mathematical framework is introduced for analyzing and mitigating the effects of GPS spoofing attacks on UAVs. In particular, system dynamics are used to model the optimal routes that the UAVs will adopt to reach their destinations. The GPS spoofer's effect on each UAV's route is also captured by the model. To this end, the spoofer's optimal imposed locations on the UAVs are analytically derived, allowing the UAVs to predict their traveling routes under attack. Then, a countermeasure mechanism is developed to mitigate the effect of the GPS spoofing attack. The countermeasure is built on the premise of cooperative localization, in which a UAV can determine its location using nearby UAVs instead of the possibly compromised GPS locations. To better utilize the proposed defense mechanism, a dynamic Stackelberg game is formulated to model the interactions between a GPS spoofer and a drone operator. In particular, the drone operator acts as the leader that determines its optimal strategy in light of the spoofer's expected response strategy. The equilibrium strategies of the game are then analytically characterized and studied through a novel proposed algorithm. Simulation results show that, when combined with the Stackelberg strategies, the proposed defense mechanism outperforms baseline strategy-selection techniques in terms of reducing the possibility of UAV capture.
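
The cooperative-localization idea can be sketched in a few lines of Python: a UAV discards its (possibly spoofed) GPS fix and re-estimates its 2-D position from range measurements to trusted neighbouring UAVs by gradient descent on the range residuals. The neighbour positions, ranges, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
import math

def cooperative_localize(neighbors, ranges, guess=(0.0, 0.0), steps=500, lr=0.05):
    """Minimize the sum of squared range residuals to trusted neighbours."""
    x, y = guess
    for _ in range(steps):
        gx = gy = 0.0
        for (px, py), d in zip(neighbors, ranges):
            dist = math.hypot(x - px, y - py) or 1e-9
            err = dist - d                       # range residual
            gx += 2 * err * (x - px) / dist      # gradient of err**2
            gy += 2 * err * (y - py) / dist
        x, y = x - lr * gx, y - lr * gy
    return x, y

# usage: true position (3, 4); three trusted neighbours with noiseless ranges
neighbors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [5.0, math.hypot(7, 4), math.hypot(3, 6)]
print(cooperative_localize(neighbors, ranges))   # should approach (3, 4)
```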
We study the process of information dispersal in a network with communication errors and local error-correction. Specifically, we consider a simple model where a single bit of information, initially known to a single source, is dispersed through the network, and communication errors lead to differences in the agents' opinions on this information. Naturally, such errors can very quickly make the communication completely unreliable, and in this work we study to what extent this unreliability can be mitigated by local error-correction, where nodes periodically correct their opinion based on the opinion of (some subset of) their neighbors. We analyze how the error spreads in the early stages of information dispersal by monitoring the average opinion, i.e., the fraction of agents that have the correct information among all nodes that hold an opinion at a given time. Our main results show that even with significant effort in error-correction, tiny amounts of noise can lead the average opinion to be nearly uncorrelated with the truth in early stages. We also propose some local methods to help agents gauge when the information they have has stabilized.
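
A small simulation sketch, in Python, of noisy dispersal with local majority correction conveys the quantities involved; the push schedule, the per-message flip probability eps, and the correction rule below are illustrative assumptions, not the paper's exact model.

```python
import random

def disperse(adj, source, eps, rounds, rng):
    opinion = {source: 1}                      # 1 = correct bit, 0 = flipped
    for _ in range(rounds):
        # push phase: every informed node tells one random neighbour, with noise
        for u in list(opinion):
            v = rng.choice(list(adj[u]))
            if v not in opinion:
                bit = opinion[u]
                opinion[v] = bit if rng.random() > eps else 1 - bit
        # correction phase: adopt the majority opinion of informed neighbours
        snapshot = dict(opinion)
        for u in snapshot:
            votes = [snapshot[v] for v in adj[u] if v in snapshot]
            if votes:
                opinion[u] = 1 if sum(votes) * 2 >= len(votes) else 0
    informed = list(opinion.values())
    return sum(informed) / len(informed)       # the "average opinion"

rng = random.Random(0)
n = 200
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(disperse(ring, source=0, eps=0.05, rounds=50, rng=rng))
```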
Off-chain protocols constitute one of the most promising approaches to solving the inherent scalability issue of blockchain technologies. The core idea is to let parties transact on-chain only once to establish a channel between them, later leveraging the resulting channel paths to perform arbitrarily many peer-to-peer transactions off-chain. While significant progress has been made in terms of proof techniques for off-chain protocols, existing approaches do not capture the game-theoretic incentives at the core of their design, which has led to significant attack vectors, like the Wormhole attack, being overlooked in the past. This work introduces the first game-theoretic model that is expressive enough to reason about the security of off-chain protocols. We advocate the use of Extensive Form Games (EFGs) and introduce two instances of EFGs to capture security properties of the closing and the routing of the Lightning Network. Specifically, we model the closing protocol, which relies on punishment mechanisms to disincentivize the on-chain upload of old channel states, as well as the routing protocol, thereby formally characterizing the Wormhole attack, a vulnerability that undermines the fee-based incentive mechanism underlying the Lightning Network.
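
As a toy illustration of reasoning about a punishment-based closing protocol with an extensive-form game, the Python sketch below runs backward induction on a two-move tree: one party may publish an old, more favourable channel state, and the counterparty may then punish by claiming the whole balance. The payoffs and tree shape are illustrative assumptions, not the paper's formal EFG.

```python
def solve(node):
    """Backward induction. A leaf is a payoff pair (pay_A, pay_B); an internal
    node is (player_index, {action: subtree})."""
    if not isinstance(node[1], dict):
        return node, []
    player, children = node
    choices = {a: solve(child) for a, child in children.items()}
    action = max(choices, key=lambda a: choices[a][0][player])
    payoffs, path = choices[action]
    return payoffs, [action] + path

# Channel balance 10, split 6/4 in the latest state; an old state gave A 8/2.
# If A cheats and B punishes within the dispute window, B claims the full 10.
game = (0, {                                  # A moves first
    "close-latest": (6, 4),
    "publish-old":  (1, {                     # B observes and responds
        "punish": (0, 10),
        "ignore": (8, 2),
    }),
})
print(solve(game))    # ((6, 4), ['close-latest']): punishment deters cheating
```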
