
Social learning under inferential attacks

Published by: Konstantinos Ntemos
Publication date: 2020
Paper language: English





A common assumption in the social learning literature is that agents exchange information in an unselfish manner. In this work, we consider the scenario where a subset of agents aims at driving the network beliefs to the wrong hypothesis. The adversaries are unaware of the true hypothesis; however, they blend in by behaving similarly to the other agents, and they manipulate the likelihood functions used in the belief update process to launch inferential attacks. We characterize the conditions under which the network is misled, and we show that such attacks can succeed by exhibiting strategies the malicious agents can adopt for this purpose. We examine both the situation in which the malicious agents have minimal information about the network model and the situation in which they have none.
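The mechanism is easiest to see in a toy simulation. Below is a minimal sketch (not the paper's construction) of non-Bayesian social learning in which each agent performs a local Bayesian update and then pools its neighbors' intermediate beliefs geometrically; the malicious agents launch an inferential attack simply by swapping their likelihood functions. The likelihoods, combination weights, and the choice of two well-connected attackers are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary signals under two hypotheses; L[theta][x] = P(signal = x | theta).
L = np.array([[0.7, 0.3],   # theta_0 (the true hypothesis)
              [0.3, 0.7]])  # theta_1

# Row-stochastic combination matrix; agents 2 and 3 are malicious and, in
# this example, also the most central nodes in the network.
A = np.array([[0.3, 0.2, 0.3, 0.2],
              [0.2, 0.3, 0.3, 0.2],
              [0.1, 0.1, 0.5, 0.3],
              [0.1, 0.1, 0.3, 0.5]])
malicious = {2, 3}

mu = np.full((4, 2), 0.5)                  # uniform initial beliefs
for _ in range(300):
    x = rng.binomial(1, L[0, 1], size=4)   # all signals drawn under theta_0
    psi = np.empty_like(mu)
    for i in range(4):
        lik = L[:, x[i]]
        if i in malicious:
            lik = lik[::-1]                # inferential attack: swap likelihoods
        psi[i] = mu[i] * lik
        psi[i] /= psi[i].sum()
    mu = np.exp(A @ np.log(psi))           # geometric (log-linear) pooling
    mu /= mu.sum(axis=1, keepdims=True)

print(mu.round(3))  # honest agents' beliefs concentrate on the wrong theta_1
```

With the attackers central enough (their combined network influence outweighs the honest agents'), every belief vector converges to the wrong hypothesis, which is the failure mode the paper's conditions characterize.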




Read also

We study how to secure distributed filters for linear time-invariant systems with bounded noise under false-data injection attacks. A malicious attacker is able to arbitrarily manipulate the observations for a time-varying and unknown subset of the sensors. We first propose a recursive distributed filter consisting of two steps at each update. The first step employs a saturation-like scheme, which gives a small gain if the innovation is large corresponding to a potential attack. The second step is a consensus operation of state estimates among neighboring sensors. We prove the estimation error is upper bounded if the filter parameters satisfy a condition. We further analyze the feasibility of the condition and connect it to sparse observability in the centralized case. When the attacked sensor set is known to be time-invariant, the secured filter is modified by adding an online local attack detector. The detector is able to identify the attacked sensors whose observation innovations are larger than the detection thresholds. Also, with more attacked sensors being detected, the thresholds will adaptively adjust to reduce the space of the stealthy attack signals. The resilience of the secured filter with detection is verified by an explicit relationship between the upper bound of the estimation error and the number of detected attacked sensors. Moreover, for the noise-free case, we prove that the state estimate of each sensor asymptotically converges to the system state under certain conditions. Numerical simulations are provided to illustrate the developed results.
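A minimal sketch of the two-step structure described above, for a scalar system on a complete graph: step one clips (saturates) each sensor's innovation, so an implausibly large innovation, the signature of a potential injection, receives a small effective gain; step two averages estimates over neighbors. The dynamics, noise levels, saturation bound, and gain are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def saturate(v, bound):
    # Clip innovations elementwise: a large innovation (potential injection)
    # receives an effectively small gain instead of being trusted outright.
    return np.clip(v, -bound, bound)

a, n, bound, gain = 0.95, 5, 1.0, 0.5
W = np.full((n, n), 1.0 / n)           # consensus weights: complete graph

rng = np.random.default_rng(1)
x = 0.0
xhat = np.zeros(n)
for k in range(300):
    x = a * x + 0.05 * rng.standard_normal()
    y = x + 0.05 * rng.standard_normal(n)
    y[0] += 50.0                       # sensor 0 under false-data injection
    pred = a * xhat                    # step 1: predict, then saturated update
    xhat = pred + gain * saturate(y - pred, bound)
    xhat = W @ xhat                    # step 2: consensus with neighbors
print(f"true state {x:+.3f}, estimates {np.round(xhat, 3)}")
```

Without the saturation step the +50 injection would drag every estimate away; with it, the attacked innovation contributes at most `gain * bound` per step, so the error stays bounded (though biased), matching the flavor of the paper's bound.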
In this paper, we study the problem of localizing the sensors positions in presence of denial-of-service (DoS) attacks. We consider a general attack model, in which the attacker action is only constrained through the frequency and duration of DoS attacks. We propose a distributed iterative localization algorithm with an abandonment strategy (AS-DILOC) based on the barycentric coordinate of a sensor with respect to its neighbors, which is computed through relative distance measurements. In particular, if a sensor's communication links for receiving its neighbors' information lose packets due to DoS attacks, then the sensor abandons the location estimation. When the attacker launches DoS attacks, the AS-DILOC algorithm is proved theoretically to be able to accurately locate the sensors regardless of the attack strategy at each time. The effectiveness of the proposed algorithm is demonstrated through simulation examples.
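The abandonment idea fits in a few lines. In the toy example below, two sensors iteratively re-estimate their positions as barycentric combinations of their neighbors (three fixed anchors plus each other); whenever a DoS attack drops a sensor's incoming packets, that sensor simply skips the update. The topology is hypothetical, and the barycentric weights are derived from the true geometry purely for illustration; in AS-DILOC they are obtained from relative distance measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

def barycentric(p, tri):
    # Barycentric coordinates of point p w.r.t. the three rows of tri.
    T = np.vstack([np.asarray(tri, float).T, np.ones(3)])
    return np.linalg.solve(T, np.append(p, 1.0))

# Three anchors with known positions; two sensors at unknown positions.
pos = {"a0": [0.0, 0.0], "a1": [10.0, 0.0], "a2": [0.0, 10.0],
       "s1": [3.0, 3.0], "s2": [4.0, 5.0]}
nbrs = {"s1": ["a0", "a1", "s2"], "s2": ["a1", "a2", "s1"]}
w = {s: barycentric(pos[s], [pos[n] for n in nbrs[s]]) for s in nbrs}

est = {k: np.array(v, float) for k, v in pos.items()}
est["s1"], est["s2"] = np.array([9.0, 9.0]), np.array([1.0, 1.0])  # bad guesses
for k in range(200):
    new = dict(est)
    for s in nbrs:
        if rng.random() < 0.4:     # DoS attack jams this sensor's incoming links
            continue               # abandonment strategy: keep the old estimate
        new[s] = sum(wi * est[n] for wi, n in zip(w[s], nbrs[s]))
    est = new
print(est["s1"], est["s2"])        # ~[3, 3] and [4, 5] despite the attacks
```

Because each successful update is a fixed-point iteration of a contraction, skipping rounds under attack only delays convergence rather than corrupting it, which is the intuition behind the paper's guarantee under frequency/duration-constrained DoS.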
Tianci Yang, Chen Lv (2021)
By using various sensors to measure the surroundings and sharing local sensor information with the surrounding vehicles through wireless networks, connected and automated vehicles (CAVs) are expected to increase safety, efficiency, and capacity of our transportation systems. However, the increasing usage of sensors has also increased the vulnerability of CAVs to sensor faults and adversarial attacks. Anomalous sensor values resulting from malicious cyberattacks or faulty sensors may cause severe consequences or even fatalities. In this paper, we increase the resilience of CAVs to faults and attacks by using multiple sensors for measuring the same physical variable to create redundancy. We exploit this redundancy and propose a sensor fusion algorithm for providing a robust estimate of the correct sensor information with bounded errors independent of the attack signals, and for attack detection and isolation. The proposed sensor fusion framework is applicable to a large class of security-critical Cyber-Physical Systems (CPSs). To minimize the performance degradation resulting from the usage of estimation for control, we provide an $H_{\infty}$ controller for CACC-equipped CAVs capable of stabilizing the closed-loop dynamics of each vehicle in the platoon while reducing the joint effect of estimation errors and communication channel noise on the tracking performance and string behavior of the vehicle platoon. Numerical examples are presented to illustrate the effectiveness of our methods.
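As a minimal sketch of how redundancy yields an attack-independent error bound (the paper's fusion algorithm is more general than this), suppose five sensors measure the same variable with noise bounded by a known constant and at most two of them are compromised: the median of the readings always lies within the noise bound of an honest reading, and readings far from the fused value can be flagged as attacked. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Five sensors measure the same physical variable; at most two are attacked.
true_value, noise_bound = 12.0, 0.2
y = true_value + rng.uniform(-noise_bound, noise_bound, size=5)
y[1] += 40.0                       # attacked sensor
y[4] -= 15.0                       # attacked sensor

# With 2f+1 = 5 sensors and at most f = 2 compromised, the median is always
# an honest reading (or between two of them), so the fused error is bounded
# independently of the attack magnitude.
fused = np.median(y)
flagged = np.where(np.abs(y - fused) > 2 * noise_bound)[0]
print(f"fused estimate: {fused:.2f}, flagged sensors: {flagged.tolist()}")
```

The `2 * noise_bound` isolation gate is the natural choice here: honest readings sit within one noise bound of the truth and the fused value within another, so honest sensors are never flagged while the two injected outliers are.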
Cooperative Adaptive Cruise Control (CACC) is an autonomous vehicle-following technology that allows groups of vehicles on the highway to form in tightly-coupled platoons. This is accomplished by exchanging inter-vehicle data through Vehicle-to-Vehicle (V2V) wireless communication networks. CACC increases traffic throughput and safety, and decreases fuel consumption. However, the surge of vehicle connectivity has brought new security challenges as vehicular networks increasingly serve as new access points for adversaries trying to deteriorate the platooning performance or even cause collisions. In this manuscript, we propose a novel attack detection scheme that leverages real-time sensor/network data and physics-based mathematical models of vehicles in the platoon. Nevertheless, even the best detection scheme could lead to conservative detection results because of unavoidable modelling uncertainties, network effects (delays, quantization, communication dropouts), and noise. It is hard (often impossible) for any detector to distinguish between these different perturbation sources and actual attack signals. This enables adversaries to launch a range of attack strategies that can surpass the detection scheme by hiding within the system uncertainty. Here, we provide risk assessment tools (in terms of semidefinite programs) for Connected and Automated Vehicles (CAVs) to quantify the potential effect of attacks that remain hidden from the detector (referred to here as "stealthy attacks"). A numerical case-study is presented to illustrate the effectiveness of our methods.
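The notion of a stealthy attack can be illustrated with a toy residual detector (a sketch, not the proposed scheme): a follower predicts its spacing from a physics-based model driven by V2V data and raises an alarm when the prediction error exceeds a threshold sized to cover noise and model uncertainty. An attacker that injects a bias growing slowly enough keeps the residual under the threshold and is never flagged; the paper's semidefinite programs bound the reachable impact of exactly such attacks. The dynamics and thresholds below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.1                   # sample time (assumed)
threshold = 0.5            # alarm threshold sized to noise/model uncertainty

d = d_hat = 20.0           # true inter-vehicle spacing and its prediction
v_rel = 0.0
for k in range(120):
    v_rel += 0.05 * rng.standard_normal()   # unmodelled disturbance
    d += dt * v_rel                         # true spacing dynamics
    d_hat += dt * v_rel                     # prediction driven by V2V data
    attack = 0.003 * k                      # slowly growing measurement bias
    y = d + attack + 0.02 * rng.standard_normal()
    if abs(y - d_hat) > threshold:          # residual-based detector
        print(f"alarm at step {k}")
        break
else:
    print("attack stayed below the threshold: stealthy")
```

The residual here equals the injected bias plus measurement noise, so any bias capped below the threshold is invisible to the detector while still corrupting the spacing used for control; this is the attack space the risk assessment quantifies.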
In this chapter we review some of the basic attack constructions that exploit a stochastic description of the state variables. We pose the state estimation problem in a Bayesian setting and cast the bad data detection procedure as a Bayesian hypothesis testing problem. This revised detection framework provides the benchmark for the attack detection problem that limits the achievable attack disruption. Indeed, the trade-off between the impact of the attack, in terms of disruption to the state estimator, and the probability of attack detection is analytically characterized within this Bayesian attack setting. We then generalize the attack construction by considering information-theoretic measures that place fundamental limits to a broad class of detection, estimation, and learning techniques. Because the attack constructions proposed in this chapter rely on the attacker having access to the statistical structure of the random process describing the state variables, we conclude by studying the impact of imperfect statistics on the attack performance. Specifically, we study the attack performance as a function of the size of the training data set that is available to the attacker to estimate the second-order statistics of the state variables.
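The impact/detectability trade-off admits a compact numerical illustration (a sketch under simplified scalar Gaussian assumptions, not the chapter's constructions): the operator computes an MMSE estimate of the state from noisy observations and flags data whose sample variance exceeds a gate, while the attacker injects zero-mean Gaussian noise whose variance would, in practice, be estimated from training data. Increasing the attack variance raises the estimation disruption and the detection probability together.

```python
import numpy as np

rng = np.random.default_rng(5)

n, trials = 50, 2000
var_x, var_z = 1.0, 0.1
gate = 1.35 * (var_x + var_z)          # detection gate on the sample variance

for var_a in [0.0, 0.2, 0.5, 1.0]:     # attack variance (attacker's design knob)
    detected, mse = 0, 0.0
    for _ in range(trials):
        x = rng.normal(0.0, np.sqrt(var_x), n)           # state variables
        y = (x + rng.normal(0.0, np.sqrt(var_z), n)      # observation noise
               + rng.normal(0.0, np.sqrt(var_a), n))     # injected attack
        xhat = var_x / (var_x + var_z) * y   # operator's MMSE estimate (no-attack model)
        mse += np.mean((x - xhat) ** 2)
        detected += np.var(y) > gate         # variance-based bad-data test
    print(f"attack var {var_a:.1f}: MSE {mse / trials:.3f}, "
          f"detection rate {detected / trials:.2f}")
```

Running this shows both columns rising monotonically with the attack variance: the attacker cannot increase disruption without also increasing the probability of being detected, which is the trade-off the chapter characterizes analytically.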