
Attacking the V: On the Resiliency of Adaptive-Horizon MPC

Posted by: Junxing Yang
Publication date: 2017
Research field: Informatics engineering
Paper language: English
Author: Scott A. Smolka





We introduce the concept of a V-formation game between a controller and an attacker, where the controller's goal is to maneuver the plant (a simple model of flocking dynamics) into a V-formation, and the attacker's goal is to prevent the controller from doing so. Controllers in V-formation games utilize a new formulation of model-predictive control we call Adaptive-Horizon MPC (AMPC), which gives them extraordinary power: we prove that under certain controllability assumptions, an AMPC controller is able to attain V-formation with probability 1. We define several classes of attackers, including those that in one move can remove R birds from the flock, or introduce random displacement into the flock dynamics. We consider both naive attackers, whose strategies are purely probabilistic, and AMPC-enabled attackers, which are on par strategically with the controllers. While an AMPC-enabled controller is expected to win every game with probability 1, in practice it is resource-constrained: its maximum prediction horizon and the maximum number of game-execution steps are fixed. Under these conditions, an attacker has a much better chance of winning a V-formation game. Our extensive performance evaluation of V-formation games uses statistical model checking to estimate the probability that an attacker can thwart the controller. Our results show that for the bird-removal game with R = 1, the controller almost always wins (restores the flock to a V-formation). For R = 2, the game outcome critically depends on which two birds are removed. For the displacement game, our results again demonstrate that an intelligent attacker, i.e., one that uses AMPC, significantly outperforms a naive counterpart that executes its attacks at random.
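The adaptive-horizon idea can be sketched as follows: at each step the controller keeps extending its prediction horizon until it finds a control sequence that strictly decreases the cost. The code below is a minimal illustrative sketch, not the paper's implementation; the `step` and `cost` callbacks, the random-sampling optimizer, and parameters such as `n_candidates` and the uniform control range are all assumptions made for illustration.

```python
import numpy as np

def ampc_step(state, cost, step, h_max, n_candidates=50, rng=None):
    """One adaptive-horizon MPC step (illustrative sketch).

    Grows the prediction horizon until a randomly sampled control
    sequence strictly decreases the cost, or h_max is reached.
    `step(state, u)` advances the plant; `cost(state)` scores it.
    """
    rng = rng or np.random.default_rng(0)
    best_u, best_cost = None, cost(state)
    for horizon in range(1, h_max + 1):
        for _ in range(n_candidates):
            s, us = state, []
            for _ in range(horizon):
                u = rng.uniform(-1.0, 1.0, size=np.shape(state))
                s = step(s, u)
                us.append(u)
            c = cost(s)
            if c < best_cost:
                best_u, best_cost = us[0], c
        if best_u is not None:
            # a cost-decreasing sequence exists at this horizon:
            # apply its first control and report the horizon used
            return best_u, horizon
    return None, h_max  # no improving move found within h_max
```

On a toy scalar plant `step(s, u) = s + 0.1 * u` with quadratic cost, the sketch returns an improving control at horizon 1; when the cost landscape is harder, the horizon grows, which is the adaptive-horizon mechanism the abstract refers to.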




Read also

This paper proposes a reinforcement learning approach to traffic control with an adaptive horizon. To build the controller for the traffic network, a Q-learning-based strategy that controls the green-light passing time at the network intersections is applied. The controller includes two components: a regular Q-learning controller that controls the traffic-light signal, and an adaptive controller that continuously optimizes the action space of the Q-learning algorithm in order to improve its efficiency. The regular Q-learning controller uses the control cost function as a reward function to determine which action to choose. The adaptive controller examines the control cost and updates the controller's action space by determining the subset of actions most likely to yield optimal results and shrinking the action space to that subset. Uncertainties in traffic influx and turning rate are introduced to test the robustness of the controller in a stochastic environment. Compared with model predictive control (MPC), the results show that the proposed Q-learning-based controller reaches a stable solution in a shorter period and achieves lower control costs. The proposed controller is also robust under 30% traffic-demand uncertainty and 15% turning-rate uncertainty.
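The action-space shrinking described above can be illustrated with tabular Q-learning: periodically rank the remaining actions by their best observed Q-value and keep only the top fraction. This is a hypothetical toy, not the paper's traffic controller; `env_step`, the episode length, and the shrinking schedule (`shrink_every`, `keep_frac`) are all assumptions.

```python
import numpy as np

def q_learning_adaptive(env_step, n_states, actions, episodes=200,
                        alpha=0.1, gamma=0.9, eps=0.2, shrink_every=50,
                        keep_frac=0.5, rng=None):
    """Tabular Q-learning with an adaptively shrinking action space
    (illustrative sketch; `env_step(s, a) -> (s2, reward)` is assumed)."""
    rng = rng or np.random.default_rng(1)
    Q = np.zeros((n_states, len(actions)))
    active = list(range(len(actions)))          # indices of usable actions
    for ep in range(episodes):
        s = 0
        for _ in range(20):                     # short episodes
            if rng.random() < eps:
                a = rng.choice(active)          # explore within active set
            else:
                a = active[int(np.argmax(Q[s, active]))]
            s2, r = env_step(s, actions[a])
            Q[s, a] += alpha * (r + gamma * Q[s2, active].max() - Q[s, a])
            s = s2
        if (ep + 1) % shrink_every == 0 and len(active) > 2:
            # keep the best-performing fraction of the remaining actions
            ranked = sorted(active, key=lambda a: -Q[:, a].max())
            active = ranked[:max(2, int(len(active) * keep_frac))]
    return Q, active
```

The returned `active` set is the shrunken action space; subsequent learning only explores within it, which is the efficiency mechanism the abstract describes.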
Chenyuan He, Yan Wan, 2018
The influence model is a discrete-time stochastic model that succinctly captures the interactions of a network of Markov chains. The model produces a reduced-order representation of the stochastic network, and can be used to describe and tractably analyze probabilistic spatiotemporal spread dynamics; it has hence found broad usage in network applications such as social networks, traffic management, and failure cascades in power systems. This paper provides necessary and sufficient conditions for the identifiability of the influence model, and also develops estimators for the model structure by exploiting the model's special properties. In addition, we analyze conditions for the identifiability of the partially observed influence model (POIM), in which not all of the sites can be measured.
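A minimal sketch of one influence-model step, assuming a binary status at every site and a single shared local transition matrix (the general model allows site-specific chains): each site draws its next state from the influence-weighted mixture of its neighbors' transition rows. The matrix names `D` and `A` below are illustrative choices, not notation taken from the paper.

```python
import numpy as np

def influence_step(status, D, A, rng):
    """One step of a binary influence model (illustrative sketch).

    status : length-n 0/1 vector of site states
    D      : n x n row-stochastic influence matrix (who listens to whom)
    A      : 2 x 2 row-stochastic local transition matrix, shared by all sites
    """
    n = len(status)
    nxt = np.empty(n, dtype=int)
    for i in range(n):
        # mixture of neighbors' transition rows, weighted by influence D[i]
        p = sum(D[i, j] * A[status[j]] for j in range(n))
        nxt[i] = rng.choice(2, p=p)
    return nxt
```

With `D` the identity the sites decouple into independent Markov chains; off-diagonal influence weights are what couple the chains into the network dynamics the abstract describes.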
This paper presents a new approach to the dual problem of system identification and regulation. Its main feature is to split the control input into a regulator part and a persistently exciting part. The former regulates the plant using a robust MPC formulation in which the latter is treated as a bounded additive disturbance. The identification is carried out by a simple recursive least-squares algorithm. To guarantee sufficient excitation for identification, an additional non-convex constraint is enforced on the persistently exciting part.
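The identification half of this scheme can be illustrated with a standard recursive least-squares update for a linear-in-parameters model `y = phi^T theta + noise`. This is generic textbook RLS, not the paper's exact formulation; the forgetting factor and covariance initialization are assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS identifier for y_t = phi_t^T theta (illustrative sketch)."""

    def __init__(self, n, lam=1.0):
        self.theta = np.zeros(n)       # parameter estimate
        self.P = np.eye(n) * 1e3       # large initial covariance = weak prior
        self.lam = lam                 # forgetting factor (1.0 = no forgetting)

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

The estimate converges only if the regressors `phi` are persistently exciting, which is exactly why the control input is split and a dedicated exciting component is constrained to be present.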
Distributed averaging is one of the simplest and most studied network dynamics. Its applications range from cooperative inference in sensor networks, to robot formation, to opinion dynamics. A number of fundamental results and examples scattered through the literature are gathered and presented here, emphasizing the deep interplay between the network interconnection structure and the emergent global behavior.
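The basic distributed-averaging iteration is x(t+1) = W x(t) for a stochastic matrix W matching the network; when W is doubly stochastic and the graph is connected and aperiodic, every entry converges to the mean of the initial values. A minimal sketch:

```python
import numpy as np

def average_consensus(x0, W, steps=100):
    """Distributed averaging x(t+1) = W x(t) (illustrative sketch).

    W : doubly stochastic weight matrix matching the network; for a
        connected aperiodic graph, all entries converge to mean(x0).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = W @ x          # each node replaces its value by a weighted
    return x               # average of its neighbors' values
```

The interplay the abstract highlights shows up here directly: the sparsity pattern of W is the interconnection structure, and its spectral gap governs how fast the global average emerges.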
The paper evaluates the influence of the maximum vehicle acceleration and variable proportions of ACC/CACC vehicles on the throughput of an intersection. Two cases are studied: (1) a free road downstream of the intersection; and (2) a red light at some distance downstream of the intersection. Simulation of a 4-mile stretch of an arterial with 13 signalized intersections is used to evaluate the impact of (C)ACC vehicles on the mean and standard deviation of travel time as the proportion of (C)ACC vehicles increases. The results suggest a very high urban-mobility benefit of (C)ACC vehicles at little or no cost in infrastructure.