We introduce the concept of a V-formation game between a controller and an attacker, where the controller's goal is to maneuver the plant (a simple model of flocking dynamics) into a V-formation, and the attacker's goal is to prevent the controller from doing so. Controllers in V-formation games use a new formulation of model-predictive control we call Adaptive-Horizon MPC (AMPC), which gives them extraordinary power: we prove that, under certain controllability assumptions, an AMPC controller attains a V-formation with probability 1. We define several classes of attackers, including those that can remove R birds from the flock in a single move, or introduce random displacements into the flock dynamics. We consider both naive attackers, whose strategies are purely probabilistic, and AMPC-enabled attackers, which puts them on par strategically with the controller. While an AMPC-enabled controller is expected to win every game with probability 1, in practice it is resource-constrained: its maximum prediction horizon and the maximum number of game-execution steps are fixed. Under these conditions, an attacker has a much better chance of winning a V-formation game. Our extensive performance evaluation of V-formation games uses statistical model checking to estimate the probability that an attacker can thwart the controller. Our results show that for the bird-removal game with R = 1, the controller almost always wins (restores the flock to a V-formation). For R = 2, the game outcome critically depends on which two birds are removed. For the displacement game, our results again demonstrate that an intelligent attacker, i.e., one that uses AMPC, significantly outperforms a naive counterpart that executes its attacks randomly.
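To make the adaptive-horizon idea concrete, the following is a minimal sketch of one AMPC step, not the paper's implementation: the controller samples candidate control plans over a growing prediction horizon until the best plan it finds improves on the current cost (here, a fitness measure of how far the flock is from a V-formation), then applies only the first control in receding-horizon fashion. The names cost, simulate, sample_plan, h_max, and n_samples are illustrative placeholders for the fitness metric, the flock-dynamics model, a candidate-plan generator, and the controller's resource bounds.

import numpy as np

def ampc_step(state, cost, simulate, sample_plan, h_max=5, n_samples=200, rng=None):
    # One adaptive-horizon MPC step (illustrative sketch, not the authors' code).
    # Grow the prediction horizon h until the best sampled plan of length h
    # improves on the current cost, then return the first control of that plan.
    rng = rng if rng is not None else np.random.default_rng()
    current = cost(state)
    best_plan, best_cost = None, float("inf")
    for h in range(1, h_max + 1):
        for _ in range(n_samples):
            plan = sample_plan(h, rng)           # hypothetical candidate control sequence of length h
            predicted = state
            for u in plan:
                predicted = simulate(predicted, u)   # roll the (assumed) flock model forward one step
            c = cost(predicted)
            if c < best_cost:
                best_plan, best_cost = plan, c
        if best_cost < current:                  # horizon long enough to make progress; stop growing
            break
    return best_plan[0], best_cost               # receding horizon: apply only the first control

In a V-formation game, the controller would call a step like this on the current flock state after each attacker move (bird removal or displacement), with the horizon and the number of game steps capped by its resource constraints.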