While the techniques in optimal control theory are often model-based, the policy optimization (PO) approach can directly optimize the performance metric of interest without explicit dynamical models, and is an essential approach for reinforcement learning problems. However, it usually leads to a non-convex optimization problem, for which there is little theoretical understanding of its performance. In this paper, we focus on the risk-constrained Linear Quadratic Regulator (LQR) problem with noisy input via the PO approach, which results in a challenging non-convex problem. To this end, we first build on our earlier result that the optimal policy has an affine structure to show that the associated Lagrangian function is locally gradient dominated with respect to the policy, based on which we establish strong duality. Then, we design policy gradient primal-dual methods with global convergence guarantees to find an optimal policy-multiplier pair in both model-based and sample-based settings. Finally, we use samples of system trajectories in simulations to validate our policy gradient primal-dual methods.
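For concreteness, the following is a minimal sketch of the kind of sample-based policy gradient primal-dual loop described above, on a small constrained LQR instance: the policy is updated with a zeroth-order estimate of the Lagrangian's gradient, and the multiplier with dual ascent. The dynamics, the state-energy constraint, the budget b, the step sizes, and the random-direction gradient estimator are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 2, 1, 30                      # state dim, input dim, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])  # example dynamics (assumed)
B = np.array([[0.0], [0.1]])
Q, R = np.eye(n), 0.1 * np.eye(m)       # LQR cost weights
b = 5.0                                 # constraint budget (assumed)

def rollout(K):
    """Simulate one noisy trajectory; return (LQR cost, constraint cost)."""
    x = rng.normal(size=n)
    cost, con = 0.0, 0.0
    for _ in range(T):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        con += x @ x                    # illustrative constraint: state energy
        x = A @ x + B @ u + 0.05 * rng.normal(size=n)
    return cost, con

def lagrangian(K, lam, n_traj=20):
    """Monte-Carlo estimate of the Lagrangian and of the constraint value."""
    costs = np.mean([rollout(K) for _ in range(n_traj)], axis=0)
    return costs[0] + lam * (costs[1] - b), costs[1]

K, lam = np.zeros((m, n)), 0.0          # initial policy and multiplier
eta_K, eta_lam, r = 1e-4, 1e-2, 0.05
for it in range(200):
    # Finite-difference estimate of the Lagrangian's slope along a random
    # direction, used here as a simple zeroth-order gradient surrogate.
    U = rng.normal(size=K.shape)
    U /= np.linalg.norm(U)
    L_plus, _ = lagrangian(K + r * U, lam)
    L_minus, _ = lagrangian(K - r * U, lam)
    K -= eta_K * (L_plus - L_minus) / (2 * r) * U     # primal step on the policy
    _, con = lagrangian(K, lam)
    lam = max(0.0, lam + eta_lam * (con - b))         # dual ascent on the multiplier

print("learned gain:", K, "multiplier:", lam)
```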
Model-based reinforcement learning (RL), which finds an optimal policy using an empirical model, has long been recognized as one of the cornerstones of RL. It is especially suitable for multi-agent RL (MARL), as it naturally decouples the learning and planning phases, and avoids the non-stationarity problem that arises when all agents improve their policies simultaneously using samples. Though intuitive and widely used, the sample complexity of model-based MARL algorithms has not been fully investigated. In this paper, our goal is to address this fundamental question about its sample complexity. We study arguably the most basic MARL setting: two-player discounted zero-sum Markov games, given only access to a generative model. We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error, and the $\epsilon$-NE policies with a smooth planning oracle, where $\gamma$ is the discount factor, and $S$, $A$, $B$ denote the state space and the action spaces of the two agents, respectively. We further show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, i.e., it queries state transition samples without reward knowledge, by establishing a matching lower bound. This is in contrast to the usual reward-aware setting, with a $\tilde{\Omega}(|S|(|A|+|B|)(1-\gamma)^{-3}\epsilon^{-2})$ lower bound, where this model-based approach is near-optimal with only a gap in the $|A|,|B|$ dependence. Our results not only demonstrate the sample-efficiency of this basic model-based approach in MARL, but also elaborate on the fundamental tradeoff between its power (easily handling the more challenging reward-agnostic case) and its limitation (being less adaptive and suboptimal in $|A|,|B|$), which arises particularly in the multi-agent context.
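As a concrete illustration of the model-based pipeline analyzed above, the sketch below queries a generative model a fixed number of times per state-action pair, builds an empirical transition model, and then plans on that model with value (Shapley) iteration for the zero-sum game, solving a small matrix game by linear programming at each backup. The toy game, the per-pair sample size N, and the use of a known reward are illustrative assumptions rather than the paper's construction or its minimax-optimal sample sizes.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A, B, gamma, N = 4, 2, 2, 0.9, 200    # toy sizes and per-pair sample budget

# Ground-truth toy model, hidden from the learner except through samples.
P = rng.dirichlet(np.ones(S), size=(S, A, B))   # P[s, a, b] is a distribution over s'
R = rng.uniform(0.0, 1.0, size=(S, A, B))       # reward, assumed known here

def generative_model(s, a, b):
    """One state-transition sample for (s, a, b)."""
    return rng.choice(S, p=P[s, a, b])

# Build the empirical transition model from N samples per (s, a, b).
P_hat = np.zeros((S, A, B, S))
for s in range(S):
    for a in range(A):
        for b in range(B):
            for _ in range(N):
                P_hat[s, a, b, generative_model(s, a, b)] += 1.0 / N

def matrix_game_value(M):
    """Value of the zero-sum matrix game max_x min_y x^T M y, via a small LP."""
    nA, nB = M.shape
    c = np.r_[np.zeros(nA), -1.0]                 # variables [x, v], minimize -v
    A_ub = np.c_[-M.T, np.ones(nB)]               # v <= x^T M e_j for every column j
    A_eq = np.r_[np.ones(nA), 0.0][None, :]       # x is a probability vector
    bounds = [(0, None)] * nA + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(nB),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return -res.fun

# Plan on the empirical model with value (Shapley) iteration.
V = np.zeros(S)
for _ in range(200):
    Q = R + gamma * P_hat @ V                     # Q[s, a, b] under the empirical model
    V = np.array([matrix_game_value(Q[s]) for s in range(S)])

print("approximate NE value per state:", np.round(V, 3))
```

Once the empirical model is built, any planning oracle can be swapped in; the LP-based matrix-game solve above is simply the most direct option for this toy instance.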
In decentralized stochastic control, standard approaches for sequential decision-making, e.g., dynamic programming, quickly become intractable due to the need to maintain a complex information state. Computational challenges are further compounded if agents do not possess complete model knowledge. In this paper, we take advantage of the fact that, in many problems, agents share some common information, or history, termed partial history sharing. Under this information structure, the policy search space is greatly reduced. We propose a provably convergent, online tree-search-based algorithm that does not require a closed-form model or explicit communication among agents. Interestingly, our algorithm can be viewed as a generalization of several existing heuristic solvers for decentralized partially observable Markov decision processes. To demonstrate the applicability of the model, we propose a novel collaborative intrusion response model, where multiple agents (defenders) possessing asymmetric information aim to collaboratively defend a computer network. Numerical results demonstrate the performance of our algorithm.
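To make the planning primitive concrete, here is a minimal sketch of generic online Monte-Carlo tree search (UCT) driven only by a simulator, the kind of model-free tree search the algorithm above builds on. The toy simulator, the state-keyed tree, and the exploration constant are illustrative assumptions and do not reproduce the partial-history-sharing construction or the multi-agent coordination in the paper.

```python
import math, random

ACTIONS = [0, 1]
GAMMA = 0.95

def simulate_step(state, action):
    """Toy generative simulator: returns (next_state, reward)."""
    next_state = (state + action + random.choice([0, 1])) % 5
    return next_state, (1.0 if next_state == 0 else 0.0)

class Node:
    """Per-state statistics: visit counts and running action-value means."""
    def __init__(self):
        self.visits = 0
        self.count = {a: 0 for a in ACTIONS}
        self.value = {a: 0.0 for a in ACTIONS}

def rollout(state, depth):
    """Random-policy rollout used to evaluate newly expanded nodes."""
    ret, discount = 0.0, 1.0
    for _ in range(depth):
        state, r = simulate_step(state, random.choice(ACTIONS))
        ret += discount * r
        discount *= GAMMA
    return ret

def uct_search(root, n_sims=500, depth=20, c=1.4):
    tree = {}  # keyed by state for simplicity (by history in general)

    def search(state, d):
        if d == 0:
            return 0.0
        if state not in tree:
            tree[state] = Node()
            return rollout(state, d)
        node = tree[state]
        def ucb(a):  # UCB1 score; unvisited actions are tried first
            if node.count[a] == 0:
                return float("inf")
            return node.value[a] + c * math.sqrt(math.log(node.visits + 1) / node.count[a])
        a = max(ACTIONS, key=ucb)
        nxt, r = simulate_step(state, a)
        ret = r + GAMMA * search(nxt, d - 1)
        node.visits += 1
        node.count[a] += 1
        node.value[a] += (ret - node.value[a]) / node.count[a]   # running mean
        return ret

    for _ in range(n_sims):
        search(root, depth)
    return max(ACTIONS, key=lambda a: tree[root].value[a])

print("recommended action at state 0:", uct_search(0))
```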
We propose a new defense mechanism against undetected infiltration into controllers in cyber-physical systems. To this end, we cautiously design the outputs of the sensors that monitor the state of the system. Different from defense mechanisms that seek to detect infiltration, the proposed approach seeks to minimize the damage of possible attacks before they have been detected. The controller of a cyber-physical system could have been infiltrated by an undetected attacker at any time of the operation. Disregarding such a possibility and disclosing the system's state without caution benefits the attacker in his/her malicious objective. Therefore, secure sensor design can further improve the security of cyber-physical systems when incorporated along with other defense mechanisms. Specifically, we consider a controlled Gauss-Markov process, where the controller could have been infiltrated at any time within the system's operation. In the sense of game-theoretic hierarchical equilibrium, we provide a semi-definite programming based algorithm to compute the optimal linear secure sensor outputs and numerically analyze the performance for various scenarios.
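As a point of reference for the model, the snippet below simulates a controlled Gauss-Markov process whose state is disclosed only through a linear, noisy sensor output that a (possibly infiltrated) controller acts on. The system matrices, noise levels, sensor gain L, and the simple output-feedback gain are illustrative assumptions; the paper's semi-definite programming design of the secure sensor outputs is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
A = np.array([[1.0, 0.1], [0.0, 0.9]])   # state dynamics (assumed)
B = np.array([[0.0], [0.1]])             # input matrix (assumed)
L = np.array([[1.0, 0.5]])               # linear sensor gain: the design variable
g = 0.8                                  # simple output-feedback gain (assumed)

x = rng.normal(size=(2, 1))
for t in range(T):
    w = 0.1 * rng.normal(size=(2, 1))    # process noise
    v = 0.1 * rng.normal()               # sensor noise
    y = (L @ x).item() + v               # disclosed sensor output y = Lx + v
    u = -g * y                           # controller (possibly infiltrated) acts only on y
    x = A @ x + B * u + w
print("final state:", x.ravel())
```

The security tradeoff in the abstract concerns the choice of L: a more informative sensor output helps the legitimate controller, but also benefits an attacker once the controller has been infiltrated.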