This paper studies the problem of steering a linear time-invariant system subject to state and input constraints towards a goal location that may be inferred only through partial observations. We assume mixed-observable settings, where the system's state is fully observable and the environment's state defining the goal location is only partially observed. In these settings, the planning problem is an infinite-dimensional optimization problem where the objective is to minimize the expected cost. We show how to reformulate the control problem as a finite-dimensional deterministic problem by optimizing over a trajectory tree. Leveraging this result, we demonstrate that when the environment is static, the observation model is piecewise, and the cost function is convex, the original control problem can be reformulated as a Mixed-Integer Convex Program (MICP) that can be solved to global optimality using a branch-and-bound algorithm. The effectiveness of the proposed approach is demonstrated on navigation tasks, where the system has to reach a goal location identified from partial observations.
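As an illustration of the trajectory-tree idea (not the paper's implementation), the following minimal Python/cvxpy sketch plans over a tree with a shared prefix and one branch per goal hypothesis. The dynamics, goals, prior, and observation time are invented, and the piecewise observation model that makes the full problem a MICP is omitted, so this reduced version remains a plain convex program.

# Minimal trajectory-tree sketch (illustrative, not the paper's code).
# Linear system x+ = A x + B u, two goal hypotheses with a prior.
# The true goal is revealed at time t_obs, so the plan shares a prefix
# and then branches: one tail per hypothesis. Expected cost is minimized.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator
B = np.array([[0.0], [0.1]])
goals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
prior = [0.6, 0.4]
x0 = np.zeros(2)
N, t_obs = 20, 8                          # horizon and observation time

# Shared prefix, applied before the observation arrives
x_pre = cp.Variable((t_obs + 1, 2))
u_pre = cp.Variable((t_obs, 1))
constr = [x_pre[0] == x0]
cost = 0
for t in range(t_obs):
    constr += [x_pre[t + 1] == A @ x_pre[t] + B @ u_pre[t],
               cp.abs(u_pre[t]) <= 1.0]
    cost += cp.sum_squares(u_pre[t])

# One branch per goal hypothesis, each starting from the prefix end state
for g, p in zip(goals, prior):
    x_br = cp.Variable((N - t_obs + 1, 2))
    u_br = cp.Variable((N - t_obs, 1))
    constr += [x_br[0] == x_pre[t_obs]]
    branch_cost = 0
    for t in range(N - t_obs):
        constr += [x_br[t + 1] == A @ x_br[t] + B @ u_br[t],
                   cp.abs(u_br[t]) <= 1.0]
        branch_cost += cp.sum_squares(u_br[t]) + cp.sum_squares(x_br[t + 1] - g)
    cost += p * branch_cost                # expected cost over hypotheses

cp.Problem(cp.Minimize(cost), constr).solve()
print("expected cost:", cost.value)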
Ugo Rosolia, Aaron D. Ames (2021)
In this paper, we present an iterative Model Predictive Control (MPC) design for piecewise nonlinear systems. We consider finite time control tasks where the goal of the controller is to steer the system from a starting configuration to a goal state while minimizing a cost function. First, we present an algorithm that leverages a feasible trajectory that completes the task to construct a control policy which guarantees that state and input constraints are recursively satisfied and that the closed-loop system reaches the goal state in finite time. Utilizing this construction, we present a policy iteration scheme that iteratively generates safe trajectories which have non-decreasing performance. Finally, we test the proposed strategy on a discretized Spring Loaded Inverted Pendulum (SLIP) model with massless legs. We show that our methodology is robust to changes in initial conditions and disturbances acting on the system. Furthermore, we demonstrate the effectiveness of our policy iteration algorithm in a minimum time control task.
We present a straightforward and efficient way to control unstable robotic systems using an estimated dynamics model. Specifically, we show how to exploit the differentiability of Gaussian Processes to create a state-dependent linearized approximation of the true continuous dynamics that can be integrated with model predictive control. Our approach is compatible with most Gaussian process approaches for system identification, and can learn an accurate model using modest amounts of training data. We validate our approach by learning the dynamics of an unstable system such as a segway with a 7-D state space and 2-D input space (using only one minute of data), and we show that the resulting controller is robust to unmodelled dynamics and disturbances, while state-of-the-art control methods based on nominal models can fail under small perturbations. Code is open sourced at https://github.com/learning-and-control/core .
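The core trick can be sketched as follows: for an RBF-kernel GP the posterior mean is differentiable in closed form, so its Jacobian at the current state-input pair yields a local linear model for MPC. The toy data, hyperparameters, and one-dimensional dynamics below are illustrative assumptions, not the released code.

# Sketch: analytic Jacobian of a GP posterior mean with an RBF kernel,
# usable as a state-dependent linearization of learned dynamics.
import numpy as np

def rbf(Xa, Xb, ell=0.5, sf=1.0):
    d = Xa[:, None, :] - Xb[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)

# Toy training data: inputs z = (x, u), targets y = next-state residual
rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(50, 2))                  # columns: [x, u]
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] + 0.01 * rng.standard_normal(50)

ell, sf, sn = 0.5, 1.0, 0.05
K = rbf(Z, Z, ell, sf) + sn**2 * np.eye(len(Z))
alpha = np.linalg.solve(K, y)                          # (K + sn^2 I)^{-1} y

def gp_mean_and_jac(z_star):
    """Posterior mean mu(z*) and its Jacobian d mu / d z*."""
    k = rbf(z_star[None, :], Z, ell, sf).ravel()       # k(z*, Z_i)
    mu = k @ alpha
    # For the RBF kernel: d k_i / d z* = -(z* - Z_i) / ell^2 * k_i
    dk = -(z_star[None, :] - Z) / ell**2 * k[:, None]
    return mu, dk.T @ alpha

mu, jac = gp_mean_and_jac(np.array([0.2, -0.1]))
# jac[0] approximates df/dx and jac[1] df/du at this point, i.e. the
# A, B entries of a local linear model that an MPC can use.
print(mu, jac)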
In this paper, we introduce the notion of periodic safety, which requires that the system trajectories periodically visit a subset of a forward-invariant safe set, and utilize it in a multi-rate framework where a high-level planner generates a reference trajectory that is tracked by a low-level controller under input constraints. We introduce the notion of fixed-time barrier functions, which are leveraged by the proposed low-level controller in a quadratic programming framework. Then, we design a model predictive control policy for high-level planning with a bound on the rate of change for the reference trajectory to guarantee that periodic safety is achieved. We demonstrate the effectiveness of the proposed strategy on a simulation example, where the proposed fixed-time stabilizing low-level controller shows successful satisfaction of control objectives, whereas an exponentially stabilizing low-level controller fails.
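For context, a low-level safety filter of this kind is typically posed as a small quadratic program. The sketch below uses the standard exponential CBF condition rather than the paper's fixed-time barrier condition, and the dynamics, safe set, and gains are made-up placeholders.

# Sketch: low-level safety filter as a CBF quadratic program.
# The standard condition  dh/dx (f + g u) >= -gamma * h  is used here;
# a fixed-time barrier function changes the right-hand side.
import numpy as np
import cvxpy as cp

def cbf_qp(x, u_ref, gamma=1.0, u_max=2.0):
    # Toy control-affine single integrator in 2D: xdot = u
    f = np.zeros(2)
    g = np.eye(2)
    # Safe set: stay inside the unit disk, h(x) = 1 - ||x||^2
    h = 1.0 - x @ x
    dh = -2.0 * x                           # gradient of h
    u = cp.Variable(2)
    constraints = [dh @ (f + g @ u) >= -gamma * h,
                   cp.norm(u, "inf") <= u_max]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), constraints).solve()
    return u.value

# A reference command pushing toward the boundary gets filtered
print(cbf_qp(x=np.array([0.9, 0.0]), u_ref=np.array([1.0, 0.0])))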
Mixed observable Markov decision processes (MOMDPs) are a modeling framework for autonomous systems described by both fully and partially observable states. In this work, we study the problem of synthesizing a control policy for MOMDPs that minimizes the expected time to complete the control task while satisfying syntactically co-safe Linear Temporal Logic (scLTL) specifications. First, we present an exact dynamic programming update to compute the value function. Afterwards, we propose a point-based approximation, which allows us to compute a lower bound of the closed-loop probability of satisfying the specifications. The effectiveness of the proposed approach and comparisons with standard strategies are shown on high-fidelity navigation tasks with partially observable static obstacles.
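A minimal sketch of the belief update such mixed-observable approaches rely on: the system state is measured exactly, so only a Bayes update over the hidden (static) environment state is maintained. The observation model and numbers below are hypothetical.

# Sketch: belief update for a mixed observable model.
# The system state s is measured exactly; only the static environment
# state e is hidden, so the belief is a vector over environment values.
import numpy as np

def belief_update(belief, s_next, obs, obs_model):
    """belief[e] = P(e); obs_model(obs, s, e) = P(obs | s, e)."""
    likelihood = np.array([obs_model(obs, s_next, e)
                           for e in range(len(belief))])
    post = likelihood * belief
    return post / post.sum()

def obs_model(obs, s, e):
    # Hypothetical goal locations for e = 0 and e = 1; a noisy detector
    # fires more often when the system is near the true goal.
    goal_pos = [0.0, 2.0][e]
    p_detect = 0.9 if abs(s - goal_pos) < 1.0 else 0.2
    return p_detect if obs == 1 else 1.0 - p_detect

b = np.array([0.5, 0.5])                 # uniform prior over e in {0, 1}
b = belief_update(b, s_next=0.3, obs=1, obs_model=obs_model)
print(b)                                 # mass shifts toward the nearby goal e = 0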
In this technical note we analyse the performance improvement and optimality properties of the Learning Model Predictive Control (LMPC) strategy for linear deterministic systems. The LMPC framework is a policy iteration scheme where closed-loop trajectories are used to update the control policy for the next execution of the control task. We show that, when a Linear Independence Constraint Qualification (LICQ) condition holds, the LMPC scheme guarantees strict iterative performance improvement and optimality, meaning that the closed-loop cost evaluated over the entire task converges asymptotically to the optimal cost of the infinite-horizon control problem. Compared to previous works, this sufficient LICQ condition is easy to check, holds for a larger class of systems, and can be used to adaptively select the prediction horizon of the controller, as demonstrated by a numerical example.
Ugo Rosolia, Aaron D. Ames (2020)
In this paper we present a multi-rate control architecture for safety critical systems. We consider a high level planner and a low level controller which operate at different frequencies. This multi-rate behavior is described by a piecewise nonlinear model which evolves on a continuous and a discrete level. First, we present sufficient conditions which guarantee recursive constraint satisfaction for the closed-loop system. Afterwards, we propose a control design methodology which leverages Control Barrier Functions (CBFs) for low level control and Model Predictive Control (MPC) policies for high level planning. The control barrier function is designed using the full nonlinear dynamical model and the MPC is based on a simplified planning model. When the nonlinear system is control affine and the high level planning model is linear, the control actions are computed by solving convex optimization problems at each level of the hierarchy. Finally, we show the effectiveness of the proposed strategy on a simulation example, where the low level control action is updated at a higher frequency than the high level command.
In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite time convergence properties for the closed-loop system. We show how to construct a time varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time varying and it guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum time obstacle avoidance and racing examples.
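The convexification can be sketched as follows: the terminal state is constrained to the convex hull of stored states and the terminal cost is the matching barycentric interpolation of stored costs-to-go. The toy system, stored data, and horizon below are illustrative, not taken from the paper.

# Sketch: convex LMPC terminal ingredients built from stored data.
# The terminal state must lie in the convex hull of previously visited
# states, and the terminal cost interpolates their recorded costs-to-go.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
N = 5

# Stored closed-loop data from an earlier iteration (illustrative values)
SS = np.array([[1.0, 0.0], [0.7, -0.3], [0.3, -0.2], [0.0, 0.0]])  # states
J = np.array([10.0, 6.0, 2.0, 0.0])                                # costs-to-go

x = cp.Variable((N + 1, 2))
u = cp.Variable((N, 1))
lam = cp.Variable(len(SS), nonneg=True)        # barycentric multipliers

constr = [x[0] == x0, cp.sum(lam) == 1,
          x[N] == SS.T @ lam]                  # terminal state in conv(SS)
cost = J @ lam                                 # interpolated terminal cost
for t in range(N):
    constr += [x[t + 1] == A @ x[t] + B @ u[t], cp.abs(u[t]) <= 1.0]
    cost += cp.sum_squares(x[t]) + cp.sum_squares(u[t])

cp.Problem(cp.Minimize(cost), constr).solve()
print("first input:", u.value[0], "predicted cost:", cost.value)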
Ugo Rosolia, Xiaojing Zhang (2019)
A robust Learning Model Predictive Controller (LMPC) for uncertain systems performing iterative tasks is presented. At each iteration of the control task the closed-loop state, input and cost are stored and used in the controller design. This paper first illustrates how to construct robust invariant sets and safe control policies exploiting historical data. Then, we propose an iterative LMPC design procedure, where data generated by a robust controller at iteration $j$ are used to design a robust LMPC at the next iteration $j+1$. We show that this procedure allows us to iteratively enlarge the domain of the control policy and it guarantees recursive constraint satisfaction, input-to-state stability and performance bounds for the certainty equivalent closed-loop system. The use of an adaptive prediction horizon is the key element of the proposed design. The effectiveness of the proposed control scheme is illustrated on a linear system subject to bounded additive disturbance.
We present a sample-based Learning Model Predictive Controller (LMPC) for constrained uncertain linear systems subject to bounded additive disturbances. The proposed controller builds on earlier work on LMPC for deterministic systems. First, we introduce the design of the safe set and value function used to guarantee safety and performance improvement. Afterwards, we show how these quantities can be approximated using noisy historical data. The effectiveness of the proposed approach is demonstrated on a numerical example. We show that the proposed LMPC is able to safely explore the state space and to iteratively improve the worst-case closed-loop performance, while robustly satisfying state and input constraints.