Trajectory optimization of a controlled dynamical system is an essential part of autonomy; however, many trajectory optimization techniques are limited by the fidelity of the underlying parametric model. In the field of robotics, a lack of model knowledge can be overcome with machine learning techniques, using measurements to build a dynamical model from data. This paper takes the middle ground between these two approaches by introducing a semi-parametric representation of the underlying system dynamics. Our goal is to leverage the considerable information contained in a traditional physics-based model and combine it with a data-driven, non-parametric regression technique known as a Gaussian process. Integrating this semi-parametric model with model predictive pseudospectral control, we demonstrate the technique on both cart-pole and quadrotor simulations with unmodeled damping and parametric error. To manage parametric uncertainty, we introduce an algorithm that uses Sparse Spectrum Gaussian Processes (SSGPs) for online learning after each rollout. We implement this online learning technique on the cart-pole and quadrotor, then demonstrate online learning combined with obstacle avoidance for Dubins vehicle dynamics.
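A minimal sketch of the semi-parametric idea, assuming a cart-pole with unmodeled damping and using scikit-learn's GaussianProcessRegressor in place of the paper's (sparse spectrum) Gaussian process machinery; the parameters, the synthetic data, and the helper names are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a semi-parametric dynamics model: a parametric physics
# model corrected by a Gaussian process fit to its residuals. Illustrative
# only; the cart-pole parameters, the synthetic "measurements", and the use
# of scikit-learn's GaussianProcessRegressor are assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def physics_model(x, u, m_cart=1.0, m_pole=0.1, l=0.5, g=9.81):
    """Nominal (frictionless) cart acceleration of a cart-pole."""
    _, x_dot, theta, theta_dot = x
    s, c = np.sin(theta), np.cos(theta)
    return (u + m_pole * s * (l * theta_dot**2 + g * c)) / (m_cart + m_pole * s**2)

# Logged rollout data (random placeholders here): states, inputs, and
# measured cart accelerations corrupted by unmodeled damping and noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # [x, x_dot, theta, theta_dot]
U = rng.normal(size=200)                           # applied force
nominal = np.array([physics_model(x, u) for x, u in zip(X, U)])
acc_measured = nominal - 0.2 * X[:, 1] + 0.01 * rng.normal(size=200)

# Fit the GP on the residual between measurement and the parametric model.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(np.column_stack([X, U]), acc_measured - nominal)

def semi_parametric_model(x, u):
    """Physics prediction plus the learned residual correction."""
    return physics_model(x, u) + gp.predict(np.r_[x, u].reshape(1, -1))[0]

print(semi_parametric_model(np.array([0.0, 1.0, 0.1, 0.0]), 0.5))
```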
When multiple model predictive controllers are implemented on a shared controller area network (CAN), their performance may degrade due to the inhomogeneous timing and delays among messages. The priority-based real-time scheduling of messages on the CAN introduces complex timing of events, especially when the types and number of messages change at runtime. This paper introduces a novel hybrid timing model to make runtime predictions of message timing over a finite time window. Controllers can be designed using optimization algorithms for model predictive control by treating the timing as optimization constraints. This timing model allows multiple controllers to share a CAN without significant degradation in controller performance, and it also provides a convenient way to check the schedulability of messages on the CAN at runtime. Simulation results demonstrate that the timing model is sufficiently accurate and computationally efficient to meet the needs of real-time implementation, and that model predictive controllers designed with the timing constraints taken into account outperform controllers designed without them.
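The runtime schedulability check mentioned above can be illustrated with classic fixed-priority CAN response-time analysis. The sketch below is that standard analysis, not the paper's hybrid timing model; the message set, the deadline-equals-period assumption, and the bit-time value are invented for the example:

```python
# Classic fixed-priority CAN worst-case response-time analysis (blocking by
# one lower-priority frame plus interference from higher-priority frames,
# solved by fixed-point iteration). This is the standard schedulability
# check, NOT the paper's hybrid timing model; the message set and the
# deadline-equals-period assumption are invented for the example.
import math

def can_response_times(messages, tau_bit=2e-6):
    """messages: list of (period, transmission_time) in priority order,
    highest priority first, all in seconds. Returns the worst-case response
    time of each frame, or None where the deadline (= period) is missed."""
    results = []
    for i, (T_i, C_i) in enumerate(messages):
        # A lower-priority frame already on the bus can block at most once.
        B_i = max((C_j for _, C_j in messages[i + 1:]), default=0.0)
        w = B_i
        while True:
            w_next = B_i + sum(math.ceil((w + tau_bit) / T_j) * C_j
                               for T_j, C_j in messages[:i])
            if w_next == w:
                break
            w = w_next
            if w + C_i > T_i:   # queuing delay + transmission misses the deadline
                w = None
                break
        results.append(None if w is None else w + C_i)
    return results

# Three periodic frames on a 500 kbit/s bus, roughly 0.25 ms per frame.
print(can_response_times([(0.005, 0.00025), (0.010, 0.00025), (0.020, 0.00025)]))
```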
A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions.
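As a rough illustration of the uncertainty-propagation step, the following is a hedged sketch in our own notation (not necessarily the paper's) of a polynomial chaos expansion pushed through uncertain linear dynamics; the additive Gaussian noise $w_k$ is treated separately, e.g. through the state covariance:

```latex
\begin{align}
  x_{k+1}(\theta) &= A(\theta)\,x_k(\theta) + B(\theta)\,u_k + w_k, \\
  x_k(\theta) &\approx \sum_{i=0}^{L} a_{i,k}\,\phi_i(\theta),
  \qquad \mathbb{E}\big[\phi_i(\theta)\,\phi_j(\theta)\big] = \delta_{ij},
  \qquad \phi_0 \equiv 1 .
\end{align}
Galerkin projection onto each basis polynomial yields deterministic dynamics
for the expansion coefficients,
\begin{equation}
  a_{j,k+1} = \sum_{i=0}^{L} \mathbb{E}\big[A(\theta)\,\phi_i(\theta)\,\phi_j(\theta)\big]\,a_{i,k}
            + \mathbb{E}\big[B(\theta)\,\phi_j(\theta)\big]\,u_k ,
\end{equation}
so $\mathbb{E}[x_k] = a_{0,k}$ and the SMPC cost can be expressed directly in
terms of the coefficients $a_{i,k}$.
```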
We present an algorithm for controlling and scheduling multiple linear time-invariant processes on a shared, bandwidth-limited communication network using adaptive sampling intervals. The controller is centralized and, at every sampling instant, not only computes the new control command for a process but also decides the time interval to wait before taking the next sample. The approach relies on model predictive control ideas, where the cost function penalizes the state and control effort as well as the time interval until the next sample is taken. The latter is introduced to generate an adaptive sampling scheme for the overall system, such that the sampling interval increases as the norm of the system state goes to zero. The paper presents a method for synthesizing such a predictive controller and gives explicit sufficient conditions for when it is stabilizing. Further explicit conditions are given that guarantee conflict-free transmissions on the network. It is shown that the optimization problem may be solved offline and that the controller can be implemented as a lookup table of state feedback gains. Simulation studies comparing the proposed algorithm to periodic sampling illustrate the potential performance gains.
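A minimal self-triggered sketch in the spirit of this approach, assuming a double-integrator plant, LQR-style gains per candidate sampling interval, and a simple reward for longer intervals; the weights and the specific ranking cost are illustrative choices, not the paper's synthesis procedure:

```python
# Offline: build a lookup table of (discretization, feedback gain) per
# candidate sampling interval. Online: pick the interval whose predicted
# cost, minus a reward for sampling less often, is smallest. This is an
# illustrative sketch, not the paper's controller synthesis.
import numpy as np
from scipy.linalg import expm, solve_discrete_are

A_c = np.array([[0.0, 1.0], [0.0, 0.0]])   # continuous-time double integrator
B_c = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])
lam = 0.5                                   # reward for longer sampling intervals
candidate_h = [0.05, 0.1, 0.2, 0.4]

def discretize(h):
    """Exact zero-order-hold discretization via the augmented matrix exponential."""
    M = expm(np.block([[A_c, B_c], [np.zeros((1, 3))]]) * h)
    return M[:2, :2], M[:2, 2:]

table = {}
for h in candidate_h:
    A_d, B_d = discretize(h)
    P = solve_discrete_are(A_d, B_d, Q, R)
    K = np.linalg.solve(R + B_d.T @ P @ B_d, B_d.T @ P @ A_d)
    table[h] = (A_d, B_d, K, P)

def control_and_next_interval(x):
    """Return the control command and the interval to wait before the next sample."""
    best = None
    for h, (A_d, B_d, K, P) in table.items():
        u = -K @ x
        x_next = A_d @ x + B_d @ u
        cost = x_next @ P @ x_next + float(u.T @ R @ u) - lam * h
        if best is None or cost < best[0]:
            best = (cost, u, h)
    return best[1], best[2]

u, h_next = control_and_next_interval(np.array([1.0, 0.0]))
print(u, h_next)
```

Because the quadratic terms shrink as the state approaches the origin while the interval reward stays fixed, longer intervals are selected near the origin, reproducing the adaptive-sampling behavior described in the abstract.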
A new distributed MPC algorithm for the regulation of dynamically coupled subsystems is presented in this paper. The current control action is computed via two robust controllers working in a nested fashion. The inner controller builds a nominal reference trajectory from a decentralized perspective. The outer controller uses this information to take into account the effects of the coupling and to generate a distributed control action. The tube-based approach to robustness is employed. A supplementary constraint is included in the outer optimization problem to provide recursive feasibility of the overall controller.
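A toy sketch of the nested two-layer structure for one subsystem with a single neighbor, written with cvxpy; the dynamics, coupling matrix, horizon, and the constant constraint tightening standing in for the tube cross-section are all assumptions made for illustration, not the paper's formulation:

```python
# Nested two-layer MPC sketch for one subsystem i coupled to a neighbor j.
# Inner layer: nominal decentralized trajectory (coupling ignored).
# Outer layer: re-optimize using the neighbor's communicated nominal
# trajectory, with constraints tightened by a fixed "tube" margin.
# All numbers below are invented for illustration.
import numpy as np
import cvxpy as cp

N = 10                                        # prediction horizon
A_ii = np.array([[1.0, 0.1], [0.0, 1.0]])     # local double-integrator dynamics
B_i = np.array([[0.005], [0.1]])
A_ij = 0.05 * np.eye(2)                       # weak dynamic coupling to neighbor j
x0_i = np.array([1.0, 0.0])
x_nom_j = np.zeros((2, N + 1))                # neighbor's nominal trajectory (communicated)
tube_margin = 0.1                             # assumed constant tightening

def solve_layer(include_coupling):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = cp.sum_squares(x) + cp.sum_squares(u)
    cons = [x[:, 0] == x0_i]
    for k in range(N):
        coupling = A_ij @ x_nom_j[:, k] if include_coupling else np.zeros(2)
        cons += [x[:, k + 1] == A_ii @ x[:, k] + B_i @ u[:, k] + coupling]
        cons += [cp.norm(u[:, k], "inf") <= 1.0 - tube_margin]      # tightened input set
        cons += [cp.norm(x[:, k + 1], "inf") <= 5.0 - tube_margin]  # tightened state set
    cp.Problem(cp.Minimize(cost), cons).solve()
    return x.value, u.value

x_inner, u_inner = solve_layer(include_coupling=False)   # inner: decentralized nominal
x_outer, u_outer = solve_layer(include_coupling=True)    # outer: accounts for coupling
```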
This paper presents a stochastic model predictive control (MPC) algorithm that leverages short-term probabilistic forecasts for dispatching and rebalancing Autonomous Mobility-on-Demand systems (AMoD, i.e., fleets of self-driving vehicles). We first present the core stochastic optimization problem in terms of a time-expanded network flow model. Then, to improve its tractability, we introduce two key relaxations. First, we replace the original stochastic problem with a Sample Average Approximation (SAA) and characterize the resulting performance guarantees. Second, we decouple the controller into two parts, handling the task of assigning vehicles to outstanding customers separately from that of rebalancing. This allows the problem to be solved as two totally unimodular linear programs, and thus scales easily to large problem sizes. Finally, we test the proposed algorithm in two scenarios based on real data and show that it outperforms prior state-of-the-art algorithms. In particular, in a simulation using customer data from DiDi Chuxing, the algorithm presented here achieves a 62.3 percent reduction in customer waiting time compared to state-of-the-art non-stochastic algorithms.
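A small sketch of the two-step separation, assuming scipy's linear_sum_assignment for the vehicle-to-customer step and a balanced transportation LP for rebalancing; the travel times, station surpluses, and costs are invented for the example and do not reproduce the paper's time-expanded network flow model:

```python
# Step 1: assign idle vehicles to outstanding customers by travel time.
# Step 2: rebalance remaining idle vehicles toward stations with forecast
# excess demand via a transportation LP (totally unimodular constraint
# matrix, so the LP relaxation is integral). Illustrative data only.
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

# --- Step 1: vehicle-to-customer assignment (minimize total travel time). ---
travel_time = np.array([[4.0, 9.0, 3.0],      # rows: idle vehicles
                        [7.0, 2.0, 8.0],      # cols: waiting customers
                        [6.0, 5.0, 1.0]])
veh_idx, cust_idx = linear_sum_assignment(travel_time)
print("assignment:", list(zip(veh_idx, cust_idx)))

# --- Step 2: rebalancing as a balanced transportation LP. ---
surplus = np.array([2, 1])        # stations with spare idle vehicles
deficit = np.array([1, 2])        # stations short of vehicles (forecast demand)
cost = np.array([[5.0, 8.0],      # rebalancing travel cost, surplus -> deficit
                 [6.0, 3.0]])
n_s, n_d = cost.shape
c = cost.flatten()                # flow variable f[i, j], flattened row-major
A_eq, b_eq = [], []
for i in range(n_s):              # ship out exactly the surplus at each source
    row = np.zeros(n_s * n_d); row[i * n_d:(i + 1) * n_d] = 1
    A_eq.append(row); b_eq.append(surplus[i])
for j in range(n_d):              # receive exactly the deficit at each sink
    row = np.zeros(n_s * n_d); row[j::n_d] = 1
    A_eq.append(row); b_eq.append(deficit[j])
res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None), method="highs")
print("rebalancing flows:\n", res.x.reshape(n_s, n_d))
```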