
A Sequential Learning Algorithm for Probabilistically Robust Controller Tuning

Published by: Robert Chin
Publication date: 2021
Research field: Informatics Engineering
Language: English





In this paper, we introduce a sequential learning algorithm to address a probabilistically robust controller tuning problem. The algorithm leverages ideas from the areas of randomised algorithms and ordinal optimisation, both of which have been proposed to find approximate solutions for difficult design problems in control. We formally prove that our algorithm yields a controller which meets a specified probabilistic performance specification, assuming a Gaussian or near-Gaussian copula model for the controller performances. Additionally, we characterise the computational requirement of the algorithm by using a lower bound on the distribution function of the algorithm's stopping time. To validate our work, the algorithm is then demonstrated for the purpose of tuning model predictive controllers on a diesel engine air-path. It is shown that the algorithm is able to successfully tune a single controller to meet a desired performance threshold, even in the presence of the uncertainty in the diesel engine model that is inherent when a single representation is used across a fleet of vehicles.
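To make the flavour of such a scheme concrete, the sketch below shows a generic scenario-based tuning loop in Python. It is an illustration only, not the algorithm analysed in the paper: the plant model, cost function, threshold J_MAX, probability level P_LEVEL and the fixed per-candidate sample budget are all placeholder choices, and the paper's stopping-time characterisation and copula-based guarantee are not reproduced here.

```python
import numpy as np

# Illustrative scenario-based tuning loop (placeholder models and thresholds);
# not the algorithm or guarantee proved in the paper.

rng = np.random.default_rng(0)

def sample_plant():
    # Placeholder: one uncertain plant realisation (e.g. a gain spread across a fleet).
    return 1.0 + 0.1 * rng.standard_normal()

def closed_loop_cost(theta, plant_gain):
    # Placeholder performance index for controller parameter `theta`.
    return (plant_gain - theta) ** 2 + 0.05 * abs(theta)

J_MAX = 0.10        # performance threshold the tuned controller must meet
P_LEVEL = 0.90      # required probability of meeting the threshold
N_SCENARIOS = 200   # plant samples drawn per candidate (fixed here for simplicity)

def tune(max_candidates=500):
    best = None
    for _ in range(max_candidates):
        theta = rng.uniform(0.5, 1.5)              # sample a candidate controller
        costs = np.array([closed_loop_cost(theta, sample_plant())
                          for _ in range(N_SCENARIOS)])
        p_hat = float(np.mean(costs <= J_MAX))     # empirical success probability
        if best is None or p_hat > best[1]:
            best = (theta, p_hat)
        if p_hat >= P_LEVEL:                       # stop once the spec is met empirically
            return theta, p_hat
    return best                                    # fall back to the best candidate observed

theta_star, p_star = tune()
print(f"selected theta = {theta_star:.3f}, empirical success probability = {p_star:.2f}")
```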



Read also

This paper proposes a controller for stable grasping of unknown-shaped objects by two robotic fingers with tactile fingertips. The grasp is stabilised by rolling the fingertips on the contact surface and applying a desired grasping force to reach an equilibrium state. The validation is both in simulation and on a fully-actuated robot hand (the Shadow Modular Grasper) fitted with custom-built optical tactile sensors (based on the BRL TacTip). The controller requires the orientations of the contact surfaces, which are estimated by regressing a deep convolutional neural network over the tactile images. Overall, the grasp system is demonstrated to achieve stable equilibrium poses on various objects ranging in shape and softness, with the system being robust to perturbations and measurement errors. This approach also has promise to extend beyond grasping to stable in-hand object manipulation with multiple fingers.
We present a data-driven model predictive control (MPC) scheme for chance-constrained Markov jump systems with unknown switching probabilities. Using samples of the underlying Markov chain, ambiguity sets of transition probabilities are estimated which include the true conditional probability distributions with high probability. These sets are updated online and used to formulate a time-varying, risk-averse optimal control problem. We prove recursive feasibility of the resulting MPC scheme and show that the original chance constraints remain satisfied at every time step. Furthermore, we show that under sufficient decrease of the confidence levels, the resulting MPC scheme renders the closed-loop system mean-square stable with respect to the true-but-unknown distributions, while remaining less conservative than a fully robust approach. Finally, we show that the data-driven value function converges to its nominal counterpart as the sample size grows to infinity. We illustrate our approach on a numerical example.
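As a rough illustration of the kind of data-driven construction described above, the sketch below estimates empirical transition probabilities from samples of a Markov chain and attaches an L1 confidence radius to each row using a standard Weissman-type concentration bound. The specific bound, the uniform fallback for unvisited modes and the synthetic three-mode chain are assumptions for illustration; the paper's ambiguity sets, their online updates and the resulting MPC formulation are not reproduced.

```python
import numpy as np

# Illustrative only: empirical transition estimates plus a Weissman-type L1
# confidence radius per row; not the paper's exact ambiguity-set construction.

def empirical_transitions(trajectory, n_modes):
    counts = np.zeros((n_modes, n_modes))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    visits = counts.sum(axis=1)
    p_hat = np.full((n_modes, n_modes), 1.0 / n_modes)  # uniform guess for unvisited modes
    for i in range(n_modes):
        if visits[i] > 0:
            p_hat[i] = counts[i] / visits[i]
    return p_hat, visits

def l1_radius(n_samples, n_modes, confidence=0.95):
    # With probability >= confidence, ||p_hat_row - p_true_row||_1 <= radius.
    beta = 1.0 - confidence
    n = np.asarray(n_samples, dtype=float)
    radius = np.sqrt(2.0 / np.maximum(n, 1.0) * np.log((2 ** n_modes - 2) / beta))
    return np.where(n > 0, radius, np.inf)

# Synthetic 3-mode chain observed for 500 steps.
rng = np.random.default_rng(1)
true_P = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])
traj = [0]
for _ in range(500):
    traj.append(rng.choice(3, p=true_P[traj[-1]]))

P_hat, visits = empirical_transitions(traj, 3)
print("P_hat:\n", P_hat.round(2))
print("L1 radii:", l1_radius(visits, 3).round(3))
```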
In this technical note we analyse the performance improvement and optimality properties of the Learning Model Predictive Control (LMPC) strategy for linear deterministic systems. The LMPC framework is a policy iteration scheme where closed-loop trajectories are used to update the control policy for the next execution of the control task. We show that, when a Linear Independence Constraint Qualification (LICQ) condition holds, the LMPC scheme guarantees strict iterative performance improvement and optimality, meaning that the closed-loop cost evaluated over the entire task converges asymptotically to the optimal cost of the infinite-horizon control problem. Compared to previous works, this sufficient LICQ condition can be easily checked, holds for a larger class of systems, and can be used to adaptively select the prediction horizon of the controller, as demonstrated by a numerical example.
This paper proposes a data-driven control framework to regulate an unknown, stochastic linear dynamical system to the solution of a (stochastic) convex optimization problem. Despite the centrality of this problem, most of the available methods critically rely on a precise knowledge of the system dynamics (thus requiring off-line system identification and model refinement). To this end, in this paper we first show that the steady-state transfer function of a linear system can be computed directly from control experiments, bypassing explicit model identification. Then, we leverage the estimated transfer function to design a controller, inspired by stochastic gradient descent methods, that regulates the system to the solution of the prescribed optimization problem. A distinguishing feature of our methods is that they do not require any knowledge of the system dynamics, disturbance terms, or their distributions. Our technical analysis combines concepts and tools from behavioral system theory, stochastic optimization with decision-dependent distributions, and stability analysis. We illustrate the applicability of the framework on a case study for mobility-on-demand ride service scheduling in Manhattan, NY.
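The sketch below illustrates, on a toy two-input two-output static plant, the two ingredients this abstract describes: fitting a steady-state input-output map directly from a handful of control experiments, and then using that estimate inside a gradient-descent-style feedback law that drives the output toward the minimiser of a convex cost. The noise-free static plant, the probing design, the quadratic cost and the step size are illustrative assumptions; the paper's behavioural-theory machinery and stochastic analysis are not reproduced.

```python
import numpy as np

# Minimal sketch under simplifying assumptions (noise-free steady states,
# a static linear map y_ss = G u + b): estimate the steady-state gain from
# experiments, then regulate with a gradient-style feedback law.

rng = np.random.default_rng(2)

# Unknown "true" steady-state map of the plant (2 inputs -> 2 outputs).
G_true = np.array([[1.5, 0.3],
                   [0.2, 0.8]])
b_true = np.array([0.5, -0.2])

def steady_state(u):
    return G_true @ u + b_true        # stand-in for running the plant to steady state

# Step 1: probe with a few input experiments and fit (G, b) by least squares.
U = rng.standard_normal((6, 2))                  # probing inputs
Y = np.array([steady_state(u) for u in U])
A = np.hstack([U, np.ones((len(U), 1))])         # [u, 1] regressors
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
G_hat, b_hat = coef[:2].T, coef[2]

# Step 2: drive the output toward y_ref, minimising f(y) = 0.5 * ||y - y_ref||^2
# with a gradient step taken through the estimated gain.
y_ref = np.array([1.0, 0.0])
u = np.zeros(2)
for _ in range(50):
    y = steady_state(u)                          # measurement
    u = u - 0.2 * G_hat.T @ (y - y_ref)          # gradient-descent-like update

print("estimated G:\n", G_hat.round(3))
print("estimated offset:", b_hat.round(3))
print("final output:", steady_state(u).round(3), "target:", y_ref)
```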
In this paper, we introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems. The basic idea is to apply the proximal majorization-minimization algorithm to solve the nonconvex problem with the inner subproblems solved by a sparse semismooth Newton (SSN) method based proximal point algorithm (PPA). We must emphasize that the main difficulty in the design of the algorithm lies in how to overcome the singular difficulty of the inner subproblem. Furthermore, we also prove that the PPMM algorithm converges to a d-stationary point. Due to the Kurdyka-Lojasiewicz (KL) property of the problem, we present the convergence rate of the PPMM algorithm. Numerical experiments demonstrate that our proposed algorithm outperforms the existing state-of-the-art algorithms.
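For a sense of the outer majorization-minimization structure, the sketch below applies a proximal MM loop to a simple nonconvex Cauchy-type robust regression. It is only a pattern illustration: the quadratic tangent surrogate and closed-form inner solve stand in for the paper's semismooth-Newton-based proximal point algorithm, the loss is not the paper's tuning-free formulation, and no claim is made about d-stationarity or KL-based rates.

```python
import numpy as np

# Pattern sketch of proximal majorization-minimization (MM) on a nonconvex
# robust regression; NOT the paper's PPMM algorithm (different loss, and the
# inner subproblem here is solved in closed form rather than by an SSN-based PPA).

rng = np.random.default_rng(3)

def fit_robust(X, y, tau=1e-3, iters=100):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ beta
        # Tangent majorization of log(1 + r^2) in r^2 gives per-sample weights.
        w = 1.0 / (1.0 + r ** 2)
        # Inner subproblem: weighted least squares + proximal term (tau/2)||b - beta||^2,
        # minimised exactly since the surrogate is quadratic in b.
        H = 2.0 * X.T @ (w[:, None] * X) + tau * np.eye(p)
        beta = beta + np.linalg.solve(H, 2.0 * X.T @ (w * r))
    return beta

# Synthetic data with 10% gross outliers.
X = rng.standard_normal((200, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:20] += 10.0 * rng.standard_normal(20)

print("proximal MM fit:       ", fit_robust(X, y).round(2))
print("ordinary least squares:", np.linalg.lstsq(X, y, rcond=None)[0].round(2))
```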