
$\mathcal{N}$IPM-MPC: An Efficient Null-Space Method Based Interior-Point Method for Model Predictive Control

Published by: Kai Pfeiffer
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Linear Model Predictive Control (MPC) is a widely used method to control systems with linear dynamics. Efficient interior-point methods have been proposed which leverage the block-diagonal structure of the quadratic program (QP) resulting from the receding horizon control formulation. However, they require two matrix factorizations per interior-point iteration, one each for the computation of the dual and the primal variables. Recently, an interior-point method based on the null-space method has been proposed which requires only a single factorization per iteration. While the null-space basis used there leads to dense null-space projections, in this work we propose a sparse null-space basis which preserves the block-diagonal structure of the MPC matrices. Since this basis is built on the inverse of the transfer matrix, we introduce the notion of so-called virtual controls, which ensures precisely that invertibility. The combination of the reduced number of factorizations and the omission of the dual evaluation lets our solver outperform others in terms of computational speed, by a margin that grows with the number of state and control variables.
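
For intuition, the following is a minimal sketch of the generic null-space method for an equality-constrained QP, the building block applied inside each interior-point iteration. It uses a dense null-space basis from SciPy rather than the sparse, block-structure-preserving basis proposed in the paper; the function name and the least-squares recovery of the dual are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.linalg import null_space, cho_factor, cho_solve

def qp_nullspace(H, g, A, b):
    """Solve min 0.5 x'Hx + g'x  s.t.  Ax = b via the null-space method.

    Only one factorization (of the reduced Hessian Z'HZ) is needed for the
    primal step; the dual can be recovered afterwards if it is needed at all.
    """
    # Particular solution of the equality constraints.
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]
    # Basis Z of the null space of A (dense here; the paper proposes a
    # sparse basis that preserves the MPC block structure).
    Z = null_space(A)
    # Reduced (projected) Hessian and gradient.
    H_red = Z.T @ H @ Z
    g_red = Z.T @ (g + H @ x_p)
    # Single factorization per solve.
    p_z = cho_solve(cho_factor(H_red), -g_red)
    x = x_p + Z @ p_z
    # Dual (Lagrange multipliers) from stationarity H x + g = A' lam.
    lam = np.linalg.lstsq(A.T, H @ x + g, rcond=None)[0]
    return x, lam

In an MPC setting, H, g, A and b would be the (block-structured) Hessian, gradient, dynamics matrix and right-hand side of the receding-horizon QP at the current interior-point iterate.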


Read also

Move blocking (MB) is a widely used strategy to reduce the degrees of freedom of the Optimal Control Problem (OCP) arising in receding horizon control. The size of the OCP is reduced by forcing the input variables to be constant over multiple discretization steps. In this paper, we focus on developing computationally efficient MB schemes for multiple-shooting-based nonlinear model predictive control (NMPC). The degrees of freedom of the OCP are reduced by introducing MB in the shooting step, resulting in a smaller but sparse OCP, so that the discretization accuracy and the level of sparsity are maintained. A condensing algorithm that exploits the sparsity structure of the OCP is proposed, which reduces the computational complexity of condensing from quadratic to linear in the number of discretization nodes. As a result, active-set methods with a warm-start strategy can be employed efficiently, allowing the use of a longer prediction horizon. A detailed comparison between the proposed scheme and nonuniform-grid NMPC is given. The effectiveness of the algorithm in reducing the computational burden while maintaining optimization accuracy and constraint fulfillment is shown by means of simulations with two different problems.
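
As a side note, the core of move blocking can be written as a linear map u_full = T u_reduced with a piecewise-constant blocking matrix T. The sketch below is a generic illustration, not the paper's condensing algorithm; the block pattern and function name are made up for the example, and for multi-input systems T would additionally be Kronecker-multiplied with an identity of the input dimension.

import numpy as np

def move_blocking_matrix(blocks):
    """Build the blocking matrix T for a given block pattern.

    blocks: list of block lengths, e.g. [1, 2, 4] holds the input constant
    for 1, then 2, then 4 discretization steps (horizon N = 7).
    The full input trajectory is recovered as u_full = T @ u_reduced.
    """
    N = sum(blocks)
    T = np.zeros((N, len(blocks)))
    row = 0
    for j, length in enumerate(blocks):
        T[row:row + length, j] = 1.0
        row += length
    return T

# Example: 7-step horizon reduced to 3 decision inputs.
T = move_blocking_matrix([1, 2, 4])
u_reduced = np.array([0.5, -0.2, 0.1])
u_full = T @ u_reduced   # shape (7,), piecewise constant over the blocks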
Bi-level optimization model is able to capture a wide range of complex learning tasks with practical interest. Due to the witnessed efficiency in solving bi-level programs, gradient-based methods have gained popularity in the machine learning communi ty. In this work, we propose a new gradient-based solution scheme, namely, the Bi-level Value-Function-based Interior-point Method (BVFIM). Following the main idea of the log-barrier interior-point scheme, we penalize the regularized value function of the lower level problem into the upper level objective. By further solving a sequence of differentiable unconstrained approximation problems, we consequently derive a sequential programming scheme. The numerical advantage of our scheme relies on the fact that, when gradient methods are applied to solve the approximation problem, we successfully avoid computing any expensive Hessian-vector or Jacobian-vector product. We prove the convergence without requiring any convexity assumption on either the upper level or the lower level objective. Experiments demonstrate the efficiency of the proposed BVFIM on non-convex bi-level problems.
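
Schematically, and with notation assumed here rather than taken from the paper, writing the upper- and lower-level objectives as $F(x,y)$ and $f(x,y)$, a log-barrier value-function scheme of this kind replaces the bi-level program by a sequence of unconstrained problems of the form
\[
\min_{x,\,y}\; F(x,y) \;-\; \mu \,\log\!\bigl(v_{\sigma}(x) + \epsilon - f(x,y)\bigr),
\qquad
v_{\sigma}(x) \;=\; \min_{y}\; \Bigl\{ f(x,y) + \tfrac{\sigma}{2}\lVert y\rVert^{2} \Bigr\},
\]
where $v_{\sigma}$ is the regularized lower-level value function and the parameters $\mu, \sigma, \epsilon > 0$ are driven to zero; the exact regularization used by BVFIM may differ from this sketch.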
Hongpeng Sun, 2020
Total generalized variation (TGV) is a very powerful and important regularization for various inverse problems and computer vision tasks. In this paper, we propose a semismooth Newton based augmented Lagrangian method to solve this problem. The augmented Lagrangian method (also called the method of multipliers) is widely used for many smooth and nonsmooth variational problems. However, its efficiency usually depends heavily on solving the coupled and nonlinear system as a whole, which is very complicated and highly coupled for total generalized variation. With efficient primal-dual semismooth Newton methods for the complicated linear subproblems involving total generalized variation, we obtain a highly efficient algorithm that is competitive with some efficient first-order methods. With an analysis of the metric subregularity of the corresponding functions, we give both the global convergence and the local linear convergence rate of the proposed augmented Lagrangian method.
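
For reference, the second-order TGV regularizer referred to here is commonly defined (following Bredies, Kunisch and Pock) as
\[
\mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w}\; \alpha_{1} \int_{\Omega} \lvert \nabla u - w \rvert \,\mathrm{d}x \;+\; \alpha_{0} \int_{\Omega} \lvert \mathcal{E}(w) \rvert \,\mathrm{d}x,
\]
where $\mathcal{E}(w) = \tfrac{1}{2}(\nabla w + \nabla w^{\top})$ is the symmetrized gradient and $\alpha_{0}, \alpha_{1} > 0$ are weights; it is the coupling between $u$ and the auxiliary field $w$ that makes the subproblems in the augmented Lagrangian method highly coupled.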
In linear optimization, matrix structure can often be exploited algorithmically. However, beneficial presolving reductions sometimes destroy the special structure of a given problem. In this article, we discuss structure-aware implementations of presolving as part of a parallel interior-point method to solve linear programs with block-diagonal structure, including both linking variables and linking constraints. While presolving reductions are often mathematically simple, their implementation in a high-performance computing environment is a complex endeavor. We report results on the impact, performance, and scalability of the resulting presolving routines on real-world energy system models with up to 700 million nonzero entries in the constraint matrix.
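
The block structure in question can be pictured, in generic notation assumed here rather than taken from the article, as a bordered block-diagonal (arrowhead) constraint matrix
\[
A \;=\;
\begin{pmatrix}
A_{1} &        &        & F_{1} \\
      & \ddots &        & \vdots \\
      &        & A_{K}  & F_{K} \\
B_{1} & \cdots & B_{K}  & B_{0}
\end{pmatrix},
\]
where the diagonal blocks $A_k$ correspond to independent subproblems (e.g. scenarios or time periods of the energy system model), the column border $F_k$ to the linking variables, and the row border $B_k$ to the linking constraints; structure-aware presolving must ensure that its reductions do not destroy this pattern.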
In this chapter, we present some recent progress on the numerics for stochastic distributed parameter control systems, based on the \emph{finite transposition method} introduced in our previous works. We first explain how to reduce the numerics of some stochastic control problems in this respect to the numerics of backward stochastic evolution equations. Then we present a method to find finite transposition solutions to such equations. At last, we give an illuminating example.