
Iterative Linearized Control: Stable Algorithms and Complexity Guarantees

Posted by: Vincent Roulet
Publication date: 2019
Paper language: English





We examine popular gradient-based algorithms for nonlinear control in the light of the modern complexity analysis of first-order optimization algorithms. The examination reveals that the complexity bounds can be clearly stated in terms of calls to a computational oracle related to dynamic programming and implementable by gradient back-propagation using machine learning software libraries such as PyTorch or TensorFlow. Finally, we propose a regularized Gauss-Newton algorithm enjoying worst-case complexity bounds and improved convergence behavior in practice. The software library based on PyTorch is publicly available.
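The computational oracle mentioned above can be illustrated by a hand-rolled reverse-mode (adjoint) pass through a control rollout. The sketch below uses made-up scalar linear dynamics and a quadratic cost purely for illustration; it is not the paper's library, only a minimal instance of computing control gradients by back-propagation through the dynamics:

```python
import numpy as np

def rollout_cost(u, x0=1.0, a=0.9, b=0.5):
    # Forward pass: simulate x_{t+1} = a*x_t + b*u_t and accumulate
    # a quadratic cost J = sum_t (x_{t+1}^2 + u_t^2).
    x, J, xs = x0, 0.0, [x0]
    for ut in u:
        x = a * x + b * ut
        J += x * x + ut * ut
        xs.append(x)
    return J, xs

def gradient_oracle(u, x0=1.0, a=0.9, b=0.5):
    # Backward (adjoint) pass: the same recursion that dynamic
    # programming performs, implemented as reverse-mode
    # differentiation by hand (what PyTorch/TensorFlow automate).
    _, xs = rollout_cost(u, x0, a, b)
    T = len(u)
    grad = np.zeros(T)
    lam = 0.0  # adjoint variable (costate), dJ/dx_{t+1}
    for t in reversed(range(T)):
        lam = 2.0 * xs[t + 1] + a * lam   # propagate through dynamics
        grad[t] = 2.0 * u[t] + b * lam    # dJ/du_t
    return grad
```

In an autodiff framework, `gradient_oracle` collapses to a single call to the library's backward pass on the rollout loss.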




Read also

Sihong Shao, Chuan Yang (2021)
As a judicious correspondence to the classical maxcut, the anti-Cheeger cut has a more balanced structure, but few numerical results on it have been reported so far. In this paper, we propose a continuous iterative algorithm for the anti-Cheeger cut problem through fully using an equivalent continuous formulation. It does not need rounding at all and has the advantages that all subproblems have explicit analytic solutions, the objective function values are monotonically updated, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. It can also be easily combined with maxcut iterations for breaking out of local optima and improving the solution quality, thanks to the similarity between the anti-Cheeger cut problem and the maxcut problem. Numerical experiments on G-set demonstrate the performance.
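The maxcut-style local moves mentioned in the abstract can be illustrated with a generic one-flip local search. This is a hypothetical simplification (plain combinatorial local search on a toy adjacency matrix), not the paper's continuous anti-Cheeger iteration:

```python
def one_flip_maxcut(adj, s):
    # Greedy one-flip local search for maxcut: flip any vertex whose
    # flip strictly increases the cut weight; stop at a local optimum.
    n = len(s)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Gain from flipping i: uncut edges at i become cut (+w),
            # cut edges at i become uncut (-w).
            gain = sum(adj[i][j] * (1 if s[i] == s[j] else -1)
                       for j in range(n) if j != i)
            if gain > 0:
                s[i] = -s[i]
                improved = True
    return s

def cut_value(adj, s):
    # Total weight of edges crossing the +1/-1 partition.
    n = len(s)
    return sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
               if s[i] != s[j])
```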
The fragility of deep neural networks to adversarially-chosen inputs has motivated the need to revisit deep learning algorithms. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. This mechanism can be formulated as a min-max optimization problem, where the adversary seeks to maximize the loss function using an iterative first-order algorithm while the learner attempts to minimize it. However, finding adversarial examples in this way causes excessive computational overhead during training. By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly. In this paper, we provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact oracle methods in optimization. Our analysis sheds light on how the hyperparameters of the algorithm affect its stability and convergence. We support our insights with experiments on a robust classification problem.
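The iterative first-order inner maximization described above is commonly instantiated as projected gradient ascent on the loss. Below is a minimal generic sketch, with a made-up loss gradient, ball radius, and step size (not the paper's specific algorithm):

```python
import numpy as np

def pgd_perturbation(x, grad_loss, eps=0.1, step=0.02, iters=10):
    # Inner maximization of the min-max problem: take signed gradient
    # ascent steps on the loss and project the perturbation back onto
    # the l_inf ball of radius eps around the clean input x.
    delta = np.zeros_like(x)
    for _ in range(iters):
        delta = delta + step * np.sign(grad_loss(x + delta))
        delta = np.clip(delta, -eps, eps)   # projection step
    return x + delta
```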
Stochastic MPECs have found increasing relevance for modeling a broad range of settings in engineering and statistics. Yet, there seem to be no efficient first/zeroth-order schemes equipped with non-asymptotic rate guarantees for resolving even deterministic variants of such problems. We consider MPECs where the parametrized lower-level equilibrium problem is given by a deterministic/stochastic VI problem whose mapping is strongly monotone, uniformly in upper-level decisions. We develop a zeroth-order implicit algorithmic framework by leveraging a locally randomized spherical smoothing scheme. We make three sets of contributions: (i) Convex settings. When the implicit problem is convex and the lower-level decision is obtainable by inexactly solving a strongly monotone stochastic VI to compute an $\epsilon$-solution, we derive iteration complexity guarantees of $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon^2}\right)$ (upper-level) and $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon^2} \ln\left(\tfrac{L_0 n}{\epsilon}\right)\right)$ (lower-level); (ii) Exact oracles and accelerated schemes. When the lower-level problem can be resolved exactly, employing accelerated schemes, the complexity improves to $\mathcal{O}(\tfrac{1}{\epsilon})$ and $\mathcal{O}(\tfrac{1}{\epsilon^{2+\delta}})$, respectively. Notably, this guarantee extends to stochastic MPECs with equilibrium constraints imposed in an almost sure sense; (iii) Nonconvex regimes. When the implicit problem is not necessarily convex and the lower-level problem can be inexactly resolved via a stochastic approximation framework, computing an $\epsilon$-stationary point is equipped with complexity bounds of $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon}\right)$ (upper-level) and $\mathcal{O}\left(\tfrac{L_0^6 n^6}{\epsilon^3}\right)$ (lower-level). We also provide numerical results validating the theoretical findings in this work.
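The locally randomized spherical smoothing idea can be sketched as a zeroth-order gradient estimator: average centered function differences along random unit directions. The radius and sample count below are made up for illustration, and this generic estimator is only a simplified stand-in for the paper's implicit framework:

```python
import numpy as np

def spherical_grad_estimate(f, x, r=1e-2, samples=500, seed=0):
    # Zeroth-order gradient estimate via spherical smoothing:
    # average n/(2r) * (f(x + r v) - f(x - r v)) * v over directions
    # v drawn uniformly from the unit sphere (function values only,
    # no gradient oracle needed).
    rng = np.random.default_rng(seed)
    n = x.size
    g = np.zeros(n)
    for _ in range(samples):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)        # uniform direction on the sphere
        g += (n / (2.0 * r)) * (f(x + r * v) - f(x - r * v)) * v
    return g / samples
```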
When are two algorithms the same? How can we be sure a recently proposed algorithm is novel, and not a minor twist on an existing method? In this paper, we present a framework for reasoning about equivalence between a broad class of iterative algorithms, with a focus on algorithms designed for convex optimization. We propose several notions of what it means for two algorithms to be equivalent, and provide computationally tractable means to detect equivalence. Our main definition, oracle equivalence, states that two algorithms are equivalent if they result in the same sequence of calls to the function oracles (for suitable initialization). Borrowing from control theory, we use state-space realizations to represent algorithms and characterize algorithm equivalence via transfer functions. Our framework can also identify and characterize some algorithm transformations including permutations of the update equations, repetition of the iteration, and conjugation of some of the function oracles in the algorithm. To support the paper, we have developed a software package named Linnaeus that implements the framework to identify other iterative algorithms that are equivalent to an input algorithm. More broadly, this framework and software advance the goal of making mathematics searchable.
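The oracle-equivalence notion above can be illustrated very directly: instrument the gradient oracle, record the query points, and compare the recorded sequences of two algorithms. The scalar example and step size below are made up, and this brute-force trace comparison is only an illustration of the definition, not the paper's transfer-function machinery:

```python
def oracle_call_sequence(step_fn, x0, grad, iters=5):
    # Run an iterative algorithm while recording every point at which
    # the gradient oracle is queried; oracle equivalence compares
    # exactly these sequences for matching initializations.
    calls = []
    def oracle(x):
        calls.append(float(x))
        return grad(x)
    x = x0
    for _ in range(iters):
        x = step_fn(x, oracle)
    return calls
```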
We consider the problem of distributed secondary frequency regulation in power networks such that stability and an optimal power allocation are attained. This is a problem that has been widely studied in the literature, and two main control schemes have been proposed, usually referred to as primal-dual and distributed averaging proportional-integral (DAPI) respectively. However, each has its limitations, with the former requiring knowledge of uncontrollable demand, which can be difficult to obtain in real time, and with the existing literature on the latter being based on static models for generation and demand. We propose a novel control scheme that overcomes these issues by making use of generation measurements in the control policy. In particular, our analysis allows distributed stability and optimality guarantees to be deduced with practical measurement requirements and permits a broad range of linear generation dynamics, that can be of higher order, to be incorporated in the power network. We show how the controller parameters can be selected in a computationally efficient way by solving appropriate linear matrix inequalities (LMIs). Furthermore, we demonstrate how the proposed analysis applies to several examples of turbine governor models. The practicality of our analysis is demonstrated with simulations on the Northeast Power Coordinating Council (NPCC) 140-bus system that verify that our proposed controller achieves convergence to the nominal frequency and an economically optimal power allocation.
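The role of secondary (integral) frequency control described above can be sketched on a toy single-bus model: integral action restores the nominal frequency despite an unknown load. All parameters are made up, and this is a drastic simplification of the paper's distributed, higher-order setting:

```python
def secondary_frequency_sim(p_load=0.5, k=1.0, dt=0.01, steps=4000):
    # Toy swing dynamics w' = -D*w + u - p_load with integral
    # secondary control u' = -k*w. At equilibrium w = 0 (nominal
    # frequency restored) and u = p_load (control covers the load),
    # without the controller knowing p_load in advance.
    D = 1.0
    w, u = 0.0, 0.0   # frequency deviation and control input
    for _ in range(steps):
        dw = -D * w + u - p_load
        du = -k * w
        w += dt * dw   # forward-Euler integration
        u += dt * du
    return w, u
```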