A Flat Triangular Form for Nonlinear Systems with Two Inputs: Necessary and Sufficient Conditions

Posted by Pierre Rouchon
Published: 2013
Language: English

The present work establishes necessary and sufficient conditions for a nonlinear system with two inputs to be described by a specific triangular form. Up to some regularity conditions, this triangular form is flat, which may lead to the discovery of new flat systems. The proof relies on well-known results for driftless systems with two controls (the chained form) and on geometric tools from exterior differential systems. The paper also illustrates the application of its results to an academic example and to a reduced-order model of an induction motor.
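
For context, the chained form mentioned above is a standard normal form for driftless systems with two controls (a well-known fact, not a description of the paper's specific triangular form). In one common convention it reads

$$\dot{x}_1 = u_1, \qquad \dot{x}_2 = u_2, \qquad \dot{x}_3 = x_2\,u_1, \qquad \ldots, \qquad \dot{x}_n = x_{n-1}\,u_1,$$

which is flat with flat output $y = (x_1, x_n)$ wherever $\dot{x}_1 \neq 0$: from $y$ one recovers $u_1 = \dot{x}_1$, then $x_{n-1} = \dot{x}_n/\dot{x}_1$, and so on down the chain.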

Read also

Weiming Xiang (2021)
This paper deals with the stability analysis problem of discrete-time switched linear systems with ranged dwell time. A novel concept called the $L$-switching-cycle is proposed, which contains sequences of multiple activation cycles satisfying the prescribed ranged dwell-time constraint. Based on the $L$-switching-cycle, two sufficient conditions are proposed to ensure global uniform asymptotic stability of discrete-time switched linear systems. The two conditions are equivalent for stability analysis with the same $L$-switching-cycle, and they can be viewed as generalizations of the clock-dependent Lyapunov and multiple Lyapunov function methods, respectively. Furthermore, it is proven that the proposed $L$-switching-cycle eventually yields non-conservative stability analysis, provided a sufficiently long $L$-switching-cycle is adopted. A numerical example is provided to illustrate the theoretical results.
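
As a minimal illustration of the setting (not code from the paper; the two Schur-stable modes and the dwell-time bounds below are hypothetical), the following Python sketch simulates a discrete-time switched linear system whose activation cycles respect a ranged dwell-time constraint:

import numpy as np

# Simulate x[k+1] = A_{sigma(k)} x[k] where each activated mode stays on for
# tau steps, with tau drawn from the ranged dwell-time interval [tau_min, tau_max].
rng = np.random.default_rng(0)
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),   # hypothetical Schur-stable mode 0
     np.array([[0.7, -0.3], [0.1, 0.9]])]  # hypothetical Schur-stable mode 1
tau_min, tau_max = 2, 5                    # ranged dwell-time constraint

x = np.array([1.0, -1.0])
mode = 0
for _ in range(20):                        # 20 activation cycles
    tau = int(rng.integers(tau_min, tau_max + 1))
    for _ in range(tau):
        x = A[mode] @ x
    mode = 1 - mode                        # switch to the other mode
print("state norm after 20 cycles:", np.linalg.norm(x))
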
Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent were proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we study convergence by focusing on a property that we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition for the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient Lipschitz function is concavifiable. A constant called the concavifier, analogous to the gradient Lipschitz constant, is derived and is indicative of the optimal step size. As an application, we demonstrate the utility of the concavifier in the convergence of gradient descent through an example inspired by neural networks, deriving bounds on the concavifier to obtain a fixed step size for a single-hidden-layer ReLU network.
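
A small numerical sketch of the idea, under assumed definitions (the objective f and the candidate concavifier c below are hypothetical choices, not taken from the paper): the concavifier must satisfy the upper quadratic approximation, and gradient descent with step size 1/c then decreases the objective monotonically.

import numpy as np

# Hypothetical scalar objective and its gradient; sup f'' = 2, attained at w = 0.
f = lambda w: np.log(1.0 + w ** 2)
grad = lambda w: 2.0 * w / (1.0 + w ** 2)

# Check the upper quadratic approximation
#   f(y) <= f(x) + f'(x)(y - x) + (c/2)(y - x)^2
# on a grid, with candidate concavifier c = 2.
c = 2.0
xs = np.linspace(-3.0, 3.0, 61)
ok = all(f(y) <= f(x) + grad(x) * (y - x) + 0.5 * c * (y - x) ** 2 + 1e-12
         for x in xs for y in xs)
print("upper quadratic bound holds on the grid:", ok)

# With step size 1/c, each gradient step decreases f; here the iterates
# approach the stationary point w = 0 even though f is not convex.
w = 2.5
for _ in range(50):
    w -= (1.0 / c) * grad(w)
print("final iterate:", w)
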
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in control theory, machine learning, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm (the sum of the singular values) of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic finds the minimum-rank solution of a linear constraint set. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our conditions for success with overwhelming probability, provided the number of constraints is appropriately large. Finally, we give empirical evidence that these probabilistic bounds accurately predict the heuristic's performance in non-asymptotic scenarios.
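
The nuclear-norm heuristic itself is easy to prototype with an off-the-shelf convex solver. A minimal sketch (the sizes, random measurement ensemble, and data below are hypothetical, not the paper's distribution):

import numpy as np
import cvxpy as cp

# Build a hypothetical low-rank ground truth and random linear measurements
# b_i = <A_i, X0> (entrywise inner products with random matrices A_i).
rng = np.random.default_rng(0)
n, r, m = 10, 2, 60                         # matrix size, true rank, number of measurements
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = [rng.standard_normal((n, n)) for _ in range(m)]
b = [float(np.sum(Ai * X0)) for Ai in A]

# Nuclear-norm heuristic: minimize the sum of singular values subject to the
# same affine measurements.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print("relative recovery error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
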
Daniel Levy, John C. Duchi (2019)
We study the impact of the constraint set and gradient geometry on the convergence of online and stochastic methods for convex optimization, providing a characterization of the geometries for which stochastic gradient and adaptive gradient methods are (minimax) optimal. In particular, we show that when the constraint set is quadratically convex, diagonally preconditioned stochastic gradient methods are minimax optimal. We further provide a converse showing that when the constraints are not quadratically convex (for example, any $\ell_p$-ball for $p < 2$), these methods are far from optimal. Based on this, we give concrete recommendations for when one should use adaptive, mirror, or stochastic gradient methods.
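
For reference, a diagonally preconditioned stochastic gradient method of the kind discussed above can be sketched in a few lines (an AdaGrad-style scheme on a hypothetical noisy quadratic; not the paper's setup):

import numpy as np

# AdaGrad-style diagonal preconditioning on a hypothetical noisy quadratic
# 0.5 * ||w - w_star||^2: per-coordinate step sizes shrink with the running
# sum of squared gradients.
rng = np.random.default_rng(0)
d, T, eta = 5, 2000, 0.5
w_star = rng.standard_normal(d)             # hypothetical target

w = np.zeros(d)
G = np.zeros(d)                             # running sum of squared gradients
for _ in range(T):
    g = (w - w_star) + 0.1 * rng.standard_normal(d)   # noisy gradient
    G += g * g
    w -= eta * g / (np.sqrt(G) + 1e-8)      # diagonal preconditioning
print("distance to target:", np.linalg.norm(w - w_star))
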
This paper is concerned with the stabilization problem of singular fractional-order systems with order $\alpha \in (0,2)$. In addition to the sufficient and necessary condition for observer-based control, a sufficient and necessary condition for output feedback control is proposed by adopting a matrix-variable decoupling technique. The developed results are more general and efficient than existing works, especially for the output feedback case. Finally, two illustrative examples are given to verify the effectiveness and potential of the proposed approaches.
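
For the non-singular case, the standard eigenvalue test for fractional-order LTI systems (a Matignon-type condition, a known result rather than this paper's contribution) is easy to check numerically; the system matrix below is hypothetical:

import numpy as np

# Matignon-type test for a non-singular fractional-order LTI system
# D^alpha x = A x, 0 < alpha < 2: asymptotically stable iff every eigenvalue
# lambda of A satisfies |arg(lambda)| > alpha * pi / 2.
def fractional_stable(A, alpha):
    eig = np.linalg.eigvals(A)
    return bool(np.all(np.abs(np.angle(eig)) > alpha * np.pi / 2))

A = np.array([[0.2, -1.0], [1.0, 0.2]])     # hypothetical matrix, eigenvalues 0.2 +/- 1j
print(fractional_stable(A, 0.5))            # True: stable despite Re(lambda) > 0
print(fractional_stable(A, 1.5))            # False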