Mixed monotone systems form an important class of nonlinear systems that have recently received attention in the abstraction-based control design area. Slightly different definitions exist in the literature, and it remains a challenge to verify mixed monotonicity of a system in general. In this paper, we first clarify the relation between different existing definitions of mixed monotone systems, and then give two sufficient conditions for mixed monotone functions defined on Euclidean space. These sufficient conditions are more general than the ones from the existing control literature, and they suggest that mixed monotonicity is a very generic property. Some discussion is provided on the computational usefulness of the proposed sufficient conditions.
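The idea behind such sufficient conditions can be illustrated with a standard decomposition construction (a generic sketch, not the paper's specific conditions; the function f and bound c below are illustrative): a map whose derivative is bounded below by -c on a domain embeds into a decomposition function that is nondecreasing in its first argument and nonincreasing in its second.

```python
import numpy as np

def f(x):
    # example map; f'(x) = 1 - 3x^2 lies in [-2, 1] on the domain [-1, 1]
    return x - x**3

c = 2.0  # bound such that f'(x) >= -c on the domain

def g(x, y):
    # decomposition function: g(x, x) = f(x),
    # dg/dx = f'(x) + c >= 0 (increasing), dg/dy = -c <= 0 (decreasing)
    return f(x) + c * x - c * y

xs = np.linspace(-1.0, 1.0, 201)
# f is recovered on the diagonal of g
assert np.allclose([g(x, x) for x in xs], f(xs))
# numeric check of monotonicity in the first argument
dg_dx = np.gradient([g(x, 0.3) for x in xs], xs)
print(np.all(dg_dx >= -1e-8))
```

Any function with a bounded Jacobian admits such a decomposition, which is one reason mixed monotonicity is such a generic property.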
This paper deals with the stability analysis problem of discrete-time switched linear systems with ranged dwell time. A novel concept called the L-switching-cycle is proposed, which consists of sequences of multiple activation cycles satisfying the prescribed ranged dwell time constraint. Based on the L-switching-cycle, two sufficient conditions are proposed to ensure the global uniform asymptotic stability of discrete-time switched linear systems. The two conditions are shown to be equivalent in stability analysis when the same L-switching-cycle is used. These two sufficient conditions can be viewed as generalizations of the clock-dependent Lyapunov function and multiple Lyapunov function methods, respectively. Furthermore, it is proven that the stability analysis becomes nonconservative as long as a sufficiently long L-switching-cycle is adopted. A numerical example is provided to illustrate our theoretical results.
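The combinatorics that an L-switching-cycle organizes can be seen in a brute-force numeric check (an illustrative sketch, not the paper's Lyapunov-based conditions; the matrices and dwell-time range are made up): enumerate every mode sequence and every admissible dwell time over L cycles, and verify that each resulting product matrix is a contraction.

```python
import itertools
import numpy as np

# two Schur-stable modes of a discrete-time switched linear system (illustrative)
A = {1: np.array([[0.6, 0.3], [0.0, 0.5]]),
     2: np.array([[0.5, 0.0], [0.4, 0.6]])}
tau_min, tau_max = 2, 3   # ranged dwell time constraint
L = 2                     # length of the switching cycle

worst = 0.0
for modes in itertools.product(A.keys(), repeat=L):
    for taus in itertools.product(range(tau_min, tau_max + 1), repeat=L):
        # state transition matrix over one admissible L-switching-cycle
        P = np.eye(2)
        for m, t in zip(modes, taus):
            P = np.linalg.matrix_power(A[m], t) @ P
        worst = max(worst, max(abs(np.linalg.eigvals(P))))

print(worst < 1.0)  # every admissible cycle product is a contraction
```

The number of cycle products grows exponentially in L, which is why the paper's LMI-type sufficient conditions, rather than enumeration, are the practical route; enumeration only illustrates what a longer L buys in reduced conservatism.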
In this paper, a link between monotonicity of deterministic dynamical systems and propagation of order by Markov processes is established. Order propagation has received considerable attention in the literature; however, this notion is still not fully understood. The main contribution of this paper is a study of order propagation in the deterministic setting, which can potentially provide new techniques for analysis in the stochastic one. We take a close look at the propagation of the so-called increasing and increasing convex orders. Infinitesimal characterisations of these orders are derived, which resemble the well-known Kamke conditions for monotonicity. It is shown that propagation of the increasing order is equivalent to standard monotonicity, while the class of systems propagating the increasing convex order is equivalent to the class of monotone systems with convex vector fields. The paper is concluded by deriving a novel result on order-propagating diffusion processes and an application of this result to biological processes.
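The Kamke-type condition mentioned above can be checked numerically on a small example (a generic illustration with a made-up cooperative vector field, not the paper's characterisations): if all off-diagonal Jacobian entries are nonnegative, trajectories preserve the componentwise order of their initial conditions.

```python
import numpy as np

def f(x):
    # cooperative vector field: Jacobian [[-1, 1], [1, -2]] has
    # nonnegative off-diagonal entries (Kamke condition)
    return np.array([-x[0] + x[1], x[0] - 2.0 * x[1]])

def euler(x0, h=0.01, steps=500):
    # forward-Euler flow; with this small h the update map x + h f(x)
    # is itself order-preserving (I + hJ has nonnegative entries)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + h * f(x)
    return x

xa = euler([0.0, 0.0])   # equilibrium trajectory
xb = euler([0.5, 1.0])   # ordered-above initial condition
print(np.all(xa <= xb + 1e-12))  # componentwise order is propagated
```

The same diagonal test fails for the increasing convex order, which additionally requires convexity of the vector field, consistent with the equivalence stated in the abstract.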
In this paper we study sufficient conditions for metric subregularity of a set-valued map which is the sum of a single-valued continuous map and a locally closed subset. First we derive a sufficient condition for metric subregularity which is weaker than the so-called first-order sufficient condition for metric subregularity (FOSCMS) by adding an extra sequential condition. Then we introduce directional versions of quasi-normality and pseudo-normality which are stronger than the new weak sufficient condition for metric subregularity but weaker than the classical quasi-normality and pseudo-normality, respectively. Moreover, we introduce a nonsmooth version of the second-order sufficient condition for metric subregularity and show that it implies the new sufficient condition for metric subregularity. An example is used to illustrate that the directional pseudo-normality can be weaker than FOSCMS. For the class of set-valued maps where the single-valued mapping is affine and the abstract set is the union of finitely many convex polyhedral sets, we show that pseudo-normality, and hence directional pseudo-normality, holds automatically at each point of the graph. Finally we apply our results to complementarity and Karush-Kuhn-Tucker systems.
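The property being certified can be seen in a one-dimensional numeric contrast (a textbook illustration, not the paper's set-valued setting): metric subregularity of F at a solution of F(x) = 0 asks for a constant kappa with d(x, F^{-1}(0)) <= kappa * d(0, F(x)) locally, which holds for F(x) = x but fails for G(x) = x^2 at the origin.

```python
import numpy as np

x = np.linspace(1e-6, 1e-1, 1000)

# F(x) = x: the error bound |x| <= kappa * |F(x)| holds with kappa = 1
ratio_F = np.abs(x) / np.abs(x)

# G(x) = x^2: |x| <= kappa * x^2 would force kappa >= 1/|x|,
# which blows up near the solution x = 0 -> no finite kappa works
ratio_G = np.abs(x) / (x**2)

print(ratio_F.max(), ratio_G.max())
```

Sufficient conditions such as FOSCMS, quasi-normality, and pseudo-normality are verifiable criteria that certify the existence of such a kappa without computing it directly.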
Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent have been proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we look at convergence analysis by focusing on a property we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition to satisfy the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient Lipschitz function satisfies concavifiability. A constant known as the concavifier, analogous to the gradient Lipschitz constant, is derived, which is indicative of the optimal step size. As an application, we demonstrate the utility of the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single hidden layer ReLU network.
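The role of the upper quadratic approximation can be sketched on a toy objective (a classical descent-lemma illustration using the gradient Lipschitz constant in place of the paper's concavifier; the function below is made up): with step size 1/L, every gradient step is guaranteed to decrease the objective.

```python
import math

def f(x):    return x * x + math.sin(x)
def grad(x): return 2 * x + math.cos(x)

# |f''(x)| = |2 - sin x| <= 3, so gradients are 3-Lipschitz;
# L plays the role the concavifier plays in the paper's analysis
L = 3.0

x, vals = 3.0, []
for _ in range(100):
    vals.append(f(x))
    x -= (1.0 / L) * grad(x)   # fixed step 1/L
vals.append(f(x))

# the upper quadratic approximation guarantees monotone descent:
# f(x - grad/L) <= f(x) - |grad|^2 / (2L)
monotone = all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
print(monotone)
```

A tighter concavifier than the worst-case Lipschitz constant would justify a larger step size, which is the practical payoff the abstract points to for the ReLU network example.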
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in control theory, machine learning, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic algorithm replaces the rank function with the nuclear norm (the sum of the singular values) of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic successfully finds the minimum rank solution of a linear constraint set. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our conditions for success with overwhelming probability, provided the number of constraints is appropriately large. Finally, we give empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic's performance in non-asymptotic scenarios.
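The two objects the heuristic trades between can be computed directly from a singular value decomposition (a minimal sketch of the surrogate itself, not of the paper's success conditions; the random matrix is illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
# a rank-2 matrix built as a sum of two outer products
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))

s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-10))    # rank = number of nonzero singular values
nuclear = float(np.sum(s))       # nuclear norm = sum of singular values

print(rank, nuclear > 0)
```

The rank counts nonzero singular values while the nuclear norm sums them, so the nuclear norm is to rank what the l1 norm is to sparsity: a convex surrogate that can be minimized efficiently, with the paper characterizing exactly when the two minimizers coincide.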