Brockett's necessary condition yields a test for determining whether a system can be made to stabilize about an operating point via continuous, purely state-dependent feedback. For many real-world systems, however, one wants to stabilize sets more general than a single point. One also wants to control such systems to operate safely by making obstacles and other dangerous sets repelling. We generalize Brockett's necessary condition to the case of stabilizing general compact subsets having nonzero Euler characteristic. Using this generalization, we also formulate a necessary condition for the existence of safe control laws. We illustrate the theory in concrete examples and for some general classes of systems, including a broad class of nonholonomically constrained Lagrangian systems. We also show that, in the special case of stabilizing a point, the specialization of our general stabilizability test is stronger than Brockett's.
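For context, the point-stabilization case generalized above can be sketched as follows. This is the standard textbook statement of Brockett's condition, not a formulation taken from the paper:

```latex
% Brockett's necessary condition (standard statement, for context):
% consider \dot{x} = f(x,u) with f continuous and f(x_0, u_0) = 0.
% If there exists a continuous feedback u = k(x) making x_0 locally
% asymptotically stable, then the image under f of every neighborhood
% N of (x_0, u_0) contains a neighborhood of the origin:
\exists\, \varepsilon > 0 : \quad
\{\, v \in \mathbb{R}^n : \|v\| < \varepsilon \,\} \subseteq f(N).
```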
The bilevel program is an optimization problem whose constraints involve the solutions of a parametric optimization problem. It is well known that the value function reformulation provides an equivalent single-level optimization problem, but the result is a nonsmooth optimization problem that never satisfies the usual constraint qualifications, such as the Mangasarian-Fromovitz constraint qualification (MFCQ). In this paper we show that even the first-order sufficient condition for metric subregularity (which is in general weaker than MFCQ) fails at every feasible point of the bilevel program. We introduce the concept of a directional calmness condition and show that, under the directional calmness condition, a directional necessary optimality condition holds. While the directional optimality condition is in general sharper than the non-directional one, the directional calmness condition is in general weaker than the classical calmness condition and hence is more likely to hold. We perform a directional sensitivity analysis of the value function and propose directional quasi-normality as a sufficient condition for directional calmness. An example is given to show that the directional quasi-normality condition may hold for the bilevel program.
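A minimal sketch of the value function reformulation referred to above; the notation is illustrative and may differ from the paper's:

```latex
% Bilevel program (upper-level variables x, lower-level variables y):
\min_{x,y} \; F(x,y)
\quad \text{s.t.} \quad
y \in \operatorname*{argmin}_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\}.
% With the value function
V(x) := \min_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\},
% this is equivalent to the single-level program
\min_{x,y} \; F(x,y)
\quad \text{s.t.} \quad
f(x,y) - V(x) \le 0, \qquad g(x,y) \le 0.
% Since V(x) \le f(x,y) whenever g(x,y) \le 0, the constraint
% f(x,y) - V(x) \le 0 holds with equality at every feasible point;
% this always-active inequality is why MFCQ fails everywhere.
```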
Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent have been proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we analyze convergence by focusing on a property that we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition for the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any function with Lipschitz gradient is concavifiable. A constant known as the concavifier, analogous to the gradient Lipschitz constant, is derived and is indicative of the optimal step size. As an application, we demonstrate the utility of finding the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single-hidden-layer ReLU network.
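A minimal numerical sketch of the upper quadratic approximation and the resulting per-step decrease, for the simple gradient-Lipschitz function f(x) = x^2. The function, constant, and step size are illustrative assumptions, not taken from the paper:

```python
# Sketch: the upper quadratic approximation (descent lemma) for f(x) = x^2.
# Its gradient f'(x) = 2x is Lipschitz with constant L = 2, so L also
# serves as a valid quadratic upper-bound constant here; step size 1/L.

def f(x):
    return x ** 2

def grad(x):
    return 2.0 * x

L = 2.0  # gradient Lipschitz constant of f

def upper_quadratic(x, y):
    # f(y) <= f(x) + f'(x)(y - x) + (L/2)(y - x)^2
    return f(x) + grad(x) * (y - x) + (L / 2.0) * (y - x) ** 2

x = 3.0
for _ in range(20):
    y = x - grad(x) / L                       # gradient step, step size 1/L
    assert f(y) <= upper_quadratic(x, y) + 1e-12  # upper quadratic holds
    assert f(y) <= f(x)                       # objective decreases each update
    x = y
```

With step size exactly 1/L the iterate reaches the minimizer x = 0, and both assertions hold at every iteration, illustrating why such a constant dictates the usable fixed step size.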
At the quantum level, feedback loops have to take measurement back-action into account. We present here the structure of Markovian models including such back-action and sketch two stabilization methods: measurement-based feedback, where an open quantum system is stabilized by a classical controller; and coherent or autonomous feedback, where a quantum system is stabilized by a quantum controller with decoherence (reservoir engineering). We first explain these models and methods for the photon-box experiments realized in the group of Serge Haroche (Nobel Prize 2012). We then present these models and methods for general open quantum systems.
We are interested in the control of forming processes for nonlinear material models. To develop an online control, we derive a novel feedback law and prove a stabilization result. The derivation of the feedback control law is based on a Lyapunov analysis of the time-dependent viscoplastic material models and uses the structure of the underlying partial differential equation in the design of the feedback control. Analytically, exponential decay in time of perturbations to desired stress-strain states is shown. We test the new control law numerically by coupling it to a finite element simulation of a deformation process.
This paper introduces and studies the optimal control problem with equilibrium constraints (OCPEC). The OCPEC is an optimal control problem with a mixed state-and-control equilibrium constraint formulated as a complementarity constraint, and it can be seen as a dynamic mathematical program with equilibrium constraints. It provides a powerful modeling paradigm for many practical problems, such as bilevel optimal control problems and dynamic principal-agent problems. In this paper, we propose weak, Clarke, Mordukhovich and strong stationarity conditions for the OCPEC. Moreover, we give sufficient conditions ensuring that local minimizers of the OCPEC are Fritz John type weakly stationary, Mordukhovich stationary and strongly stationary, respectively. Unlike the situation under Pontryagin's maximum principle for the classical optimal control problem with equality and inequality constraints, a counterexample shows that for general OCPECs there may exist two sets of multipliers for the complementarity constraints. A condition under which these two sets of multipliers coincide is given.
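A minimal sketch of what a mixed state-and-control complementarity constraint looks like in such a problem; the notation is illustrative and the paper's precise formulation may differ:

```latex
% OCPEC in complementarity form (illustrative notation):
\min_{x,u} \; \int_0^T \ell\bigl(x(t), u(t)\bigr)\, dt
\quad \text{s.t.} \quad
\dot{x}(t) = \phi\bigl(x(t), u(t)\bigr),
\qquad
0 \le G\bigl(x(t), u(t)\bigr) \;\perp\; H\bigl(x(t), u(t)\bigr) \ge 0,
% where a \perp b means a^\top b = 0: at each time, every component
% pair (G_i, H_i) has at least one member equal to zero. The union
% structure of this constraint is what allows distinct multiplier
% sets for G and H.
```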