
Escaping spurious local minimum trajectories in online time-varying nonconvex optimization

Added by Yuhao Ding
Publication date: 2019
Research language: English





A major limitation of online algorithms that track the optimizers of time-varying nonconvex optimization problems is that they focus on a specific local minimum trajectory, which may lead to poor spurious local solutions. In this paper, we show that the natural temporal variation may help simple online tracking methods find and track time-varying global minima. To this end, we investigate the properties of a time-varying projected gradient flow system with inertia, which can be regarded as the continuous-time limit of (1) the optimality conditions for a discretized sequential optimization problem with proximal regularization and (2) the online tracking scheme. We introduce the notion of the dominant trajectory and show that the inherent temporal variation could reshape the landscape of the Lagrange functional and help a proximal algorithm escape spurious local minimum trajectories when the global minimum trajectory is dominant. For a problem with a twice continuously differentiable objective function and constraints, sufficient conditions are derived to guarantee that, no matter how a local search method is initialized, it will track a time-varying global solution after some time. The results are illustrated on a benchmark example with many local minima.
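To make the tracking scheme concrete, the sketch below implements a sequential proximal-regularized tracking step of the kind referred to in the abstract: at each sampled time the method approximately minimizes the current objective plus a proximal term centered at the previous iterate. The one-dimensional benchmark f, its temporal drift, and all step sizes are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def f(x, t):
    # Hypothetical 1-D benchmark: a slowly drifting quadratic bowl plus a
    # sinusoidal ripple that creates many spurious local minima.
    c = 2.0 * np.sin(0.5 * t)               # time-varying location of the global valley
    return (x - c) ** 2 + 0.8 * np.sin(8.0 * x)

def grad_f(x, t):
    c = 2.0 * np.sin(0.5 * t)
    return 2.0 * (x - c) + 6.4 * np.cos(8.0 * x)

def proximal_tracking(x0, times, eta=0.5, inner_steps=50, lr=0.02):
    """At each sampled time t_k, approximately solve
    min_x f(x, t_k) + (1 / (2 * eta)) * (x - x_prev)**2
    by a few gradient steps warm-started at the previous iterate."""
    x = x0
    trajectory = []
    for t in times:
        x_prev = x
        for _ in range(inner_steps):
            x -= lr * (grad_f(x, t) + (x - x_prev) / eta)
        trajectory.append(x)
    return np.array(trajectory)

if __name__ == "__main__":
    times = np.linspace(0.0, 20.0, 400)
    traj = proximal_tracking(x0=-1.5, times=times)
    print("final iterate:", traj[-1], "final objective:", f(traj[-1], times[-1]))
```

With a moderate proximal weight the iterate tends to stay in its current local valley, while the drifting landscape can carry it toward the globally optimal region; this is the escape phenomenon the paper formalizes through the notion of a dominant trajectory.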



Related research

This paper considers a time-varying optimization problem associated with a network of systems, with each of the systems shared by (and affecting) a number of individuals. The objective is to minimize cost functions associated with the individuals' preferences, which are unknown, subject to time-varying constraints that capture physical or operational limits of the network. To this end, the paper develops a distributed online optimization algorithm with concurrent learning of the cost functions. The cost functions are learned on the fly from the users' feedback (provided at irregular intervals) by leveraging tools from shape-constrained Gaussian processes. The online algorithm is based on a primal-dual method and acts effectively in a closed-loop fashion where: i) users' feedback is utilized to estimate the cost, and ii) measurements from the network are utilized in the algorithmic steps to bypass the need for sensing of (unknown) exogenous inputs of the network. The performance of the algorithm is analyzed in terms of dynamic network regret and constraint violation. Numerical examples are presented in the context of real-time optimization of distributed energy resources.
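The primal-dual structure mentioned in this abstract can be sketched generically as follows. This is a standard online projected primal-dual iteration for time-varying constrained problems, not the paper's full method: the feedback-based cost learning with shape-constrained Gaussian processes is not modeled, and the example cost and constraint functions are hypothetical.

```python
import numpy as np

def online_primal_dual(grad_f_t, g_t, jac_g_t, x0, lam0, steps, alpha=0.05,
                       project_x=lambda x: x):
    """Generic online primal-dual iteration for time-varying problems
    min_x f_t(x)  s.t.  g_t(x) <= 0."""
    x, lam = np.array(x0, float), np.array(lam0, float)
    for t in range(steps):
        grad_L = grad_f_t(x, t) + jac_g_t(x, t).T @ lam   # gradient of the Lagrangian in x
        x = project_x(x - alpha * grad_L)                  # primal descent step
        lam = np.maximum(0.0, lam + alpha * g_t(x, t))     # dual ascent step, projected onto R_+
    return x, lam

if __name__ == "__main__":
    # Illustrative time-varying quadratic cost with one moving linear constraint.
    grad_f_t = lambda x, t: 2.0 * (x - np.array([np.sin(0.1 * t), 1.0]))
    g_t = lambda x, t: np.array([x.sum() - (1.5 + 0.5 * np.cos(0.1 * t))])
    jac_g_t = lambda x, t: np.ones((1, 2))
    x, lam = online_primal_dual(grad_f_t, g_t, jac_g_t, x0=[0.0, 0.0], lam0=[0.0], steps=500)
    print("primal iterate:", x, "dual iterate:", lam)
```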
Distributed optimization is concerned with using local computation and communication to realize a global aim of optimizing the sum of local objective functions. It has gained wide attention for a variety of applications in networked systems. This paper addresses a class of constrained distributed nonconvex optimization problems involving univariate objective functions, aiming to achieve global optimization without requiring local evaluations of gradients at every iteration. We propose a novel algorithm named CPCA, which combines Chebyshev polynomial approximation, average consensus, and polynomial optimization. The proposed algorithm is i) able to obtain $\epsilon$-globally optimal solutions for any arbitrarily small given accuracy $\epsilon$, ii) efficient in terms of both zeroth-order queries (i.e., evaluations of function values) and inter-agent communication, and iii) able to terminate in a distributed fashion once the specified precision requirement is met. The key insight is to use polynomial approximations as substitutes for general objective functions and to solve an easier approximate version of the original problem. Owing to the nice analytic properties of polynomials, this approximation not only facilitates efficient global optimization but also allows the design of gradient-free iterations that reduce the cumulative cost of queries and achieve geometric convergence when nonconvex problems are solved. We provide a comprehensive analysis of the accuracy and complexities of the proposed algorithm.
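As a single-agent illustration of the polynomial-surrogate idea (the average-consensus step that makes the scheme distributed is omitted), the sketch below fits a Chebyshev polynomial to a univariate nonconvex function and globally minimizes the surrogate by inspecting its critical points; the test function and polynomial degree are assumptions for illustration only.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def cheb_global_min(func, a, b, deg=60):
    """Approximate a univariate function by a Chebyshev polynomial on [a, b]
    and globally minimize the surrogate by checking the critical points
    (roots of the derivative) together with the interval endpoints."""
    p = Chebyshev.interpolate(func, deg, domain=[a, b])    # polynomial surrogate
    crit = p.deriv().roots()
    crit = crit.real[np.abs(crit.imag) < 1e-10]            # keep real critical points
    candidates = np.concatenate([crit[(crit >= a) & (crit <= b)], [a, b]])
    values = p(candidates)
    i = np.argmin(values)
    return candidates[i], values[i]

if __name__ == "__main__":
    # Illustrative nonconvex univariate objective with several local minima.
    f = lambda x: np.sin(3.0 * x) + 0.1 * x ** 2
    x_star, f_star = cheb_global_min(f, -5.0, 5.0)
    print("approximate global minimizer:", x_star, "surrogate value:", f_star)
```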
There is an increasing interest in designing differentiators that converge exactly before a prespecified time regardless of the initial conditions, i.e., that are fixed-time convergent with a predefined Upper Bound of their Settling Time (UBST), due to their ability to solve estimation and control problems with time constraints. However, for the class of signals with a known bound on their $(n+1)$-th time derivative, the existing design methodologies are either only available for first-order differentiators, yielding a very conservative UBST, or result in gains that tend to infinity at the convergence time. Here, we introduce a new methodology based on time-varying gains to design arbitrary-order exact differentiators with a predefined UBST. This UBST is set a priori as one parameter of the algorithm. Our approach guarantees that the UBST can be set arbitrarily tight, and we also provide sufficient conditions to obtain exact convergence while maintaining bounded time-varying gains. Additionally, we provide necessary and sufficient conditions under which our approach yields error dynamics with a uniformly Lyapunov stable equilibrium. Our results show how time-varying gains offer a general and flexible methodology to design algorithms with a predefined UBST.
We present a method to over-approximate reachable tubes over compact time-intervals, for linear continuous-time, time-varying control systems whose initial states and inputs are subject to compact convex uncertainty. The method uses numerical approximations of transition matrices, is convergent of first order, and assumes the ability to compute with compact convex sets in finite dimension. We also present a variant that applies to the case of zonotopic uncertainties, uses only linear algebraic operations, and yields zonotopic over-approximations. The performance of the latter variant is demonstrated on an example.
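A minimal sketch of the zonotopic variant's building blocks is given below: linear maps and Minkowski sums of zonotopes reduce to operations on centers and generator matrices. The propagation rule shown is a crude fixed-step over-approximation for a time-invariant system; it omits the correction terms needed to cover inter-sample behaviour and time-varying dynamics, and the dynamics and uncertainty sets are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

class Zonotope:
    """Zonotope {c + G @ xi : ||xi||_inf <= 1} with center c and generator matrix G."""
    def __init__(self, c, G):
        self.c = np.asarray(c, float)
        self.G = np.asarray(G, float)

    def linear_map(self, A):
        return Zonotope(A @ self.c, A @ self.G)

    def minkowski_sum(self, other):
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

def propagate(A, X0, U, dt, steps):
    """Crude discrete-time over-approximation of the states reachable under
    x' = A x + u, u in U:  X_{k+1} = e^{A dt} X_k (+) dt * U."""
    Phi = expm(A * dt)
    U_dt = Zonotope(dt * U.c, dt * U.G)
    sets = [X0]
    for _ in range(steps):
        sets.append(sets[-1].linear_map(Phi).minkowski_sum(U_dt))
    return sets

if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])          # illustrative time-invariant dynamics
    X0 = Zonotope([1.0, 0.0], 0.1 * np.eye(2))        # initial-state uncertainty
    U = Zonotope([0.0, 0.0], 0.05 * np.eye(2))        # input uncertainty
    sets = propagate(A, X0, U, dt=0.05, steps=100)
    print("final center:", sets[-1].c, "generators:", sets[-1].G.shape[1])
```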
We study predictive control in a setting where the dynamics are time-varying and linear, and the costs are time-varying and well-conditioned. At each time step, the controller receives the exact predictions of costs, dynamics, and disturbances for the future $k$ time steps. We show that when the prediction window $k$ is sufficiently large, predictive control is input-to-state stable and achieves a dynamic regret of $O(\lambda^k T)$, where $\lambda < 1$ is a positive constant. This is the first dynamic regret bound on the predictive control of linear time-varying systems. Under more assumptions on the terminal costs, we also show that predictive control obtains the first competitive bound for the control of linear time-varying systems: $1 + O(\lambda^k)$. Our results are derived using a novel proof framework based on a perturbation bound that characterizes how a small change to the system parameters impacts the optimal trajectory.
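A sketch of one receding-horizon step under exact $k$-step predictions is given below, written with cvxpy for brevity; the LTV system, costs, and horizon are hypothetical, and this only illustrates the predictive-control setting the abstract analyzes, not the paper's regret analysis.

```python
import numpy as np
import cvxpy as cp

def predictive_control_step(x0, A_seq, B_seq, w_seq, Q, R, Qf):
    """Solve the k-step finite-horizon problem using exact predictions of the
    time-varying dynamics (A_t, B_t) and disturbances w_t, then return the
    first control input (receding-horizon predictive control)."""
    k = len(A_seq)
    n, m = B_seq[0].shape
    x = cp.Variable((k + 1, n))
    u = cp.Variable((k, m))
    cost = cp.quad_form(x[k], Qf)
    constraints = [x[0] == x0]
    for i in range(k):
        cost += cp.quad_form(x[i], Q) + cp.quad_form(u[i], R)
        constraints.append(x[i + 1] == A_seq[i] @ x[i] + B_seq[i] @ u[i] + w_seq[i])
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[0]

if __name__ == "__main__":
    n, m, k, T = 2, 1, 10, 50
    rng = np.random.default_rng(0)
    Q, R, Qf = np.eye(n), np.eye(m), 5.0 * np.eye(n)
    x = np.array([1.0, 0.0])
    for t in range(T):
        # Exact k-step predictions of an illustrative slowly varying LTV system.
        A_seq = [np.array([[1.0, 0.1], [0.0, 1.0 - 0.01 * np.sin(0.1 * (t + i))]]) for i in range(k)]
        B_seq = [np.array([[0.0], [0.1]]) for _ in range(k)]
        w_seq = [0.01 * rng.standard_normal(n) for _ in range(k)]
        u0 = predictive_control_step(x, A_seq, B_seq, w_seq, Q, R, Qf)
        x = A_seq[0] @ x + B_seq[0] @ u0 + w_seq[0]
    print("state after", T, "steps:", x)
```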
