We describe a new approach to certifying the global nonnegativity of multivariate polynomials by solving hyperbolic optimization problems---a class of convex optimization problems that generalize semidefinite programs. We show how to produce families of nonnegative polynomials (which we call hyperbolic certificates of nonnegativity) from any hyperbolic polynomial. We investigate the pairs $(n,d)$ for which there is a hyperbolic polynomial of degree $d$ in $n$ variables such that an associated hyperbolic certificate of nonnegativity is not a sum of squares. If $d \geq 4$, we show that this occurs whenever $n \geq 4$. In the degree three case, we find an explicit hyperbolic cubic in $43$ variables that gives hyperbolic certificates that are not sums of squares. As a corollary, we obtain the first known hyperbolic cubic no power of which has a definite determinantal representation. Our approach also allows us to show that, given a cubic $p$ and a direction $e$, the decision problem ``Is $p$ hyperbolic with respect to $e$?'' is co-NP-hard.
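As a quick illustration of the central definition: a polynomial $p$ is hyperbolic with respect to a direction $e$ if $p(e) \neq 0$ and every univariate restriction $t \mapsto p(x + te)$ has only real roots. The following minimal sketch spot-checks this condition numerically at random points $x$; it is a necessary-condition test only, not the paper's certification method, and the function and variable names are illustrative.

```python
import numpy as np

def restriction_coeffs(p, x, e, deg):
    """Coefficients of the univariate restriction t -> p(x + t*e),
    recovered by interpolation at deg+1 sample points."""
    ts = np.arange(deg + 1, dtype=float)
    vals = np.array([p(x + t * e) for t in ts])
    return np.polyfit(ts, vals, deg)  # exact fit: deg+1 points, degree deg

def looks_hyperbolic(p, e, deg, trials=200, tol=1e-6, seed=0):
    """Spot-check: restrictions through random points must be real-rooted."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(e.shape)
        roots = np.roots(restriction_coeffs(p, x, e, deg))
        if np.max(np.abs(roots.imag)) > tol * (1 + np.max(np.abs(roots))):
            return False  # found a restriction with a genuinely complex root
    return True  # passed all spot checks (numerical evidence, not a proof)

# Example: p(x) = x1*x2*x3 is hyperbolic with respect to e = (1,1,1),
# since p(x + t*e) = (x1+t)(x2+t)(x3+t) has real roots -x1, -x2, -x3.
p = lambda x: x[0] * x[1] * x[2]
print(looks_hyperbolic(p, np.ones(3), deg=3))  # expected: True
```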
There has been work on exploiting polynomial approximation to solve distributed nonconvex optimization problems with univariate objectives. This idea enables arbitrarily precise global optimization without requiring local evaluations of gradients at every iteration. Nonetheless, there remains a gap between existing theoretical guarantees and diverse practical requirements for dependability, notably privacy preservation and robustness to network imperfections (e.g., time-varying directed communication and asynchrony). To fill this gap while retaining the above strengths, we propose a Dependable Chebyshev-Proxy-based distributed Optimization Algorithm (D-CPOA). Specifically, to ensure both accuracy of solutions and privacy of local objectives, we design a new privacy-preserving mechanism. This mechanism leverages the randomness in blockwise insertions of perturbed vector states and hence provides a privacy guarantee, in terms of $(\alpha,\beta)$-data-privacy, that improves on the literature. Furthermore, to gain robustness to various network imperfections, we use the push-sum consensus protocol as a backbone, discuss its specific enhancements, and evaluate the performance of the proposed algorithm accordingly. Thanks to the linear consensus-based structure of the iterations, we avoid the privacy-accuracy trade-off and the burden of selecting appropriate step-sizes in different settings. We provide rigorous analysis of accuracy, dependability, and complexity, and show that the advantages brought by polynomial approximation are maintained when all of the above requirements are imposed. Simulations demonstrate the effectiveness of the developed algorithm.
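For readers unfamiliar with the backbone protocol, here is a minimal sketch of push-sum consensus for plain averaging over a directed graph, with none of D-CPOA's Chebyshev proxies, privacy masking, or asynchrony handling; the graph, matrix, and function names are illustrative.

```python
import numpy as np

def push_sum(values, A, iters=100):
    """values: initial scalars at n nodes; A: column-stochastic mixing matrix
    (column-stochasticity suits directed communication graphs)."""
    x = np.asarray(values, dtype=float)  # running weighted sums
    w = np.ones_like(x)                  # scaling weights
    for _ in range(iters):
        x, w = A @ x, A @ w
    return x / w  # each entry converges to the average of `values`

# Ring digraph on 4 nodes: each node keeps half its mass, sends half onward.
n = 4
A = 0.5 * np.eye(n) + 0.5 * np.roll(np.eye(n), 1, axis=0)
print(push_sum([1.0, 2.0, 3.0, 4.0], A, iters=200))  # ~[2.5, 2.5, 2.5, 2.5]
```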
This paper is concerned with certifying that a given point is near an exact root of an overdetermined or singular polynomial system with rational coefficients. The difficulty lies in the fact that consistency of overdetermined systems is not a continuous property. Our certification is based on hybrid symbolic-numeric methods that compute the exact rational univariate representation (RUR) of a component of the input system from approximate roots. For overdetermined polynomial systems with simple roots, we compute an initial RUR from approximate roots. The accuracy of the RUR is increased via Newton iterations until the exact RUR is found, which we certify using exact arithmetic. Since the RUR is well-constrained, we can use it to certify the given approximate roots via alpha-theory. To certify isolated singular roots, we use a determinantal form of the isosingular deflation, which adds new polynomials to the original system without introducing new variables. The resulting polynomial system is overdetermined, but its roots are now simple, thereby reducing the problem to the overdetermined case. We prove that our algorithms have complexity that is polynomial in the input size plus the output size upon successful convergence, and we use worst-case upper bounds for termination when our iteration does not converge to an exact RUR. Examples are included to demonstrate the approach.
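To make the alpha-theory step concrete, here is a minimal floating-point sketch of Smale's alpha test for a univariate polynomial, the shape the problem takes once a well-constrained RUR is in hand. The paper's actual certification runs in exact rational arithmetic; the names below are illustrative.

```python
import numpy as np

ALPHA0 = (13 - 3 * np.sqrt(17)) / 4  # ~0.15767, Smale's threshold

def alpha_test(coeffs, x):
    """coeffs: polynomial coefficients, highest degree first;
    x: approximate root with nonzero derivative p'(x).
    Returns True if alpha(p, x) = beta * gamma < ALPHA0, which certifies
    that Newton's method started at x converges quadratically to a root."""
    p = np.poly1d(coeffs)
    d1 = p.deriv()(x)
    beta = abs(p(x) / d1)              # length of the Newton step
    gamma, k, fact = 0.0, 2, 2.0       # gamma = sup_k |p^(k)(x)/(k! p'(x))|^(1/(k-1))
    dk = p.deriv().deriv()
    while not (dk.order == 0 and dk.coeffs[0] == 0):
        gamma = max(gamma, abs(dk(x) / (fact * d1)) ** (1.0 / (k - 1)))
        dk, k = dk.deriv(), k + 1
        fact *= k
    return beta * gamma < ALPHA0

# x = 1.4142 approximates a root of t^2 - 2.
print(alpha_test([1, 0, -2], 1.4142))  # expected: True
```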
This paper studies stochastic optimization problems with polynomials. We propose an optimization model with sample averages and perturbations. Lasserre-type Moment-SOS relaxations are used to solve the sample-average optimization. Properties of the optimization problem and its relaxations are studied, and numerical experiments are presented.
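A minimal sketch of the sample-average step: the empirical mean of a stochastic polynomial objective $\mathbb{E}_\xi[f(x,\xi)]$ is again a polynomial in $x$. For this univariate illustration we minimize the averaged polynomial exactly via its derivative, whereas the paper applies Moment-SOS relaxations (which require an SDP solver) to the multivariate problem; the example objective and distribution are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# f(x, xi) = x^4 + xi*x^2 + x with random coefficient xi ~ N(1, 0.1^2).
samples = rng.normal(1.0, 0.1, size=500)
# Sample-average objective, coefficients highest degree first:
avg_coeffs = np.array([1.0, 0.0, samples.mean(), 1.0, 0.0])

crit = np.roots(np.polyder(avg_coeffs))      # critical points of the average
crit = crit[np.abs(crit.imag) < 1e-9].real   # keep the real ones
x_star = crit[np.argmin(np.polyval(avg_coeffs, crit))]
print(x_star, np.polyval(avg_coeffs, x_star))  # minimizer and minimum value
```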
Multi-objective optimization seeks to optimize several objective functions over a common feasible set. Since the objectives usually do not share a common optimizer, one often considers (weakly) Pareto points instead. This paper studies multi-objective optimization problems given by polynomial functions. First, we study the convex geometry of (weakly) Pareto values and give a convex representation for them. Linear scalarization problems (LSPs) and Chebyshev scalarization problems (CSPs) are typical approaches for obtaining (weakly) Pareto points. For LSPs, we show how to use tight relaxations to solve them and how to detect existence or nonexistence of proper weights. For CSPs, we show how to solve them by moment relaxations. Moreover, we show how to check whether a given point is a (weakly) Pareto point and how to detect existence or nonexistence of (weakly) Pareto points. We also study how to detect unboundedness of polynomial optimization problems, which is used to detect nonexistence of proper weights or of (weakly) Pareto points.
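A minimal sketch of the two scalarizations on a pair of toy polynomial objectives over $\mathbb{R}^2$, using a local solver in place of the paper's moment relaxations (which certify global optimality); the objectives, weights, and utopia point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2       # polynomial objective 1
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2       # polynomial objective 2
w = np.array([0.5, 0.5])                         # scalarization weights
z = np.array([0.0, 0.0])                         # utopia (reference) point

# Linear scalarization (LSP): minimize w1*f1(x) + w2*f2(x).
lsp = minimize(lambda x: w[0] * f1(x) + w[1] * f2(x), x0=np.zeros(2))

# Chebyshev scalarization (CSP): minimize max_i w_i * (f_i(x) - z_i);
# Nelder-Mead handles the nonsmooth max.
csp = minimize(lambda x: max(w[0] * (f1(x) - z[0]), w[1] * (f2(x) - z[1])),
               x0=np.zeros(2), method="Nelder-Mead")

print(lsp.x, csp.x)  # both land near the Pareto point (0.5, 0.5)
```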
Abstracting neural networks by the constraints they impose on their inputs and outputs can be very useful for analyzing neural network classifiers and for deriving optimization-based algorithms that certify stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify incremental quadratic constraints on the map of a neural network over a region of interest. These certificates can capture several useful properties, such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two different settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks and illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller parameterized by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.
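In the spirit of the semidefinite programs described above, the following sketch computes an LMI-based upper bound on the Lipschitz constant of a one-hidden-layer ReLU network $f(x) = W_1\,\mathrm{relu}(W_0 x)$, using a LipSDP-style condition rather than the paper's exact incremental-quadratic-constraint LMI. To stay solver-free, the diagonal multiplier is restricted to $T = cI$ and $\rho$ is eliminated with a Schur complement; the weights are random and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, m = 3, 8, 2                                # input, hidden, output widths
W0, W1 = rng.standard_normal((h, n)), rng.standard_normal((m, h))

# LipSDP-style LMI with T = c*I (ReLU slopes restricted to [0, 1]):
#   [[-rho*I, c*W0^T], [c*W0, -2c*I + W1^T W1]] <= 0  implies  Lip(f) <= sqrt(rho).
# Schur complement (valid when 2c*I - W1^T W1 > 0):
#   rho >= lambda_max( c^2 * W0^T (2c*I - W1^T W1)^{-1} W0 ).
G = W1.T @ W1
c_min = np.linalg.eigvalsh(G).max() / 2          # need 2c*I - G positive definite
best = np.inf
for c in np.linspace(1.01 * c_min, 20 * c_min, 400):
    S = 2 * c * np.eye(h) - G
    rho = np.linalg.eigvalsh(c * c * W0.T @ np.linalg.solve(S, W0)).max()
    best = min(best, rho)

print("LMI-based Lipschitz bound:", np.sqrt(best))
# Compare with the trivial product-of-norms bound ||W1|| * ||W0||.
print("Naive bound:", np.linalg.norm(W1, 2) * np.linalg.norm(W0, 2))
```

The grid search over the scalar $c$ is only a convenience; optimizing a full diagonal $T$ with an SDP solver, as in the LMIs above, can only tighten the bound.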