Optimized Pulse Patterns (OPPs) are gaining popularity in the power electronics community over well-studied pulse width modulation due to their inherent ability to provide switching instances that optimize current harmonic distortions. In particular, the OPP problem minimizes current harmonic distortions under a cardinality constraint on the number of switching instances per fundamental wave period. The OPP problem is, however, non-convex, involving both polynomial and trigonometric functions. In the existing literature, the OPP problem is solved using off-the-shelf solvers with local convergence guarantees. To obtain guarantees of global optimality, we employ and extend techniques from the polynomial optimization literature and provide a solution with a global convergence guarantee. Specifically, we propose a polynomial approximation to the OPP problem and then utilize well-studied, globally convergent convex relaxation hierarchies, namely semidefinite programming and relative entropy relaxations. The resulting hierarchy is proven to converge to the globally optimal solution. Our method exhibits strong performance for OPP problems with up to 50 switching instances per quarter wave.
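As a minimal sketch of the kind of moment/SDP relaxation invoked here, the snippet below bounds a toy univariate polynomial rather than the OPP objective itself; the polynomial, variable names, and the use of CVXPY are illustrative assumptions, not the paper's formulation.

```python
# Toy order-2 moment (Lasserre-style) relaxation of  min_x  x^4 - 3x^2 + x,
# standing in for a polynomial surrogate of the OPP objective.
# Assumes CVXPY with an SDP-capable solver (e.g., SCS) is installed.
import cvxpy as cp

# Moment matrix M = [[y0, y1, y2], [y1, y2, y3], [y2, y3, y4]],
# where yk plays the role of the k-th moment of an optimizing measure.
M = cp.Variable((3, 3), symmetric=True)

constraints = [
    M >> 0,             # moment matrix must be positive semidefinite
    M[0, 0] == 1,       # y0 = 1 (probability measure)
    M[0, 2] == M[1, 1], # both entries represent y2
]

# Objective  x^4 - 3x^2 + x  written in the moments: y4 - 3*y2 + y1.
objective = cp.Minimize(M[2, 2] - 3 * M[1, 1] + M[0, 1])

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print("SDP lower bound on the global minimum:", prob.value)
```

For univariate polynomials this level of the hierarchy is already tight; the multivariate OPP surrogate requires the higher levels and the relative entropy relaxations described in the abstract.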
Optimal power flow (OPF) is the fundamental mathematical model in power system operations. Improving the solution quality of OPF provides significant economic and engineering benefits. The convex reformulation of the original non-convex alternating current OPF (ACOPF) model gives an efficient way to find the globally optimal solution of ACOPF but suffers from relaxation gaps. The existence of relaxation gaps hinders the practical application of convex OPF because the relaxed solutions may be AC-infeasible. In this paper, we evaluate and improve the tightness of the convex ACOPF model. Various power networks and nodal loads are considered in the evaluation. A unified evaluation framework is implemented in the Julia programming language. The evaluation shows how sensitive the relaxation gap is to network and loading conditions and serves as a benchmark for the proposed tightness reinforcement approach (TRA). The proposed TRA is based on a penalty function method that penalizes the power-loss relaxation in the objective function of the convex ACOPF model. A heuristic penalty algorithm is proposed to find a proper penalty parameter for the TRA. Numerical results show that relaxation gaps exist in the test cases, especially for large-scale power networks under low nodal power loads, and that the TRA is effective in reducing the relaxation gap of the convex ACOPF model.
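To make the penalty idea concrete, below is a minimal sketch of a loss-penalized second-order cone relaxation on a hypothetical two-bus branch-flow model; the network data, variable names, and single penalty term are illustrative assumptions and not the paper's TRA implementation.

```python
# Two-bus branch-flow (DistFlow) relaxation with a loss penalty term,
# sketching how penalizing the relaxed power loss can tighten the model.
# Assumes CVXPY is installed; all network data is made up for illustration.
import cvxpy as cp

r, x = 0.02, 0.06          # branch resistance / reactance (p.u.)
p_load, q_load = 0.8, 0.3  # load at bus 2 (p.u.)
v1 = 1.0                   # fixed squared voltage at the slack bus
cost = 10.0                # generation cost coefficient
rho = 5.0                  # penalty parameter on the relaxed loss term

P = cp.Variable()              # active power sent into the branch
Q = cp.Variable()              # reactive power sent into the branch
l = cp.Variable(nonneg=True)   # squared branch current (relaxed)
v2 = cp.Variable()             # squared voltage at bus 2

constraints = [
    P - r * l == p_load,       # active power balance at bus 2
    Q - x * l == q_load,       # reactive power balance at bus 2
    v2 == v1 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.square(P) + cp.square(Q) <= l * v1,   # SOC relaxation of l = (P^2+Q^2)/v1
    0.9**2 <= v2, v2 <= 1.1**2,
]

# TRA-style objective: generation cost plus a penalty on the relaxed loss r*l,
# which discourages the solver from exploiting the relaxation gap.
prob = cp.Problem(cp.Minimize(cost * P + rho * r * l), constraints)
prob.solve()
print("P =", P.value, " loss r*l =", r * l.value)
```

In this tiny example the relaxation is typically tight even without the penalty; the abstract's point is that on larger networks under light loading the gap appears and the penalty weight must be tuned, which is what the proposed heuristic penalty algorithm addresses.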
In this paper, we propose a relaxation of the stochastic ruler method originally described by Yan and Mukai in 1992 for asymptotically determining the global optima of discrete simulation optimization problems. We show that the proposed variant of the stochastic ruler method accelerates convergence to the optimal solution, providing computational results for two example problems, each of which supports the better performance of the variant over the original method. We then provide the theoretical grounding for the asymptotic convergence in probability of the variant to the globally optimal solution under the same set of assumptions as those underlying the original stochastic ruler method.
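For readers unfamiliar with the baseline, here is a minimal sketch of the original stochastic ruler acceptance rule on a hypothetical discrete problem with noisy evaluations; the objective, neighborhood, and test schedule are illustrative assumptions, and the proposed relaxation is not reproduced.

```python
# Minimal sketch of the (original) stochastic ruler method of Yan and Mukai
# on a toy discrete problem with noisy observations. The neighborhood
# structure, noise model, and test schedule M_k are illustrative choices.
import math
import random

def noisy_objective(x):
    # True objective (x - 7)^2 observed with additive uniform noise.
    return (x - 7) ** 2 + random.uniform(-2.0, 2.0)

def neighbor(x, lo=0, hi=20):
    # Uniform one-step neighbor on the integer line, staying inside [lo, hi].
    return min(hi, max(lo, x + random.choice([-1, 1])))

def stochastic_ruler(iterations=2000, a=0.0, b=200.0, seed=0):
    random.seed(seed)
    x = random.randint(0, 20)
    for k in range(1, iterations + 1):
        candidate = neighbor(x)
        m_k = int(math.log(k + 10))      # slowly growing number of tests
        accept = True
        for _ in range(m_k):
            # Candidate must beat a uniform "ruler" draw on [a, b] every time.
            if noisy_objective(candidate) > random.uniform(a, b):
                accept = False
                break
        if accept:
            x = candidate
    return x

print("estimated optimum:", stochastic_ruler())
```

The variant described in the abstract modifies this acceptance mechanism to reach the optimum with fewer simulation replications while keeping the same convergence assumptions.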
Numerical tools for constraint solving are a cornerstone of control verification problems. This is evident from the plethora of research that uses tools like linear and convex programming for the design of control systems. Nevertheless, the capability of linear and convex programming is limited and not adequate to reason about general nonlinear polynomial constraints that arise naturally in the design of nonlinear systems. This limitation calls for new solvers that are capable of utilizing the power of linear and convex programming to reason about general multivariate polynomials. In this paper, we propose PolyAR, a highly parallelizable solver for polynomial inequality constraints. PolyAR provides several key contributions. First, it uses convex relaxations of the problem to accelerate the process of finding a solution to the set of non-convex multivariate polynomial constraints. Second, it utilizes an iterative convex abstraction refinement process which aims to prune the search space and identify regions for which the convex relaxation fails to solve the problem. Third, it allows for highly parallelizable use of off-the-shelf solvers to analyze the regions in which the convex relaxation failed to provide solutions. We compare the scalability of PolyAR against Z3 4.8.9 and Yices 2.6 on control design problems. Finally, we demonstrate the performance of PolyAR on designing switching signals for continuous-time linear switching systems.
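The following is a highly simplified sketch of such an abstraction-refinement loop, using interval bounds as a stand-in for the convex relaxation and SciPy's local solver in place of the off-the-shelf solvers; the example constraint, bounding scheme, and splitting rule are illustrative assumptions, not PolyAR's implementation.

```python
# Simplified abstraction-refinement skeleton for deciding whether the
# polynomial constraint p(x, y) <= 0 has a solution in a box. Interval
# arithmetic stands in for the convex relaxation, and scipy's local solver
# stands in for the off-the-shelf solvers run on undecided regions.
import numpy as np
from scipy.optimize import minimize

def p(z):
    x, y = z
    return x**4 + y**4 - 3.0 * x * y - 0.5   # example constraint p(x, y) <= 0

def interval_pow(lo, hi, n):
    # Tight interval bound for t^n on [lo, hi].
    vals = [lo**n, hi**n]
    if n % 2 == 0 and lo <= 0.0 <= hi:
        vals.append(0.0)
    return min(vals), max(vals)

def interval_lower_bound(box):
    # Sound lower bound on p over box = ((xlo, xhi), (ylo, yhi)).
    (xlo, xhi), (ylo, yhi) = box
    x4 = interval_pow(xlo, xhi, 4)
    y4 = interval_pow(ylo, yhi, 4)
    xy = [xlo * ylo, xlo * yhi, xhi * ylo, xhi * yhi]
    return x4[0] + y4[0] - 3.0 * max(xy) - 0.5

def solve(box, depth=0, max_depth=12):
    if interval_lower_bound(box) > 0.0:
        return None                        # relaxation proves infeasibility: prune
    center = np.array([sum(b) / 2.0 for b in box])
    res = minimize(p, center, bounds=box)  # local solver on the undecided region
    if res.fun <= 0.0:
        return res.x                       # feasible witness found
    if depth >= max_depth:
        return None
    # Refine: split the box along its widest dimension and recurse.
    widths = [b[1] - b[0] for b in box]
    i = int(np.argmax(widths))
    mid = (box[i][0] + box[i][1]) / 2.0
    for half in ((box[i][0], mid), (mid, box[i][1])):
        sub = list(box)
        sub[i] = half
        witness = solve(tuple(sub), depth + 1, max_depth)
        if witness is not None:
            return witness
    return None

print(solve(((-2.0, 2.0), (-2.0, 2.0))))
```

In PolyAR the undecided sub-regions produced by the refinement step are independent, which is what makes dispatching them to off-the-shelf solvers in parallel attractive.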
The basic reproduction number $R_0$ is a fundamental quantity in epidemiological modeling, reflecting the typical number of secondary infections that arise from a single infected individual. While $R_0$ is widely known to scientists, policymakers, and the general public, it has received comparatively little attention in the controls community. This note provides two novel characterizations of $R_0$: a stability characterization and a geometric program characterization. The geometric program characterization allows us to write $R_0$-constrained and budget-constrained optimal resource allocation problems as geometric programs, which are easily transformed into convex optimization problems. We apply these programs to a case study of allocating vaccines and antidotes, finding that targeting $R_0$ instead of the spectral abscissa of the Jacobian matrix (a common target in the controls literature) leads to qualitatively different solutions.
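As an illustration of the geometric program characterization, the sketch below minimizes a Perron-Frobenius-style bound on $R_0$ for a small hypothetical networked SIS model by allocating curing resources under a budget; the contact network, cost model, and parameter values are illustrative assumptions rather than the note's case study.

```python
# Budget-constrained allocation posed as a geometric program: minimize an
# upper bound r on R_0 = rho(diag(delta)^{-1} * beta * A) for a toy 3-node
# SIS contact network by choosing curing rates delta under a budget.
# Uses CVXPY's log-log convex (DGP) mode; all data below is illustrative.
import numpy as np
import cvxpy as cp

A = np.array([[0.1, 1.0, 1.0],
              [1.0, 0.1, 1.0],
              [1.0, 1.0, 0.1]])   # contact matrix (strictly positive to keep GP data valid)
beta = 0.4                        # infection rate
c = np.array([1.0, 2.0, 1.5])     # per-unit cost of raising each curing rate
budget = 6.0

delta = cp.Variable(3, pos=True)  # curing (recovery) rates: decision variables
v = cp.Variable(3, pos=True)      # Perron eigenvector surrogate
r = cp.Variable(pos=True)         # upper bound on R_0

constraints = [
    # Perron-Frobenius condition: beta * A v <= r * diag(delta) v componentwise
    # certifies rho(diag(delta)^{-1} * beta * A) <= r.
    beta * (A @ v) <= r * cp.multiply(delta, v),
    c @ delta <= budget,          # resource budget
    delta >= 0.1, delta <= 3.0,   # achievable range of curing rates
]

prob = cp.Problem(cp.Minimize(r), constraints)
prob.solve(gp=True)
print("optimal R_0 bound:", r.value, " curing rates:", delta.value)
```

Swapping the objective and the budget constraint gives the $R_0$-constrained counterpart (minimize cost subject to the same eigenvalue condition with a fixed target $r$), which is again a geometric program.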
We propose a framework to use Nesterov's accelerated method for constrained convex optimization problems. Our approach consists of first reformulating the original problem as an unconstrained optimization problem using a continuously differentiable exact penalty function. This reformulation is based on replacing the Lagrange multipliers in the augmented Lagrangian of the original problem by Lagrange multiplier functions. The expressions of these Lagrange multiplier functions, which depend upon the gradients of the objective function and the constraints, can make the unconstrained penalty function non-convex in general, even if the original problem is convex. We establish sufficient conditions on the objective function and the constraints of the original problem under which the unconstrained penalty function is convex. This enables us to use Nesterov's accelerated gradient method for unconstrained convex optimization and achieve a guaranteed rate of convergence that is better than the state-of-the-art first-order algorithms for constrained convex optimization. Simulations illustrate our results.
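The following is a minimal sketch of the overall recipe on a hypothetical equality-constrained convex quadratic program: the constrained problem is replaced by a differentiable penalty reformulation (here a plain quadratic penalty, not the paper's multiplier-function-based exact penalty) and then minimized with Nesterov's accelerated gradient iteration.

```python
# Nesterov's accelerated gradient method applied to a smooth penalty
# reformulation of  min 0.5 x'Qx + q'x  s.t.  Ax = b.  The quadratic
# penalty used here is a simplified stand-in for the paper's exact
# penalty function built from Lagrange multiplier functions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)              # strongly convex quadratic objective
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
mu = 100.0                           # penalty weight (illustrative choice)

def grad(x):
    # Gradient of F(x) = 0.5 x'Qx + q'x + (mu/2) ||Ax - b||^2.
    return Q @ x + q + mu * A.T @ (A @ x - b)

L = np.linalg.norm(Q, 2) + mu * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad
x = y = np.zeros(n)
t = 1.0
for _ in range(2000):
    x_next = y - grad(y) / L                             # gradient step at the extrapolated point
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)     # Nesterov momentum
    x, t = x_next, t_next

print("constraint violation ||Ax - b||:", np.linalg.norm(A @ x - b))
```

Unlike the exact penalty constructed in the paper, the quadratic penalty above only drives the constraint violation to a small residual for finite mu; the sketch is meant solely to show the accelerated iteration applied to an unconstrained, smooth, convex reformulation.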