
An Algorithm-Independent Measure of Progress for Linear Constraint Propagation

Submitted by: Boro Sofranac
Published: 2021
Language: English





Propagation of linear constraints has become a crucial sub-routine in modern Mixed-Integer Programming (MIP) solvers. In practice, iterative algorithms with tolerance-based stopping criteria are used to avoid problems with slow or infinite convergence. However, these heuristic stopping criteria can pose difficulties for fairly comparing the efficiency of different implementations of iterative propagation algorithms in a real-world setting. Most significantly, the presence of unbounded variable domains in the problem formulation makes it difficult to quantify the relative size of reductions performed on them. In this work, we develop a method to measure -- independently of the algorithmic design -- the progress that a given iterative propagation procedure has made at a given point in time during its execution. Our measure makes it possible to study and better compare the behavior of bounds propagation algorithms for linear constraints. We apply the new measure to answer two questions of practical relevance: (i) We investigate to what extent heuristic stopping criteria can lead to premature termination on real-world MIP instances. (ii) We compare a GPU-parallel propagation algorithm against a sequential state-of-the-art implementation and show that the parallel version is even more competitive in a real-world setting than originally reported.
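To make the setting concrete, here is a minimal, illustrative sketch of iterative bounds propagation for constraints $lhs \le Ax \le rhs$ with a tolerance-based stopping criterion of the kind the abstract discusses. It is not the paper's implementation: production MIP solvers use sparse data structures, incremental activity updates, and careful treatment of infinite activities; all names and the acceptance rule below are assumptions for illustration.

```python
import numpy as np

def propagate(A, lhs, rhs, lb, ub, tol=1e-6, max_rounds=100):
    """Tolerance-based bounds propagation for lhs <= A @ x <= rhs.

    Illustrative dense sketch only; infinite activities are simply
    skipped rather than handled with the care a real solver takes.
    """
    m, n = A.shape
    for _ in range(max_rounds):
        changed = False
        for i in range(m):
            a = A[i]
            # Minimal/maximal activity of row i over the current box.
            min_act = np.where(a > 0, a * lb, a * ub).sum()
            max_act = np.where(a > 0, a * ub, a * lb).sum()
            for j in np.nonzero(a)[0]:
                if a[j] > 0:
                    cand_ub = (rhs[i] - (min_act - a[j] * lb[j])) / a[j]
                    cand_lb = (lhs[i] - (max_act - a[j] * ub[j])) / a[j]
                else:
                    cand_ub = (lhs[i] - (max_act - a[j] * lb[j])) / a[j]
                    cand_lb = (rhs[i] - (min_act - a[j] * ub[j])) / a[j]
                # Heuristic acceptance: keep only reductions larger than
                # a relative tolerance -- the stopping criterion whose
                # effect on termination the paper quantifies.
                if np.isfinite(cand_ub) and cand_ub < ub[j] - tol * max(1.0, abs(cand_ub)):
                    ub[j], changed = cand_ub, True
                if np.isfinite(cand_lb) and cand_lb > lb[j] + tol * max(1.0, abs(cand_lb)):
                    lb[j], changed = cand_lb, True
        if not changed:   # no "sufficiently large" reduction found
            break
    return lb, ub

# Tiny demo: x1 + x2 <= 8 with x1, x2 >= 0 tightens both upper bounds to 8.
A = np.array([[1.0, 1.0]])
lhs, rhs = np.array([-np.inf]), np.array([8.0])
lb, ub = np.array([0.0, 0.0]), np.array([np.inf, np.inf])
print(propagate(A, lhs, rhs, lb, ub))
```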




Read also

The ellipsoid algorithm is a fundamental algorithm for computing a solution to the system of $m$ linear inequalities in $n$ variables $(P)$: $A^{\top}x \le u$ when its set of solutions has positive volume. However, when $(P)$ is infeasible, the ellipsoid algorithm has no mechanism for proving that $(P)$ is infeasible. This is in contrast to the other two fundamental algorithms for tackling $(P)$, namely the simplex method and interior-point methods, each of which can be easily implemented in a way that either produces a solution of $(P)$ or proves that $(P)$ is infeasible by producing a solution to the alternative system $(\mathit{Alt})$: $A\lambda = 0$, $u^{\top}\lambda < 0$, $\lambda \ge 0$. This paper develops an Oblivious Ellipsoid Algorithm (OEA) that either produces a solution of $(P)$ or produces a solution of $(\mathit{Alt})$. Depending on the dimensions and on other natural condition measures, the computational complexity of the basic OEA may be worse than, the same as, or better than that of the standard ellipsoid algorithm. We also present two modified versions of OEA.
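For intuition, $(\mathit{Alt})$ is a Farkas-type certificate: exactly one of $(P)$ and $(\mathit{Alt})$ is feasible. Since $(\mathit{Alt})$ is a cone, any solution can be scaled so that $u^{\top}\lambda = -1$, which turns the search for a certificate into a plain feasibility LP. The sketch below does this with a generic LP solver; it is an illustration of the alternative system, not of OEA itself, and the function name is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

def farkas_certificate(A, u):
    """Search for lambda >= 0 with A @ lam = 0 and u @ lam = -1,
    a normalized solution of (Alt). Returns None if (Alt) is
    infeasible, in which case (P): A.T @ x <= u has a solution."""
    n, m = A.shape
    A_eq = np.vstack([A, u.reshape(1, -1)])
    b_eq = np.append(np.zeros(n), -1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m)
    return res.x if res.success else None

# Example: x <= 1 and -x <= -2 is infeasible; lam = (1, 1) satisfies
# A @ lam = 0 and u @ lam = -1 < 0, certifying infeasibility.
A = np.array([[1.0, -1.0]])     # columns are the constraint normals
u = np.array([1.0, -2.0])
print(farkas_certificate(A, u))  # approximately [1., 1.]
```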
We introduce a class of specially structured linear programming (LP) problems, which has favorable modeling capability for important application problems in different areas such as optimal transport, discrete tomography and economics. To solve these generally large-scale LP problems efficiently, we design an implementable inexact entropic proximal point algorithm (iEPPA) combined with an easy-to-implement dual block coordinate descent method as a subsolver. Unlike existing entropy-type proximal point algorithms, our iEPPA employs a more practically checkable stopping condition for solving the associated subproblems while achieving provable convergence. Moreover, when solving the capacity constrained multi-marginal optimal transport (CMOT) problem (a special case of our LP problem), our iEPPA is able to bypass the underlying numerical instability issues that often appear in the popular entropic regularization approach, since our algorithm does not require the proximal parameter to be very small in order to obtain an accurate approximate solution. Numerous numerical experiments show that our iEPPA is highly efficient and robust for solving large-scale CMOT problems, in comparison to the (stabilized) Dykstra's algorithm and the commercial solver Gurobi. Moreover, the experiments on discrete tomography also highlight the potential modeling power of our model.
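To see the instability that iEPPA sidesteps, here is a hedged sketch of the classical Sinkhorn iteration for entropically regularized optimal transport, i.e., the "popular entropic regularization approach" the abstract contrasts with, not iEPPA itself. For small regularization `eps`, the kernel `exp(-C/eps)` underflows and the iteration breaks down; names and parameters are illustrative.

```python
import numpy as np

def sinkhorn(C, a, b, eps, iters=500):
    """Classical Sinkhorn iteration for entropic OT with cost C and
    marginals a, b. Unstabilized on purpose, to show the breakdown."""
    K = np.exp(-C / eps)        # underflows to zero for small eps
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
C = rng.random((50, 50))
a = np.full(50, 1 / 50)
b = np.full(50, 1 / 50)

P = sinkhorn(C, a, b, eps=0.1)    # moderate eps: well-behaved
print(P.sum())                     # ~1.0, a valid transport plan
P = sinkhorn(C, a, b, eps=1e-4)   # tiny eps: K underflows, so this
print(P.sum())                     # typically yields NaNs/warnings
```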
This paper addresses the problem of distributed coordination control of spacecraft formation. It is assumed that the agents measure relative positions of each other with a non-zero, unknown constant sensor bias. The translational dynamics of the spacecraft is expressed in Euler-Lagrangian form. We propose a novel distributed, model-independent control law for synchronization of networked Euler-Lagrange systems with biased measurements. An adaptive control law is derived based on Lyapunov analysis to estimate the bias. The proposed algorithm ensures that the velocities converge to that of the leader exponentially, while the positions converge to a bounded neighborhood of the leader positions. We have assumed a connected leader-follower network of spacecraft. Simulation results on a six-spacecraft formation corroborate our theoretical findings.
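The flavor of such a Lyapunov-based bias estimator can be shown on a toy scalar system. The sketch below is a single-follower, double-integrator stand-in, not the paper's multi-agent Euler-Lagrange law: the follower measures relative position with an unknown constant bias `beta` and adapts an estimate `beta_hat`; all gains and the specific adaptive law are illustrative assumptions.

```python
import numpy as np

# With V = k1*e**2/2 + ev**2/2 + k1*(beta - beta_hat)**2/(2*gamma),
# the update beta_hat' = -gamma*ev gives Vdot = -k2*ev**2: the
# velocity error decays while the position error stays bounded,
# mirroring the qualitative claim in the abstract.
k1, k2, gamma = 2.0, 3.0, 5.0
beta, beta_hat = 0.7, 0.0      # true bias (unknown) and its estimate
p, v = 0.0, 0.0                # follower position and velocity
pL, vL = 1.0, 0.5              # leader: constant-velocity motion
dt = 1e-3
for _ in range(int(30 / dt)):
    y = (p - pL) + beta        # biased relative-position measurement
    ev = v - vL                # relative velocity (assumed available)
    u = -k1 * (y - beta_hat) - k2 * ev
    beta_hat += dt * (-gamma * ev)        # adaptive bias estimator
    p, v, pL = p + dt * v, v + dt * u, pL + dt * vL
print(abs(v - vL), abs(p - pL))  # ~0, and a bounded position offset
```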
We revisit Matrix Balancing, a pre-conditioning task used ubiquitously for computing eigenvalues and matrix exponentials. Since 1960, Osborne's algorithm has been the practitioner's algorithm of choice and is now implemented in most numerical software packages. However, its theoretical properties are not well understood. Here, we show that a simple random variant of Osborne's algorithm converges in near-linear time in the input sparsity. Specifically, it balances $K \in \mathbb{R}_{\geq 0}^{n \times n}$ after $O(m\epsilon^{-2}\log\kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, $\epsilon$ is the $\ell_1$ accuracy, and $\kappa = \sum_{ij} K_{ij} / \min_{ij: K_{ij} \neq 0} K_{ij}$ measures the conditioning of $K$. Previous work had established near-linear runtimes either only for $\ell_2$ accuracy (a weaker criterion which is less relevant for applications), or through an entirely different algorithm based on (currently) impractical Laplacian solvers. We further show that if the graph with adjacency matrix $K$ is moderately connected -- e.g., if $K$ has at least one positive row/column pair -- then Osborne's algorithm initially converges exponentially fast, yielding an improved runtime $O(m\epsilon^{-1}\log\kappa)$. We also address numerical precision by showing that these runtime bounds still hold when using $O(\log(n\kappa/\epsilon))$-bit numbers. Our results are established through an intuitive potential argument that leverages a convex optimization perspective of Osborne's algorithm, and relates the per-iteration progress to the current imbalance as measured in Hellinger distance. Unlike previous analyses, we critically exploit log-convexity of the potential. Our analysis extends to other variants of Osborne's algorithm: along the way, we establish significantly improved runtime bounds for cyclic, greedy, and parallelized variants.
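The random variant is easy to sketch: pick a coordinate uniformly at random and rescale it so its off-diagonal row and column sums match. The toy implementation below recomputes the balanced matrix at every step for clarity (the near-linear-time analysis instead maintains the sums incrementally); iteration counts and the stopping rule are ad hoc illustrations, not the analyzed implementation.

```python
import numpy as np

def osborne_random(K, iters=5000, seed=0):
    """Random-coordinate Osborne iteration for L1 matrix balancing:
    find x so B = diag(e^x) @ K @ diag(e^-x) has equal row/column sums.
    The update x[i] += (log c - log r)/2 equalizes row sum r and
    column sum c of coordinate i exactly."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)
        B = K * np.exp(x[:, None] - x[None, :])  # O(n^2); for clarity only
        r = B[i, :].sum() - B[i, i]              # off-diagonal row sum
        c = B[:, i].sum() - B[i, i]              # off-diagonal column sum
        if r > 0 and c > 0:
            x[i] += 0.5 * (np.log(c) - np.log(r))
    return np.exp(x)

rng = np.random.default_rng(1)
K = rng.random((20, 20))
d = osborne_random(K)
B = K * np.outer(d, 1 / d)
# Row and column sums should now (nearly) agree coordinate-wise.
print(np.max(np.abs(B.sum(1) - B.sum(0))))
```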
We study a class of countably-infinite-dimensional linear programs (CILPs) whose feasible sets are bounded subsets of appropriately defined spaces of measures. The optimal value, optimal points, and minimal points of these CILPs can be approximated by solving finite-dimensional linear programs. We show how to construct finite-dimensional programs that lead to approximations with easy-to-evaluate error bounds, and we prove that the errors converge to zero as the size of the finite-dimensional programs approaches that of the original problem. We discuss the use of our methods in the computation of the stationary distributions, occupation measures, and exit distributions of Markov chains.
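As a toy instance of approximating a countably infinite problem by finite ones, the sketch below truncates the stationary-distribution equations of an M/M/1 birth-death chain to its first $n$ states and compares against the known geometric solution. It is an illustrative stand-in, not the paper's LP construction or its error bounds; rates and sizes are arbitrary.

```python
import numpy as np

lam, mu = 0.6, 1.0          # birth and death rates; rho < 1, so the
rho = lam / mu              # exact solution is pi_k = (1-rho)*rho**k

def truncated_stationary(n):
    """Solve pi @ Q = 0, sum(pi) = 1 on the first n states only."""
    Q = np.zeros((n, n))
    for k in range(n):
        if k + 1 < n:
            Q[k, k + 1] = lam
        if k > 0:
            Q[k, k - 1] = mu
        Q[k, k] = -Q[k].sum()
    # Replace one balance equation by the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.append(np.zeros(n - 1), 1.0)
    return np.linalg.solve(A, b)

exact = lambda n: (1 - rho) * rho ** np.arange(n)
for n in (5, 10, 20, 40):
    err = np.abs(truncated_stationary(n) - exact(n)).sum()
    print(n, err)           # error shrinks as the truncation grows
```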