
On the Convergence Time of a Natural Dynamics for Linear Programming

Posted by: Vincenzo Bonifaci
Publication date: 2016
Research field: Informatics Engineering
Paper language: English
Author: Vincenzo Bonifaci





We consider a system of nonlinear ordinary differential equations for the solution of linear programming (LP) problems that was first proposed in the mathematical biology literature as a model for the foraging behavior of the acellular slime mold Physarum polycephalum, and more recently considered as a method to solve LPs. We study the convergence time of the continuous Physarum dynamics in the context of the linear programming problem, and derive a new time bound to approximate optimality that depends on the relative entropy between projected versions of the optimal and initial solutions.
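For concreteness, here is a minimal forward-Euler sketch of the dynamics discussed above, in the standard form studied in this line of work: for the LP $\min c^T x$ subject to $Ax = b$, $x \geq 0$, the state evolves as $dx/dt = q(x) - x$, where $q(x)$ is the minimum-energy feasible point under the weights $c_i/x_i$. The toy instance, step size, and iteration count are illustrative assumptions, not taken from the paper.

import numpy as np

def physarum_lp(A, b, c, x0, dt=1e-2, steps=5000):
    """Forward-Euler integration of the Physarum dynamics dx/dt = q(x) - x,
    where q(x) minimizes sum_i c_i * q_i^2 / x_i subject to A q = b."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        D = np.diag(x / c)            # conductances x_i / c_i
        L = A @ D @ A.T               # weighted "Laplacian" of the instance
        p = np.linalg.solve(L, b)     # potentials
        q = D @ A.T @ p               # energy-minimizing feasible point
        x += dt * (q - x)             # Euler step of the dynamics
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; the optimum is (1, 0).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(physarum_lp(A, b, c, x0=np.array([0.5, 0.5])))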




Read also

We revisit Matrix Balancing, a pre-conditioning task used ubiquitously for computing eigenvalues and matrix exponentials. Since 1960, Osborne's algorithm has been the practitioner's algorithm of choice and is now implemented in most numerical software packages. However, its theoretical properties are not well understood. Here, we show that a simple random variant of Osborne's algorithm converges in near-linear time in the input sparsity. Specifically, it balances $K\in\mathbb{R}_{\geq 0}^{n\times n}$ after $O(m\epsilon^{-2}\log\kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, $\epsilon$ is the $\ell_1$ accuracy, and $\kappa=\sum_{ij}K_{ij}/(\min_{ij:K_{ij}\neq 0}K_{ij})$ measures the conditioning of $K$. Previous work had established near-linear runtimes either only for $\ell_2$ accuracy (a weaker criterion which is less relevant for applications), or through an entirely different algorithm based on (currently) impractical Laplacian solvers. We further show that if the graph with adjacency matrix $K$ is moderately connected (e.g., if $K$ has at least one positive row/column pair), then Osborne's algorithm initially converges exponentially fast, yielding an improved runtime $O(m\epsilon^{-1}\log\kappa)$. We also address numerical precision by showing that these runtime bounds still hold when using $O(\log(n\kappa/\epsilon))$-bit numbers. Our results are established through an intuitive potential argument that leverages a convex optimization perspective of Osborne's algorithm, and relates the per-iteration progress to the current imbalance as measured in Hellinger distance. Unlike previous analyses, we critically exploit log-convexity of the potential. Our analysis extends to other variants of Osborne's algorithm: along the way, we establish significantly improved runtime bounds for cyclic, greedy, and parallelized variants.
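As a rough illustration of the randomized variant described above, the following sketch applies Osborne's coordinate update (rescale $d_i$ so that the $i$-th row and column sums of $DKD^{-1}$ coincide) at uniformly random coordinates. For readability it recomputes the scaled matrix each iteration; a near-linear-time implementation would instead maintain row and column sums incrementally. The instance and iteration count are illustrative assumptions.

import numpy as np

def osborne_random(K, iters=20000, seed=0):
    """Randomized Osborne iteration: pick a coordinate i uniformly at random
    and rescale d_i so the i-th row and column sums of diag(d) K diag(1/d)
    become equal (both move to the geometric mean sqrt(r*c))."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    d = np.ones(n)
    for _ in range(iters):
        i = rng.integers(n)
        B = (d[:, None] * K) / d[None, :]   # O(n^2) recompute, for clarity only
        r = B[i].sum() - B[i, i]            # off-diagonal row sum
        c = B[:, i].sum() - B[i, i]         # off-diagonal column sum
        if r > 0 and c > 0:
            d[i] *= np.sqrt(c / r)
    return d

K = np.abs(np.random.default_rng(1).normal(size=(5, 5)))
d = osborne_random(K)
B = (d[:, None] * K) / d[None, :]
print(np.abs(B.sum(axis=1) - B.sum(axis=0)))  # near-zero row/column imbalance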
In this paper, we develop a new algorithm combining the idea of "boosting" with a first-order algorithm to approximately solve a class of (integer) linear programs (LPs) arising in general resource allocation problems. Not only can this algorithm solve LPs directly, but it can also be applied to accelerate the Column Generation method. As a direct solver, our algorithm achieves a provable $O(\sqrt{n/K})$ optimality gap, where $n$ is the number of variables and $K$ is the number of data duplications, bearing the same intuition as the boosting algorithm. We use numerical experiments to demonstrate the effectiveness of our algorithm and several variants.
This paper discusses the odds problem, proposed by Bruss in 2000, and its variants. A recurrence relation called a dynamic programming (DP) equation is used to find an optimal stopping policy of the odds problem and its variants. In 2013, Buchbinder, Jain, and Singh proposed a linear programming (LP) formulation for finding an optimal stopping policy of the classical secretary problem, which is a special case of the odds problem. Their LP, which maximizes the probability of a win, differs from the DP equations that have long been known. This paper shows that an ordinary DP equation is a modification of the dual problem of a linear program that includes the LP formulation proposed by Buchbinder, Jain, and Singh.
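Since the paper builds on Bruss's odds problem, a short sketch of the classical odds algorithm may be helpful: sum the odds $p_i/(1-p_i)$ of the independent events from the last one backwards, and stop at the first success from the index where the running sum first reaches 1. The secretary-problem instance below, in which the $i$-th candidate is the best so far with probability $1/i$, is an illustrative assumption.

def odds_strategy(p):
    """Bruss's odds algorithm: return the 0-indexed threshold s (observe
    events s, s+1, ... and stop at the first success) and the win probability."""
    odds = [float('inf') if pi >= 1.0 else pi / (1.0 - pi) for pi in p]
    total, s = 0.0, 0
    for i in range(len(p) - 1, -1, -1):   # accumulate odds from the back
        total += odds[i]
        if total >= 1.0:
            s = i
            break
    q = 1.0
    for pj in p[s:]:                      # probability that events s..n-1 all fail
        q *= 1.0 - pj
    return s, q * sum(odds[s:])           # Bruss's win-probability formula

# Secretary problem with n = 5 candidates: returns s = 2, win prob = 13/30.
print(odds_strategy([1.0 / i for i in range(1, 6)]))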
In this paper, we aim to provide a comprehensive analysis of the linear rate of convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a certain error bound condition, we establish the global linear rate of convergence for a more general semi-proximal ADMM with the dual step length restricted to the open interval $(0, (1+\sqrt{5})/2)$. In our analysis, we assume neither strong convexity nor strict complementarity, only an error bound condition, which holds automatically for convex composite quadratic programming. This semi-proximal ADMM, which includes the classic ADMM, not only has the advantage of resolving the potential non-solvability of the subproblems in the classic ADMM, but is also able to handle multi-block convex optimization problems efficiently. We use convex composite quadratic programming and quadratic semi-definite programming as important applications to demonstrate the significance of the obtained results. Of its own novelty in second-order variational analysis, a complete characterization is provided of the isolated calmness of the nonlinear convex semi-definite optimization problem in terms of its second-order sufficient optimality condition and the strict Robinson constraint qualification, for the purpose of proving the linear rate of convergence of the semi-proximal ADMM when applied to two- and multi-block convex quadratic semi-definite programming.
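To make the role of the dual step length concrete, here is a minimal sketch of the classic two-block ADMM (a special case of the semi-proximal ADMM analyzed in the paper) on a small $\ell_1$-regularized problem, with the dual step length tau chosen inside the interval $(0, (1+\sqrt{5})/2)$ mentioned above. The problem instance and parameter values are illustrative assumptions, not from the paper.

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_prox(v, lam, rho=1.0, tau=1.5, iters=200):
    """Classic ADMM for  min_x 0.5*||x - v||^2 + lam*||z||_1  s.t.  x = z,
    with dual step length tau in (0, (1 + sqrt(5))/2) ~ (0, 1.618)."""
    x = np.zeros_like(v)
    z = np.zeros_like(v)
    u = np.zeros_like(v)                        # scaled dual variable
    for _ in range(iters):
        x = (v + rho * (z - u)) / (1.0 + rho)   # x-step: quadratic minimization
        z = soft_threshold(x + u, lam / rho)    # z-step: prox of the l1 norm
        u = u + tau * (x - z)                   # dual ascent with step length tau
    return z

# The result matches the closed-form prox: soft_threshold(v, lam).
v = np.array([3.0, -0.5, 1.2])
print(admm_l1_prox(v, lam=1.0))                 # ~ [2.0, 0.0, 0.2]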
P. Z. Wang, J. He, H. C. Lui (2021)
The existence of a strongly polynomial-time algorithm for linear programming is a cross-century international mathematical problem, a breakthrough on which would resolve a major theoretical obstacle to the development of artificial intelligence. Toward this goal, this paper proposes three solution techniques based on cone-cutting theory: 1. the principle of highest selection; 2. a column elimination algorithm, which is more convenient and effective than the Ye column elimination theorem; 3. a step-down algorithm in which a feasible point shifts horizontally to the center and then falls to the bottom of the feasible region $D$. Combining the three techniques yields a variant of the simplex algorithm that is expected to help readers build strongly polynomial algorithms. In addition, a variable-weight optimization method is proposed in the paper, which opens a new window for reducing linear programming to uncomplicated calculation.