
Simple and Fast Algorithm for Binary Integer and Online Linear Programming

Posted by Xiaocheng Li
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In this paper, we develop a simple and fast online algorithm for solving a class of binary integer linear programs (LPs) arising in general resource allocation problems. The algorithm requires only a single pass through the input data and involves no matrix inversion. It can be viewed both as an approximation algorithm for solving binary integer LPs and as a fast algorithm for solving online LP problems. The algorithm is inspired by an equivalent form of the dual problem of the relaxed LP, and it essentially performs (one-pass) projected stochastic subgradient descent in the dual space. We analyze the algorithm under two different models, stochastic input and random permutation, with minimal technical assumptions on the input data. The algorithm achieves $O\left(m\sqrt{n}\right)$ expected regret under the stochastic input model and $O\left((m+\log n)\sqrt{n}\right)$ expected regret under the random permutation model, and it achieves $O(m\sqrt{n})$ expected constraint violation under both models, where $n$ is the number of decision variables and $m$ is the number of constraints. The algorithm enjoys the same performance guarantees when generalized to a multi-dimensional LP setting, which covers a wider range of applications. In addition, we employ the notion of permutational Rademacher complexity and derive regret bounds for two earlier online LP algorithms for comparison; both algorithms improve the regret bound by a factor of $\sqrt{m}$ at the price of higher computational cost. Furthermore, we demonstrate how to convert a possibly infeasible solution into a feasible one through a randomized procedure. Numerical experiments illustrate the general applicability and effectiveness of the algorithms.
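The one-pass dual update described above can be sketched in a few lines. Below is a minimal, hedged sketch for the packing-type case $\max\, r^\top x$ s.t. $Ax \le b$, $x \in \{0,1\}^n$: the acceptance rule, the per-period budget $d = b/n$, and the $O(1/\sqrt{n})$ step size are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def online_dual_subgradient(r, A, b, step=None):
    """One-pass dual subgradient sketch for  max r'x  s.t.  Ax <= b,  x in {0,1}^n."""
    m, n = A.shape
    d = b / n                            # assumed scaling: average resource per period
    if step is None:
        step = 1.0 / np.sqrt(n)          # assumed O(1/sqrt(n)) step size
    p = np.zeros(m)                      # dual prices, kept nonnegative
    x = np.zeros(n, dtype=int)
    for t in range(n):
        a_t, r_t = A[:, t], r[t]
        x[t] = 1 if r_t > a_t @ p else 0                   # accept if price-adjusted reward is positive
        p = np.maximum(p + step * (a_t * x[t] - d), 0.0)   # projected stochastic subgradient step
    return x, p

# toy usage
rng = np.random.default_rng(0)
n, m = 1000, 5
A = rng.uniform(0.0, 1.0, size=(m, n))
r = rng.uniform(0.0, 1.0, size=n)
b = 0.25 * n * np.ones(m)
x, p = online_dual_subgradient(r, A, b)
print("objective:", r @ x, "max constraint violation:", max(0.0, float(np.max(A @ x - b))))
```

The whole pass costs $O(mn)$ arithmetic and stores only the $m$-dimensional price vector, which is the point of the method: no LP solves and no matrix inversions.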




Read also

A long line of research on fixed parameter tractability of integer programming culminated in showing that integer programs with n variables and a constraint matrix with dual tree-depth d and largest entry D are solvable in time g(d,D) poly(n) for some function g. However, the dual tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another with a smaller dual tree-depth, and thus does not reflect its geometric structure. We prove that the minimum dual tree-depth of a row-equivalent matrix is equal to the branch-depth of the matroid defined by the columns of the matrix. We design a fixed parameter algorithm for computing the branch-depth of matroids represented over a finite field and a fixed parameter algorithm for computing a row-equivalent matrix with minimum dual tree-depth. Finally, we use these results to obtain an algorithm for integer programming running in time g(d*,D) poly(n), where d* is the branch-depth of the constraint matrix; the branch-depth cannot be replaced by the more permissive notion of branch-width.
Recently, a novel framework has been proposed for designing the molecular structure of chemical compounds using both artificial neural networks (ANNs) and mixed integer linear programming (MILP). In the framework, we first define a feature vector $f(C)$ of a chemical graph $C$ and construct an ANN that maps $x=f(C)$ to a predicted value $\eta(x)$ of a chemical property $\pi$ for $C$. After this, we formulate an MILP that simulates the computation process of $f(C)$ from $C$ and that of $\eta(x)$ from $x$. Given a target value $y^*$ of the chemical property $\pi$, we infer a chemical graph $C^\dagger$ such that $\eta(f(C^\dagger))=y^*$ by solving the MILP. In this paper, we use linear regression to construct a prediction function $\eta$ instead of ANNs. For this, we derive an MILP formulation that simulates the computation process of a prediction function by linear regression. The results of computational experiments suggest that our method can infer chemical graphs with up to around 50 non-hydrogen atoms.
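To make the inverse-prediction step concrete, the sketch below embeds a linear-regression prediction function into a small MILP (using PuLP) and searches for an integer feature vector whose prediction lands on a target value. The weights, bounds, tolerance, and tie-break objective are illustrative assumptions; the actual framework also encodes, through many more constraints, how $f(C)$ is computed from the chemical graph $C$.

```python
import pulp

# Assumed toy regression eta(x) = w.x + c0 over a 3-dimensional feature vector.
w = [0.8, -0.3, 1.2]
c0 = 2.0
y_target = 5.0          # target property value y*
eps = 0.05              # allowed deviation from the target

prob = pulp.LpProblem("inverse_prediction", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", lowBound=0, upBound=50, cat="Integer")
     for i in range(len(w))]

pred = pulp.lpSum(w[i] * x[i] for i in range(len(w))) + c0
prob += pred - y_target <= eps          # eta(f(C)) must land within eps of y* ...
prob += y_target - pred <= eps          # ... from both sides
prob += pulp.lpSum(x) <= 50             # stand-in for the structural constraints on f(C)
prob += pulp.lpSum(x)                   # tie-break objective: prefer small feature counts

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x])
```

Because the prediction function is linear, it enters the MILP directly as a linear constraint; this is the simplification gained by replacing the ANN with linear regression.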
In this paper, we develop a new algorithm combining the idea of "boosting" with a first-order algorithm to approximately solve a class of (integer) linear programs (LPs) arising in general resource allocation problems. Not only can this algorithm solve LPs directly, it can also be applied to accelerate the Column Generation method. As a direct solver, our algorithm achieves a provable $O(\sqrt{n/K})$ optimality gap, where $n$ is the number of variables and $K$ is the number of data duplications, bearing the same intuition as the boosting algorithm. We use numerical experiments to demonstrate the effectiveness of our algorithm and several variants.
Xiaocheng Li, Yinyu Ye (2019)
We study an online linear programming (OLP) problem under a random input model in which the columns of the constraint matrix, along with the corresponding coefficients in the objective function, are generated i.i.d. from an unknown distribution and revealed sequentially over time. Virtually all pre-existing online algorithms were based on learning the dual optimal solutions/prices of the linear programs (LPs), and their analyses focused on the aggregate objective value and on solving the packing LP, where all coefficients in the constraint matrix and objective are nonnegative. However, two major open questions were: (i) Does the set of LP optimal dual prices learned in the pre-existing algorithms converge to those of the offline LP? (ii) Can the results be extended to general LP problems where the coefficients can be either positive or negative? We resolve these two questions by establishing convergence results for the dual prices under moderate regularity conditions for general LP problems. Specifically, we identify an equivalent form of the dual problem which relates the dual LP to a sample average approximation of a stochastic program. Furthermore, we propose a new type of OLP algorithm, the Action-History-Dependent Learning Algorithm, which improves on previous algorithms' performance by taking into account the past input data as well as the decisions/actions already made. We derive an $O(\log n \log\log n)$ regret bound (under a locally strong convexity and smoothness condition) for the proposed algorithm, against the $O(\sqrt{n})$ bound for typical dual-price learning algorithms, where $n$ is the number of decision variables. Numerical experiments demonstrate the effectiveness of the proposed algorithm and of the action-history-dependent design.
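The dual-price viewpoint in this abstract can be sketched as follows: at every step, dual prices are re-fit on the columns seen so far via the sample-average form of the dual, and an order is accepted only if its reward beats its price-adjusted resource consumption. The scaling of the remaining budget and the exact LP below are assumptions made for the sketch, not the paper's precise Action-History-Dependent Learning Algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def action_history_dependent(r, A, b):
    """Sketch: re-fit dual prices on the observed history at every step."""
    m, n = A.shape
    x = np.zeros(n, dtype=int)
    b_left = b.astype(float).copy()
    p = np.zeros(m)                      # current dual price estimate
    for t in range(n):
        a_t, r_t = A[:, t], r[t]
        # accept only if the reward beats the price-adjusted consumption
        # and the remaining budget still covers the request
        x[t] = 1 if (r_t > a_t @ p and np.all(a_t <= b_left)) else 0
        if x[t]:
            b_left -= a_t
        periods_left = n - t - 1
        if periods_left == 0:
            break
        d = np.maximum(b_left, 0.0) / periods_left   # remaining budget per remaining period
        # Sample-average dual over the first t+1 columns, as an LP in z = [p (m), y (t+1)]:
        #   min d'p + mean(y)   s.t.   y_s >= r_s - a_s'p,   p, y >= 0
        k = t + 1
        c = np.concatenate([d, np.ones(k) / k])
        A_ub = np.hstack([-A[:, :k].T, -np.eye(k)])
        b_ub = -r[:k]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (m + k), method="highs")
        if res.success:
            p = res.x[:m]
    return x
```

The contrast with the one-pass sketch further above is exactly the trade-off the two abstracts describe: re-fitting prices on the history at every step is far more expensive than a single subgradient update, but it lets the decisions adapt to past actions.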
We consider the problem of mapping a logical quantum circuit onto given hardware with limited two-qubit connectivity. We model this problem as an integer linear program, using a network flow formulation with binary variables that includes the initial allocation of qubits and their routing. We consider several cost functions: an approximation of the fidelity of the circuit, its total depth, and a measure of cross-talk, all of which can be incorporated in the model. Numerical experiments on synthetic data and different hardware topologies indicate that the error rate and depth can be optimized simultaneously without significant loss. We test our algorithm on a large number of quantum volume circuits, optimizing for error rate and depth; our algorithm significantly reduces the number of CNOTs compared to Qiskit's default transpiler SABRE, and produces circuits that, when executed on hardware, exhibit higher fidelity.
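A heavily simplified sketch of the placement part of such a model is given below: it assigns logical qubits to physical qubits on a toy 2x2 grid and minimizes a linearized total coupling distance over two-qubit gates, standing in for the error-rate objective. The gate list, the grid, and the linearization are illustrative assumptions and omit the routing, depth, and cross-talk parts of the full network-flow formulation described in the abstract.

```python
import pulp
from itertools import product

# Illustrative instance: 3 logical qubits, a 2x2 physical grid, 3 two-qubit gates.
logical = range(3)
physical = range(4)
gates = [(0, 1), (1, 2), (0, 2)]
dist = {(p, q): abs(p % 2 - q % 2) + abs(p // 2 - q // 2)    # Manhattan distance on the grid
        for p, q in product(physical, physical)}

prob = pulp.LpProblem("qubit_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (logical, physical), cat="Binary")   # x[l][p] = 1: logical l sits on physical p
y = pulp.LpVariable.dicts("y", (range(len(gates)), physical, physical), lowBound=0)

for l in logical:                                   # every logical qubit gets exactly one location
    prob += pulp.lpSum(x[l][p] for p in physical) == 1
for p in physical:                                  # every location hosts at most one logical qubit
    prob += pulp.lpSum(x[l][p] for l in logical) <= 1
for g, (l1, l2) in enumerate(gates):                # y[g][p][q] >= x1 + x2 - 1 linearizes the product x1*x2
    for p, q in product(physical, physical):
        prob += y[g][p][q] >= x[l1][p] + x[l2][q] - 1

# objective: total coupling distance over the gates, a stand-in for the error-rate term
prob += pulp.lpSum(dist[p, q] * y[g][p][q]
                   for g in range(len(gates))
                   for p, q in product(physical, physical))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {l: next(p for p in physical if x[l][p].value() > 0.5) for l in logical}
print(placement)
```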