
Fractional Decomposition Tree Algorithm: A tool for studying the integrality gap of Integer Programs

Posted by: Arash Haddadan
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We present a new algorithm, Fractional Decomposition Tree (FDT), for finding a feasible solution to an integer program (IP) in which all variables are binary. FDT runs in polynomial time and is guaranteed to find a feasible integer solution provided the integrality gap is bounded. The algorithm gives a construction for Carr and Vempala's theorem that any feasible solution to the IP's linear-programming relaxation, when scaled by the instance's integrality gap, dominates a convex combination of feasible solutions. FDT is also a tool for studying the integrality gap of IP formulations. We demonstrate this with experiments studying the integrality gap of two problems: optimally augmenting a tree to a 2-edge-connected graph and finding a minimum-cost 2-edge-connected multi-subgraph (2EC). We also give a simplified algorithm, Dom2IP, that determines more quickly whether an instance has an unbounded integrality gap. We show that FDT's speed and approximation quality compare well to those of feasibility pump on moderate-sized instances of the vertex cover problem. For a particular set of hard-to-decompose fractional 2EC solutions, FDT always gave a better integer solution than the best previous approximation algorithm (Christofides).
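To make the tree construction concrete, here is a minimal Python sketch, assuming SciPy's `linprog`: branch on a fractional coordinate, re-solve the LP relaxation on each side, and weight the two children by the parent's fractional value. This is an illustration only, not the paper's algorithm; FDT additionally prunes each level of the tree to polynomially many nodes and rescales the leaves to obtain the Carr-Vempala dominance guarantee.

```python
# Illustrative tree-of-LPs decomposition for a binary IP
# min { c.x : A_ub x <= b_ub, x in {0,1}^n }  (a sketch, not FDT itself).
import numpy as np
from scipy.optimize import linprog

def solve_lp(c, A_ub, b_ub, fixed):
    """LP relaxation with the variables listed in `fixed` pinned to 0 or 1."""
    bounds = [(fixed[i], fixed[i]) if i in fixed else (0.0, 1.0)
              for i in range(len(c))]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

def decompose(c, A_ub, b_ub, fixed=None, weight=1.0, tol=1e-6):
    """Branch on fractional coordinates; return (weight, integer solution)
    pairs.  On the covering toy below every branch stays feasible, so the
    weights sum to 1."""
    fixed = dict(fixed or {})
    res = solve_lp(c, A_ub, b_ub, fixed)
    if not res.success:
        return []                       # infeasible branch: prune it
    x = res.x
    frac = [i for i in range(len(x)) if tol < x[i] < 1 - tol]
    if not frac:                        # integral leaf = feasible IP solution
        return [(weight, np.rint(x).astype(int))]
    i = frac[0]                         # branch on first fractional coordinate
    return (decompose(c, A_ub, b_ub, {**fixed, i: 0}, weight * (1 - x[i]), tol)
          + decompose(c, A_ub, b_ub, {**fixed, i: 1}, weight * x[i], tol))

# Toy instance: vertex cover of a triangle, min x0+x1+x2 subject to
# x_i + x_j >= 1 per edge (rewritten as -x_i - x_j <= -1 for linprog).
c = np.ones(3)
A_ub = -np.array([[1.0, 1, 0], [0, 1, 1], [1, 0, 1]])
b_ub = -np.ones(3)
for w, z in decompose(c, A_ub, b_ub):
    print(f"{w:.2f} * {z}")
```

On this toy instance the root LP optimum is $x^* = (1/2, 1/2, 1/2)$, and the sketch prints a convex combination of integral vertex covers whose weights sum to 1.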




Read also

Inspired by the decomposition in the hybrid quantum-classical optimization algorithm we introduced in arXiv:1902.04215, we propose here a new (fully classical) approach to solving certain non-convex integer programs using Graver bases. This method is well suited when (a) the constraint matrix $A$ has a special structure so that its Graver basis can be computed systematically, (b) several feasible solutions can also be constructed easily and (c) the objective function can be viewed as many convex functions quilted together. Classes of problems that satisfy these conditions include Cardinality Boolean Quadratic Problems (CBQP), Quadratic Semi-Assignment Problems (QSAP) and Quadratic Assignment Problems (QAP). Our Graver Augmented Multi-seed Algorithm (GAMA) utilizes augmentation along Graver basis elements (the improvement direction is obtained by comparing objective function values) from these multiple initial feasible solutions. We compare our approach with a best-in-class commercially available solver (Gurobi). Sensitivity analysis indicates that the rate at which GAMA slows down as the problem size increases is much lower than that of Gurobi. We find that for several instances of practical relevance, GAMA not only finds the optimal solution vastly faster (by two to three orders of magnitude), but in several cases finds optimal solutions within minutes when the commercial solver cannot do so in 4 or 10 hours (depending on the problem class).
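As a rough illustration of the augmentation loop described above, the following hedged Python sketch runs greedy Graver-style augmentation from several feasible seeds and keeps the best local optimum. The problem-specific part, computing the Graver basis itself, is assumed given here as `moves`; the toy instance, names, and interface are hypothetical, not GAMA's actual implementation.

```python
# Multi-seed greedy augmentation over a given move set (Graver-style sketch).
import numpy as np

def augment(x, moves, f, feasible):
    """Apply the best improving feasible move until none remains."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        best, best_val = None, f(x)
        for g in moves:
            y = x + g
            if feasible(y) and f(y) < best_val:
                best, best_val = y, f(y)
        if best is not None:
            x, improved = best, True
    return x

def gama_like(seeds, moves, f, feasible):
    """Augment from several feasible seeds; return the best local optimum."""
    return min((augment(s, moves, f, feasible) for s in seeds), key=f)

# Toy: minimize a "quilted" nonconvex f over {x >= 0 : x0 + x1 = 6};
# the Graver basis of A = [1 1] is {(1,-1), (-1,1)}.
f = lambda x: min((x[0] - 1) ** 2, (x[0] - 5) ** 2) + 0.1 * x[1]
moves = [np.array([1, -1]), np.array([-1, 1])]
feasible = lambda x: np.all(x >= 0)
seeds = [np.array([0, 6]), np.array([6, 0])]
print(gama_like(seeds, moves, f, feasible))   # the second seed reaches (5, 1)
```

On this toy, the first seed gets stuck at a worse local optimum, while the second reaches the global one, which is exactly the motivation for augmenting from multiple seeds.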
Large Neighborhood Search (LNS) is a combinatorial optimization heuristic that starts with an assignment of values for the variables to be optimized, and iteratively improves it by searching a large neighborhood around the current assignment. In this paper we consider a learning-based LNS approach for mixed integer programs (MIPs). We train a Neural Diving model to represent a probability distribution over assignments, which, together with an off-the-shelf MIP solver, generates an initial assignment. Formulating the subsequent search steps as a Markov Decision Process, we train a Neural Neighborhood Selection policy to select a search neighborhood at each step, which is searched using a MIP solver to find the next assignment. The policy network is trained using imitation learning. We propose a target policy for imitation that, given enough compute resources, is guaranteed to select the neighborhood containing the optimal next assignment amongst all possible choices for the neighborhood of a specified size. Our approach matches or outperforms all the baselines on five real-world MIP datasets with large-scale instances from diverse applications, including two production applications at Google. It achieves $2\times$ to $37.8\times$ better average primal gap than the best baseline on three of the datasets at large running times.
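The overall loop is a standard LNS skeleton, sketched below in Python: `select_neighborhood` stands in for the learned Neural Neighborhood Selection policy and `solve_restricted` for the off-the-shelf MIP solver call; both are hypothetical placeholders rather than the paper's actual interfaces.

```python
# Generic LNS skeleton: repeatedly free a subset of variables (the
# "neighborhood") and re-optimize them with the rest of the assignment fixed.
import random

def lns(initial, objective, select_neighborhood, solve_restricted, steps=100):
    best = current = initial
    for _ in range(steps):
        unfixed = select_neighborhood(current)           # policy: vars to free
        candidate = solve_restricted(current, unfixed)   # MIP solve over them
        if objective(candidate) <= objective(current):   # keep improving moves
            current = candidate
        if objective(current) < objective(best):
            best = current
    return best

# Toy usage: minimize the number of ones in a 0/1 vector; the stand-in
# "solver" simply zeroes out the freed coordinates.
objective = sum
select_neighborhood = lambda x: random.sample(range(len(x)), 2)
def solve_restricted(x, unfixed):
    return [0 if i in unfixed else v for i, v in enumerate(x)]
print(lns([1] * 8, objective, select_neighborhood, solve_restricted))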
Error-bounded lossy compression is becoming more and more important to today's extreme-scale HPC applications because of the ever-increasing volume of data they generate; it has been widely used for in-situ visualization, data stream intensity reduction, storage reduction, I/O performance improvement, checkpoint/restart acceleration, memory footprint reduction, etc. Although many works have optimized the ratio, quality, and performance of different error-bounded lossy compressors, no existing work attempts to systematically understand the impact of lossy compression errors on HPC applications due to error propagation. In this paper, we propose and develop a lossy compression fault injection tool, called LCFI. To the best of our knowledge, this is the first fault injection tool that helps both lossy compressor developers and users systematically and comprehensively understand the impact of lossy compression errors on HPC programs. The contributions of this work are threefold: (1) We propose an efficient approach to inject lossy compression errors according to a statistical analysis of compression errors for different state-of-the-art compressors. (2) We build a fault injector that is highly applicable, customizable, and easy to use in generating top-down comprehensive results, and we demonstrate the use of LCFI. (3) We evaluate LCFI on four representative HPC benchmarks with different abstracted fault models and make several observations about error propagation and its impact on program outputs.
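The injection idea can be mimicked in a few lines of Python: perturb an array with noise that respects a pointwise error bound, run a toy computation, and measure how the error propagates to the output. This is a hedged sketch with a hypothetical interface, not LCFI itself, which instruments real HPC programs and fits error distributions to actual compressors; uniform noise is used here only as a stand-in error model.

```python
# Sketch: inject a lossy-compression-like error and observe its propagation.
import numpy as np

def inject_compression_error(data, error_bound, rng):
    """Perturb every element by noise within +/- error_bound, mimicking the
    pointwise guarantee of an error-bounded lossy compressor."""
    noise = rng.uniform(-error_bound, error_bound, size=data.shape)
    return data + noise

def stencil(a, iters=100):
    """Toy 'application': a 1-D smoothing stencil sweep."""
    for _ in range(iters):
        a = 0.5 * a + 0.25 * (np.roll(a, 1) + np.roll(a, -1))
    return a

rng = np.random.default_rng(0)
clean = rng.random(1024)
faulty = inject_compression_error(clean, error_bound=1e-3, rng=rng)
print("max output deviation:", np.abs(stencil(clean) - stencil(faulty)).max())
```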
The canonical tree-decomposition theorem, given by Robertson and Seymour in their seminal graph minors series, turns out to be one of the most important tools in structural and algorithmic graph theory. In this paper, we provide the canonical tree-decomposition theorem for digraphs. More precisely, we construct directed tree-decompositions of digraphs that distinguish all their tangles of order $k$, for any fixed integer $k$, in polynomial time. As an application of this canonical tree-decomposition theorem, we provide the following result for the directed disjoint paths problem: for every fixed $k$ there is a polynomial-time algorithm which, on input $G$ and source and terminal vertices $(s_1, t_1), \dots, (s_k, t_k)$, either (1) determines that there is no set of pairwise vertex-disjoint paths connecting each source $s_i$ to its terminal $t_i$, or (2) finds a half-integral solution, i.e., outputs paths $P_1, \dots, P_k$ such that $P_i$ links $s_i$ to $t_i$ and every vertex of the graph is contained in at most two paths. Given known hardness results for the directed disjoint paths problem, our result cannot be improved for general digraphs, neither to fixed-parameter tractability nor to fully vertex-disjoint directed paths. As far as we are aware, this is the first tractable result for the $k$-disjoint paths problem on general digraphs. We expect more applications of our canonical tree-decomposition to directed problems.
Tree projections provide a mathematical framework that encompasses all the various (purely) structural decomposition methods that have been proposed in the literature to single out classes of nearly-acyclic (hyper)graphs, such as the tree decomposition method, which is the most powerful decomposition method on graphs, and the (generalized) hypertree decomposition method, which is its natural counterpart on arbitrary hypergraphs. The paper analyzes this framework, focusing in particular on minimal tree projections, that is, on tree projections without useless redundancies. First, it is shown that minimal tree projections enjoy a number of properties that are usually required of normal-form decompositions in various structural decomposition methods. In particular, they enjoy the same kind of connection properties as (minimal) tree decompositions of graphs; this result is tight in light of the negative answer provided to the open question of whether they enjoy a slightly stronger notion of connection property, defined to speed up the computation of hypertree decompositions. Second, it is shown that tree projections admit a natural game-theoretic characterization in terms of the Captain and Robber game. In this game, as in the Robber and Cops game characterizing tree decompositions, the existence of winning strategies implies the existence of monotone ones. As a special case, the Captain and Robber game can be used to characterize the generalized hypertree decomposition method, for which such a game-theoretic characterization was missing and asked for. Besides their theoretical interest, these results have immediate algorithmic applications, both in the general setting and for structural decomposition methods that can be recast in terms of tree projections.