
Bridging between 0/1 and Linear Programming via Random Walks

Added by Joshua Brakensiek
Publication date: 2019
Language: English





Under the Strong Exponential Time Hypothesis, an integer linear program with $n$ Boolean-valued variables and $m$ equations cannot be solved in $c^n$ time for any constant $c < 2$. If the domain of the variables is relaxed to $[0,1]$, the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of $0$-$1$ and linear programming. Specifically, for any subset (finite union of intervals) $E \subset [0,1]$ containing $\{0,1\}$, we give a random-walk based algorithm with runtime $O_E((2-\text{measure}(E))^n \text{poly}(n,m))$ that finds a solution in $E^n$ to any $n$-variable linear program with $m$ constraints that is feasible over $\{0,1\}^n$. Note that as $E$ expands from $\{0,1\}$ to $[0,1]$, the runtime improves smoothly from $2^n$ to polynomial. Taking $E = [0,1/k) \cup (1-1/k,1]$ in our result yields as a corollary a randomized $(2-2/k)^{n}\text{poly}(n)$ time algorithm for $k$-SAT. While our approach bears some high-level resemblance to Schöning's beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time-varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed to exist by the minimax theorem). Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (like $k$-SAT, $1$-in-$3$-SAT, NAE $k$-SAT) that admits so-called "threshold partial polymorphisms".
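For intuition, here is a minimal Python sketch of the classical Schöning random walk for 3-SAT, which the abstract cites only as a high-level analogue; the paper's general algorithm replaces the uniform flip with a time-varying step distribution computed via an LP at each step and tracks progress with a multiplicative potential, none of which is shown here.

```python
import random

def schoening_3sat(clauses, n, tries=None):
    """Schoening-style random walk for 3-SAT.

    clauses: list of 3-tuples of nonzero ints; literal v means x_|v| = (v > 0).
    Roughly (4/3)^n random restarts suffice in expectation (up to poly factors)
    on satisfiable instances.
    """
    tries = tries or int(1.5 * (4 / 3) ** n) + 1
    for _ in range(tries):
        # Start from a uniformly random Boolean assignment (x[0] is unused).
        x = [random.randint(0, 1) for _ in range(n + 1)]
        for _ in range(3 * n):  # walk for 3n steps before restarting
            unsat = [c for c in clauses
                     if not any((x[abs(v)] == 1) == (v > 0) for v in c)]
            if not unsat:
                return x[1:]  # satisfying assignment found
            # Flip a uniformly random variable of a random unsatisfied clause.
            v = abs(random.choice(random.choice(unsat)))
            x[v] ^= 1
    return None  # probably unsatisfiable

# Example: (x1 or x2 or ~x3) and (~x1 or x2 or x3)
print(schoening_3sat([(1, 2, -3), (-1, 2, 3)], n=3))
```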



Related research

We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSPs). More concretely, we show for every 2-CSP instance $I$ a rounding algorithm for $r$ rounds of the Lasserre SDP hierarchy for $I$ that obtains an integral solution that is at most $\epsilon$ worse than the relaxation's value (normalized to lie in $[0,1]$), as long as $r > k \cdot \mathrm{rank}_{\geq \theta}(I)/\mathrm{poly}(\epsilon)$, where $k$ is the alphabet size of $I$, $\theta = \mathrm{poly}(\epsilon/k)$, and $\mathrm{rank}_{\geq \theta}(I)$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $I$. In the case that $I$ is a unique games instance, the threshold $\theta$ is only a polynomial in $\epsilon$ and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for every instance. In particular, our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires fewer than the $n^{O(r)}$ constraints specified by the $r$-th level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)}\mathrm{poly}(n)$.
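The round bound above depends only on the high end of the constraint graph's spectrum. A small sketch (assuming NumPy and a symmetric 0/1 adjacency matrix with no isolated vertices) of computing the threshold rank $\mathrm{rank}_{\geq \theta}$ of the normalized adjacency matrix:

```python
import numpy as np

def rank_geq_theta(A, theta):
    """Count eigenvalues larger than theta of the normalized adjacency
    matrix D^{-1/2} A D^{-1/2}; this threshold rank controls the number
    of Lasserre rounds in the bound quoted above."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    M = D_inv_sqrt @ A @ D_inv_sqrt
    return int(np.sum(np.linalg.eigvalsh(M) > theta))

# Example: the 4-cycle has normalized eigenvalues {1, 0, 0, -1}.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(rank_geq_theta(A, 0.5))  # -> 1
```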
The celebrated multi-armed bandit problem in decision theory models the basic trade-off between exploration, or learning about the state of a system, and exploitation, or utilizing the system. In this paper we study the variant of the multi-armed bandit problem where the exploration phase involves costly experiments and occurs before the exploitation phase, and where each play of an arm during the exploration phase updates a prior belief about the arm. The problem of finding an inexpensive exploration strategy to optimize a certain exploitation objective is NP-hard even when a single play reveals all information about an arm and all exploration steps cost the same. We provide the first polynomial-time constant-factor approximation algorithm for this class of problems. We show that this framework also generalizes several problems of interest studied in the context of data acquisition in sensor networks. Our analyses also extend to switching and setup costs, and to concave utility objectives. Our solution approach is via a novel linear program rounding technique based on stochastic packing. In addition to yielding exploration policies whose performance is within a small constant factor of the adaptive optimal policy, a nice feature of this approach is that the resulting policies explore the arms sequentially without revisiting any arm. Sequentiality is a well-studied concept in decision theory, and is very desirable in domains where multiple explorations can be conducted in parallel, for instance, in the sensor network context.
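To fix the setting (this is only a naive baseline illustrating the explore-then-exploit structure, not the paper's LP-rounding approximation algorithm; the Beta/Bernoulli model, per-play cost, and budget are assumptions for the sketch):

```python
import random

def explore_then_exploit(priors, cost, budget):
    """Naive baseline: spend the exploration budget uniformly across arms,
    update Beta posteriors from Bernoulli plays, then commit to the arm
    with the highest posterior mean for the exploitation phase."""
    arms = [list(p) for p in priors]              # [alpha, beta] per arm
    true_means = [random.random() for _ in arms]  # hidden arm qualities
    spent = 0.0
    while spent + cost <= budget:
        for i, (a, b) in enumerate(arms):
            if spent + cost > budget:
                break
            reward = random.random() < true_means[i]  # one costly play
            arms[i] = [a + reward, b + (1 - reward)]
            spent += cost
    # Exploit: best posterior mean alpha / (alpha + beta).
    best = max(range(len(arms)), key=lambda i: arms[i][0] / sum(arms[i]))
    return best, spent

print(explore_then_exploit([(1, 1)] * 3, cost=1.0, budget=9.0))
```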
For a graph $G$ on $n$ vertices, naively sampling the position of a random walk at time $t$ requires $\Omega(t)$ work. We desire local access algorithms supporting $\text{position}(G,s,t)$ queries, which return the position of a random walk from some start vertex $s$ at time $t$, where the joint distribution of returned positions is $1/\text{poly}(n)$-close in $\ell_1$ distance to the uniform distribution over such walks. We first give an algorithm for local access to walks on undirected regular graphs with $\widetilde{O}(\frac{1}{1-\lambda}\sqrt{n})$ runtime per query, where $\lambda$ is the second-largest eigenvalue in absolute value. Since random $d$-regular graphs are expanders with high probability, this gives an $\widetilde{O}(\sqrt{n})$ algorithm for $G(n,d)$, which improves on the naive method for small numbers of queries. We then prove that any algorithm with sub-constant error given probe access to random $d$-regular graphs must have runtime $\Omega(\sqrt{n}/\log(n))$ per query in expectation, obtaining a nearly matching lower bound. We further show an $\Omega(n^{1/4})$ runtime-per-query lower bound even with an oblivious adversary (i.e., when the query sequence is fixed in advance). We then show that for families of graphs with additional group-theoretic structure, dramatically better results can be achieved. We give local access to walks on small-degree abelian Cayley graphs, including cycles and hypercubes, with runtime $\text{polylog}(n)$ per query. This also allows for efficient local access to walks on $\text{polylog}(n)$-degree expanders. We extend our results to graphs constructed using the tensor product (giving local access to walks on degree-$n^\epsilon$ graphs for any $\epsilon \in (0,1]$) and the Cartesian product.
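For the cycle $C_n$, the simplest abelian Cayley graph in the statement above, a single $\text{position}$ query can be answered without simulating $t$ steps, since the displacement is a sum of i.i.d. $\pm 1$ steps. A sketch assuming NumPy, answering one independent query and ignoring the bookkeeping needed to keep many queries jointly consistent:

```python
import numpy as np

rng = np.random.default_rng()

def position_on_cycle(n, s, t):
    """Sample the time-t position of a simple random walk on the n-cycle
    started at s using O(1) random draws instead of t steps: with R of the
    t steps going right, R ~ Binomial(t, 1/2), the net displacement is
    R - (t - R) = 2R - t."""
    right_steps = rng.binomial(t, 0.5)
    return (s + 2 * right_steps - t) % n

print(position_on_cycle(n=1000, s=0, t=10**9))
```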
We introduce the combinatorial optimization problem Time Disjoint Walks (TDW), which has applications in collision-free routing of discrete objects (e.g., autonomous vehicles) over a network. This problem takes as input a digraph $G$ with positive integer arc lengths, and $k$ pairs of vertices that each represent a trip demand from a source to a destination. The goal is to find a walk and delay for each demand so that no two trips occupy the same vertex at the same time, and so that a min-max or min-sum objective over the trip durations is realized. We focus here on the min-sum variant of Time Disjoint Walks, although most of our results carry over to the min-max case. We restrict our study to various subclasses of DAGs, and observe that there is a sharp complexity boundary between Time Disjoint Walks on oriented stars and on oriented stars with the central vertex replaced by a path. In particular, we present a poly-time algorithm for min-sum and min-max TDW on the former, but show that min-sum TDW on the latter is NP-hard. Our main hardness result is that for DAGs with max degree $\Delta \leq 3$, min-sum Time Disjoint Walks is APX-hard. We present a natural approximation algorithm for the same class, and provide a tight analysis. In particular, we prove that it achieves an approximation ratio of $\Theta(k/\log k)$ on bounded-degree DAGs, and $\Theta(k)$ on DAGs and bounded-degree digraphs.
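To see why the oriented-star case is tractable, note that every trip passes through the center, so (with distinct leaf endpoints) the center is the only possible conflict point. The following greedy is a hypothetical illustration under those simplifying assumptions, not necessarily the paper's algorithm; `in_lengths` (the arc length from each source leaf to the center) is a name introduced here for the sketch:

```python
def min_sum_tdw_star(in_lengths):
    """Hypothetical greedy for min-sum TDW on an oriented star: trip i
    reaches the center at time delay_i + in_lengths[i], and center-arrival
    times just need to be pairwise distinct.  Process trips by increasing
    in-arc length, giving each the earliest free integer arrival slot at
    or after its release time in_lengths[i].  Returns per-trip delays."""
    order = sorted(range(len(in_lengths)), key=lambda i: in_lengths[i])
    taken, delays = set(), [0] * len(in_lengths)
    for i in order:
        slot = in_lengths[i]          # earliest feasible center arrival
        while slot in taken:
            slot += 1
        taken.add(slot)
        delays[i] = slot - in_lengths[i]
    return delays

print(min_sum_tdw_star([1, 1, 3, 1]))  # -> [0, 1, 1, 2]
```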
Since the elimination algorithm of Fourier and Motzkin, many different methods have been developed for solving linear programs. When analyzing the time complexity of LP algorithms, it is typically either assumed that calculations are performed exactly and bounds are derived on the number of elementary arithmetic operations necessary, or the cost of all arithmetic operations is considered through a bit-complexity analysis. Yet in practice, implementations typically use limited-precision arithmetic. In this paper we introduce the idea of a limited-precision LP oracle and study how such an oracle could be used within a larger framework to compute exact-precision solutions to LPs. Under mild assumptions, it is shown that a polynomial number of calls to such an oracle, together with a polynomial number of bit operations, suffices to compute an exact solution to an LP. This work provides a foundation for understanding and analyzing the behavior of the methods that are currently most effective in practice for solving LPs exactly.
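One concrete way a limited-precision oracle can drive exact solving is iterative refinement: solve in floating point, compute the residual exactly in rational arithmetic, rescale it, and ask the oracle for a correction. The abstract does not spell out its framework, so the following is a sketch of that general technique rather than the paper's method, using scipy.optimize.linprog as the limited-precision oracle for $\min c^\top x$ s.t. $Ax = b$, $x \geq 0$:

```python
from fractions import Fraction
import numpy as np
from scipy.optimize import linprog

def refine_lp(c, A, b, rounds=5):
    """Iterative-refinement sketch: c, A, b hold Fractions (exact data);
    scipy's float solver plays the limited-precision oracle.  Each round
    computes the residual b - Ax exactly, scales it up, asks the oracle
    for a correction, and folds it back in exactly, so the residual of
    the rational iterate shrinks geometrically."""
    m, n = len(A), len(c)
    x = [Fraction(0)] * n
    scale = Fraction(1)
    for _ in range(rounds):
        r = [scale * (b[i] - sum(A[i][j] * x[j] for j in range(n)))
             for i in range(m)]                      # exact, then rescaled
        res = linprog(c=[float(v) for v in c],
                      A_eq=np.array(A, dtype=float),
                      b_eq=np.array(r, dtype=float),
                      # correction may be negative only while x stays >= 0
                      bounds=[(-float(scale * x[j]), None) for j in range(n)],
                      method="highs")
        if not res.success:
            break
        # Fold the float correction back into the exact rational iterate.
        x = [x[j] + Fraction(float(res.x[j])).limit_denominator(10**12) / scale
             for j in range(n)]
        scale *= 2**16                               # tighten next round
    return x

# Example: min x0 + x1  s.t.  x0 + x1 = 1,  x0 - x1 = 1/3.
c = [Fraction(1), Fraction(1)]
A = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(-1)]]
b = [Fraction(1), Fraction(1, 3)]
print([str(v) for v in refine_lp(c, A, b)])  # close to ['2/3', '1/3']
```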