
Sequential Design of Experiments via Linear Programming

Posted by Kamesh Munagala
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





The celebrated multi-armed bandit problem in decision theory models the basic trade-off between exploration, or learning about the state of a system, and exploitation, or utilizing the system. In this paper we study the variant of the multi-armed bandit problem where the exploration phase involves costly experiments and occurs before the exploitation phase, and where each play of an arm during the exploration phase updates a prior belief about the arm. The problem of finding an inexpensive exploration strategy to optimize a certain exploitation objective is NP-Hard even when a single play reveals all information about an arm and all exploration steps cost the same. We provide the first polynomial-time constant-factor approximation algorithm for this class of problems. We show that this framework also generalizes several problems of interest studied in the context of data acquisition in sensor networks. Our analysis also extends to switching and setup costs, and to concave utility objectives. Our solution approach is via a novel linear program rounding technique based on stochastic packing. In addition to yielding exploration policies whose performance is within a small constant factor of the adaptive optimal policy, a nice feature of this approach is that the resulting policies explore the arms sequentially without revisiting any arm. Sequentiality is a well-studied concept in decision theory, and is very desirable in domains where multiple explorations can be conducted in parallel, for instance, in the sensor network context.
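A minimal Python sketch of the explore-then-exploit setting described above, under simplifying assumptions that are not from the paper: each arm has a Beta prior on its reward probability, a single exploration play fully reveals that probability, every exploration costs one unit of budget, and exploitation commits to the best arm observed. It only illustrates the problem; the paper's policy is an LP-rounding strategy based on stochastic packing, not the naive fixed-order probing below.

# Toy simulation of budgeted exploration followed by exploitation.
# Assumptions are illustrative, not the paper's model or algorithm.
import random

def simulate(priors, budget, trials=10000):
    total = 0.0
    for _ in range(trials):
        # True success probabilities drawn from the per-arm Beta priors.
        truth = [random.betavariate(a, b) for a, b in priors]
        # Naive sequential exploration: probe arms in a fixed order,
        # never revisiting any arm, until the budget is exhausted.
        explored = truth[:budget]
        # Exploitation: commit to the best arm seen during exploration.
        total += max(explored) if explored else 0.0
    return total / trials

priors = [(1, 1), (2, 5), (5, 2), (1, 3)]   # hypothetical Beta priors
for budget in range(1, len(priors) + 1):
    print(budget, round(simulate(priors, budget), 3))

Larger budgets buy a better exploited arm; the paper's question is how to spend a limited exploration budget near-optimally when plays only update posteriors rather than revealing values exactly.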




Read also

Under the Strong Exponential Time Hypothesis, an integer linear program with $n$ Boolean-valued variables and $m$ equations cannot be solved in $c^n$ time for any constant $c < 2$. If the domain of the variables is relaxed to $[0,1]$, the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of $0$-$1$ and linear programming. Specifically, for any subset (finite union of intervals) $E \subset [0,1]$ containing $\{0,1\}$, we give a random-walk based algorithm with runtime $O_E((2-\text{measure}(E))^n\text{poly}(n,m))$ that finds a solution in $E^n$ to any $n$-variable linear program with $m$ constraints that is feasible over $\{0,1\}^n$. Note that as $E$ expands from $\{0,1\}$ to $[0,1]$, the runtime improves smoothly from $2^n$ to polynomial. Taking $E = [0,1/k) \cup (1-1/k,1]$ in our result yields as a corollary a randomized $(2-2/k)^{n}\text{poly}(n)$ time algorithm for $k$-SAT. While our approach has some high-level resemblance to Schöning's beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time-varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed based on the minimax theorem). Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (like $k$-SAT, $1$-in-$3$-SAT, NAE $k$-SAT) that admits so-called 'threshold partial polymorphisms'.
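To make the $(2-2/k)^{n}\text{poly}(n)$ regime concrete, here is a minimal Python sketch of the classical Schöning-style restart-and-flip walk for $k$-SAT. It is not the paper's LP-guided walk (which uses a multiplicative potential and a time-varying, LP-computed flip distribution); it only shows the kind of random walk the abstract compares against.

# Classical Schoening-style random walk for k-SAT (illustrative only).
import random

def schoening(clauses, n, restarts=1000):
    # clauses: list of clauses, each a list of nonzero ints; literal v means
    # variable |v|, satisfied when assign[|v|] == (v > 0). Index 0 is unused.
    for _ in range(restarts):
        assign = [random.choice([False, True]) for _ in range(n + 1)]
        for _ in range(3 * n):                       # local-search phase
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign                        # satisfying assignment
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]  # flip one variable in it
    return None                                      # give up after restarts

# Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(schoening([[1, 2], [-1, 3], [-2, -3]], n=3))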
Mohit Singh, 2019
We give a characterization result for the integrality gap of the natural linear programming relaxation for the vertex cover problem. We show that the integrality gap of the standard linear programming relaxation for any graph $G$ equals $\left(2-\frac{2}{\chi^f(G)}\right)$, where $\chi^f(G)$ denotes the fractional chromatic number of $G$.
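For reference, the standard LP relaxation referred to above is $\min \sum_{v \in V} x_v$ subject to $x_u + x_v \ge 1$ for every edge $(u,v) \in E$ and $0 \le x_v \le 1$ for every vertex $v$; the integrality gap is the worst-case ratio between the minimum integral vertex cover and the optimum of this relaxation, which the result above pins down as exactly $2-\frac{2}{\chi^f(G)}$.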
Solving linear programs is often a challenging task in distributed settings. While there are good algorithms for solving packing and covering linear programs in a distributed manner (Kuhn et al. 2006), this is essentially the only class of linear programs for which such an algorithm is known. In this work we provide a distributed algorithm for solving a different class of convex programs which we call distance-bounded network design convex programs. These can be thought of as relaxations of network design problems in which the connectivity requirement includes a distance constraint (most notably, graph spanners). Our algorithm runs in $O((D/\epsilon)\log n)$ rounds in the $\mathcal{LOCAL}$ model and finds a $(1+\epsilon)$-approximation to the optimal LP solution for any $0 < \epsilon \leq 1$, where $D$ is the largest distance constraint. While solving linear programs in a distributed setting is interesting in its own right, this class of convex programs is particularly important because solving them is often a crucial step when designing approximation algorithms. Hence we almost immediately obtain new and improved distributed approximation algorithms for a variety of network design problems, including Basic $3$- and $4$-Spanner, Directed $k$-Spanner, Lowest Degree $k$-Spanner, and Shallow-Light Steiner Network Design with a spanning demand graph. Our algorithms do not require any heavy computation and essentially match the best-known centralized approximation algorithms, while previous approaches which do not use heavy computation give approximations which are worse than the best-known centralized bounds.
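As a rough illustration (notation ours, not the paper's), a representative member of this class in the spirit of the standard path-based spanner relaxation is: minimize $\sum_{e} c_e x_e$ subject to $\sum_{p \in \mathcal{P}^{\le D}_{u,v}} f_p \ge 1$ for every demand pair $(u,v)$ and $\sum_{p \in \mathcal{P}^{\le D}_{u,v}\,:\, e \in p} f_p \le x_e$ for every edge $e$ and demand pair $(u,v)$, with $x, f \ge 0$, where $\mathcal{P}^{\le D}_{u,v}$ denotes the set of $u$-$v$ paths of length at most $D$. The distance bound $D$ in the path sets is what separates this class from ordinary packing and covering LPs.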
We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSPs). More concretely, we show for every 2-CSP instance $\mathcal{I}$ a rounding algorithm for $r$ rounds of the Lasserre SDP hierarchy for $\mathcal{I}$ that obtains an integral solution that is at most $\varepsilon$ worse than the relaxation's value (normalized to lie in $[0,1]$), as long as $r > k\cdot\mathrm{rank}_{\geq \theta}(\mathcal{I})/\mathrm{poly}(\varepsilon)$, where $k$ is the alphabet size of $\mathcal{I}$, $\theta=\mathrm{poly}(\varepsilon/k)$, and $\mathrm{rank}_{\geq \theta}(\mathcal{I})$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $\mathcal{I}$. In the case that $\mathcal{I}$ is a unique games instance, the threshold $\theta$ is only a polynomial in $\varepsilon$, and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for every instance. In particular our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the $n^{O(r)}$ constraints specified by the $r$-th level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)}\mathrm{poly}(n)$.
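The spectral quantity driving the round bound above is easy to compute explicitly. Below is a small Python helper, assuming the constraint graph is given as a NumPy adjacency matrix, that evaluates $\mathrm{rank}_{\geq\theta}$, the number of eigenvalues of the normalized adjacency matrix larger than $\theta$ (names and setup are ours, for illustration only).

# Count eigenvalues of D^{-1/2} A D^{-1/2} that are larger than theta.
import numpy as np

def rank_above_threshold(adj, theta):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    normalized = d_inv_sqrt @ adj @ d_inv_sqrt     # normalized adjacency matrix
    eigenvalues = np.linalg.eigvalsh(normalized)   # symmetric => real spectrum
    return int(np.sum(eigenvalues > theta))

# Example: the 4-cycle has normalized spectrum {1, 0, 0, -1}.
cycle4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=float)
print(rank_above_threshold(cycle4, 0.5))           # prints 1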
In this paper we introduce the transductive linear bandit problem: given a set of measurement vectors $\mathcal{X}\subset \mathbb{R}^d$, a set of items $\mathcal{Z}\subset \mathbb{R}^d$, a fixed confidence $\delta$, and an unknown vector $\theta^{\ast}\in \mathbb{R}^d$, the goal is to infer $\mathrm{argmax}_{z\in \mathcal{Z}}\, z^\top\theta^\ast$ with probability $1-\delta$ by making as few sequentially chosen noisy measurements of the form $x^\top\theta^{\ast}$ as possible. When $\mathcal{X}=\mathcal{Z}$, this setting generalizes linear bandits, and when $\mathcal{X}$ is the standard basis vectors and $\mathcal{Z}\subset \{0,1\}^d$, combinatorial bandits. Such a transductive setting naturally arises when the set of measurement vectors is limited due to factors such as availability or cost. As an example, in drug discovery the compounds and dosages $\mathcal{X}$ a practitioner may be willing to evaluate in the lab in vitro due to cost or safety reasons may differ vastly from those compounds and dosages $\mathcal{Z}$ that can be safely administered to patients in vivo. Alternatively, in recommender systems for books, the set of books $\mathcal{X}$ a user is queried about may be restricted to well-known best-sellers even though the goal might be to recommend more esoteric titles $\mathcal{Z}$. In this paper, we provide instance-dependent lower bounds for the transductive setting, an algorithm that matches these up to logarithmic factors, and an evaluation. In particular, we provide the first non-asymptotic algorithm for linear bandits that nearly achieves the information-theoretic lower bound.
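A toy NumPy sketch of the measurement model above, under assumptions of our own choosing: noisy responses $x^\top\theta^\ast + \text{noise}$ are collected for uniformly sampled measurement vectors, $\theta^\ast$ is estimated by least squares, and the estimate is used to pick $\mathrm{argmax}_{z\in\mathcal{Z}}\, z^\top\hat\theta$. The paper's algorithm instead chooses measurements adaptively to match an instance-dependent lower bound; this only illustrates the setting.

# Illustrative transductive measurement model (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)                      # unknown parameter
X = rng.normal(size=(20, d))                         # allowed measurement vectors
Z = rng.normal(size=(50, d))                         # items we may recommend

n = 200                                              # number of noisy measurements
idx = rng.integers(0, len(X), size=n)                # uniform (non-adaptive) choice
A = X[idx]
y = A @ theta_star + rng.normal(scale=0.5, size=n)   # x^T theta* + Gaussian noise

theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares estimate
# With enough measurements, the recommended item usually matches the true best one.
print(np.argmax(Z @ theta_hat) == np.argmax(Z @ theta_star))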