
Round-Competitive Algorithms for Uncertainty Problems with Parallel Queries

Posted by Murilo Santos de Lima
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The area of computing with uncertainty considers problems where some information about the input elements is uncertain, but can be obtained using queries. For example, instead of the weight of an element, we may be given an interval that is guaranteed to contain the weight, and a query can be performed to reveal the weight. While previous work has considered models where queries are asked either sequentially (adaptive model) or all at once (non-adaptive model), and the goal is to minimize the number of queries that are needed to solve the given problem, we propose and study a new model where $k$ queries can be made in parallel in each round, and the goal is to minimize the number of query rounds. We use competitive analysis and present upper and lower bounds on the number of query rounds required by any algorithm in comparison with the optimal number of query rounds. Given a set of uncertain elements and a family of $m$ subsets of that set, we present an algorithm for determining the value of the minimum of each of the subsets that requires at most $(2+\varepsilon) \cdot \mathrm{opt}_k + \mathrm{O}\left(\frac{1}{\varepsilon} \cdot \lg m\right)$ rounds for every $0<\varepsilon<1$, where $\mathrm{opt}_k$ is the optimal number of rounds, as well as nearly matching lower bounds. For the problem of determining the $i$-th smallest value and identifying all elements with that value in a set of uncertain elements, we give a $2$-round-competitive algorithm. We also show that the problem of sorting a family of sets of uncertain elements admits a $2$-round-competitive algorithm and this is the best possible.
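
To make the query model concrete, here is a minimal Python sketch of the round/query bookkeeping for finding the minimum of a single uncertain set (a naive greedy, not the paper's $(2+\varepsilon)$-round-competitive algorithm; the names min_rounds and reveal are illustrative):

    def min_rounds(intervals, reveal, k):
        """Each element i has an uncertainty interval intervals[i] = (lo, hi);
        reveal(i) returns its exact value; up to k queries run per round.
        Returns (minimum value, number of rounds used)."""
        unqueried = dict(intervals)
        known = {}
        rounds = 0
        while True:
            m = min(known.values(), default=float("inf"))
            # The minimum is certain once some revealed value is at most
            # the lower bound of every still-uncertain element.
            if all(m <= lo for lo, _ in unqueried.values()):
                return m, rounds
            # Naive round: reveal the k uncertain elements with the
            # smallest lower bounds.
            batch = sorted(unqueried, key=lambda i: unqueried[i][0])[:k]
            for i in batch:
                known[i] = reveal(i)
                del unqueried[i]
            rounds += 1

    # Example: five elements, two queries per round.
    iv = {1: (0, 4), 2: (1, 5), 3: (3, 9), 4: (6, 8), 5: (7, 10)}
    vals = {1: 3.5, 2: 2.0, 3: 8.0, 4: 7.0, 5: 9.0}
    print(min_rounds(iv, vals.__getitem__, k=2))   # -> (2.0, 1)

One subtlety the sketch glosses over: elements 3, 4, and 5 above never need a query because a revealed value (2.0) already undercuts their intervals; a round-competitive algorithm must decide which elements to batch together without knowing which reveals will make others redundant.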




Read also

For over a decade now we have been witnessing the success of \emph{massive parallel computation} (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the \emph{maximum matching} problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in $O(\log n)$ rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has $n^{1+\Omega(1)}$ memory, this problem can also be solved $2$-approximately in a constant number of rounds. These techniques, as well as the approaches developed in follow-up work, seem to get stuck in a fundamental way at roughly $O(\log n)$ rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that perplexing possibility. That is, we break the above $O(\log n)$ round complexity bound even in the case of \emph{slightly sublinear} memory per machine. In fact, our improvement here is \emph{almost exponential}: we are able to deliver a $(2+\epsilon)$-approximation to maximum matching, for any fixed constant $\epsilon>0$, in $O((\log\log n)^2)$ rounds.
Let $G$ be an $n$-vertex graph with $m$ edges. When asked a subset $S$ of vertices, a cut query on $G$ returns the number of edges of $G$ that have exactly one endpoint in $S$. We show that there is a bounded-error quantum algorithm that determines all connected components of $G$ after making $O(\log(n)^6)$ many cut queries. In contrast, it follows from results in communication complexity that any randomized algorithm, even just to decide whether the graph is connected or not, must make at least $\Omega(n/\log(n))$ many cut queries. We further show that with $O(\log(n)^8)$ many cut queries a quantum algorithm can with high probability output a spanning forest for $G$. En route to proving these results, we design quantum algorithms for learning a graph using cut queries. We show that a quantum algorithm can learn a graph with maximum degree $d$ after $O(d \log(n)^2)$ many cut queries, and can learn a general graph with $O(\sqrt{m} \log(n)^{3/2})$ many cut queries. These two upper bounds are tight up to the poly-logarithmic factors, and compare to $\Omega(dn)$ and $\Omega(m/\log(n))$ lower bounds on the number of cut queries needed by a randomized algorithm for the same problems, respectively. The key ingredients in our results are the Bernstein-Vazirani algorithm, approximate counting with OR queries, and learning sparse vectors from inner products as in compressed sensing.
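
A cut query itself is easy to simulate classically, which may help fix the model; the quantum speedups above come from querying superpositions of vertex subsets, which the following Python sketch does not capture:

    def cut_query(edges, S):
        """Number of edges with exactly one endpoint in S."""
        S = set(S)
        return sum(1 for u, v in edges if (u in S) != (v in S))

    # Example: a path 1-2-3 plus a separate edge 4-5.
    edges = [(1, 2), (2, 3), (4, 5)]
    print(cut_query(edges, {1}))       # 1: only (1,2) crosses
    print(cut_query(edges, {1, 2}))    # 1: only (2,3) crosses
    print(cut_query(edges, {4, 5}))    # 0: (4,5) lies inside S

Note that a query on a single vertex returns its degree, so $n$ such queries already recover all degrees; the cited results extract far more structure per query.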
We study the problem of sorting under incomplete information, when queries are used to resolve uncertainties. Each of $n$ data items has an unknown value, which is known to lie in a given interval. We can pay a query cost to learn the actual value, and we may allow an error threshold in the sorting. The goal is to find a nearly-sorted permutation by performing a minimum-cost set of queries. We show that an offline optimum query set can be found in polynomial time, and that both the oblivious and adaptive problems have simple competitive algorithms. The competitive ratio for the oblivious problem is $n$ for uniform query costs, and unbounded for arbitrary costs; for the adaptive problem, the ratio is 2. We present a unified adaptive strategy for uniform costs that yields the following improved results: (1) a 3/2-competitive randomized algorithm; (2) a 5/3-competitive deterministic algorithm if the dependency graph has no 2-components after some preprocessing, which has competitive ratio $3/2+\mathrm{O}(1/k)$ if the components obtained have size at least $k$; and (3) an exact algorithm for laminar families of intervals. The first two results have matching lower bounds, and we have a lower bound of 7/5 for large components. We also give a randomized adaptive algorithm with competitive ratio $1+\frac{4}{3\sqrt{3}}\approx 1.7698$ for arbitrary query costs, and we show that the 2-competitive deterministic adaptive algorithm can be generalized for queries returning intervals and for a more general vertex cover problem, by using the local ratio technique. Moreover, we prove that the advice complexity of the adaptive problem is $\lfloor n/2\rfloor$ if no error threshold is allowed, and $\lceil n/3\cdot\lg 3\rceil$ for the general case. Finally, we present some graph-theoretical results on co-threshold tolerance graphs, and we discuss uncertainty variants of some classical interval problems.
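
The 2-competitive adaptive algorithm mentioned above rests on a witness-pair argument; the following is a minimal Python sketch of that folklore idea (uniform costs, no error threshold; not the improved 3/2- or 5/3-competitive strategies):

    def adaptive_sort(intervals, reveal):
        """While two uncertainty intervals properly overlap, any feasible
        query set must query at least one of them, so querying both is
        2-competitive.  intervals: i -> (lo, hi); reveal(i) gives the
        exact value, collapsing the interval to a point."""
        iv = dict(intervals)
        queried = []
        def overlap(a, b):
            (al, ah), (bl, bh) = iv[a], iv[b]
            return al < bh and bl < ah        # proper overlap
        progress = True
        while progress:
            progress = False
            keys = list(iv)
            for x in keys:
                for y in keys:
                    if x < y and overlap(x, y):
                        for z in (x, y):
                            if iv[z][0] < iv[z][1]:   # still uncertain
                                queried.append(z)
                                v = reveal(z)
                                iv[z] = (v, v)
                                progress = True
        # With no proper overlaps left, sorting by lower endpoints is valid.
        return sorted(iv, key=lambda i: iv[i][0]), queried

A revealed value that falls strictly inside another interval re-creates an overlap, so that interval is queried in a later pass; this is exactly the adaptivity the oblivious model lacks.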
We consider the following stochastic matching problem on both weighted and unweighted graphs: A graph $G(V, E)$ along with a parameter $p \in (0, 1)$ is given in the input. Each edge of $G$ is realized independently with probability $p$. The goal is to select a degree-bounded (dependent only on $p$) subgraph $H$ of $G$ such that the expected size/weight of the maximum realized matching of $H$ is close to that of $G$. This model of stochastic matching has attracted significant attention over the recent years due to its various applications. The most fundamental open question is the best approximation factor achievable by such algorithms, which in the literature are referred to as non-adaptive algorithms. Prior work has identified breaking (near) half-approximation as a barrier for both weighted and unweighted graphs. Our main results are as follows: -- We analyze a simple and clean algorithm and show that for unweighted graphs, it finds an (almost) $4\sqrt{2}-5$ ($\approx 0.6568$) approximation by querying $O(\frac{\log(1/p)}{p})$ edges per vertex. This improves over the state-of-the-art $0.5001$-approximate algorithm of Assadi et al. [EC17]. -- We show that the same algorithm achieves a $0.501$ approximation for weighted graphs by querying $O(\frac{\log(1/p)}{p})$ edges per vertex. This is the first algorithm to break the $0.5$ approximation barrier for weighted graphs. It also improves the per-vertex queries of the state-of-the-art by Yamaguchi and Maehara [SODA18] and Behnezhad and Reyhani [EC18]. Our algorithms are fundamentally different from prior works, yet are very simple and natural. For the analysis, we introduce a number of procedures that construct heavy fractional matchings. We consider the new algorithms and our analytical tools to be the main contributions of this paper.
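
The abstract does not spell the algorithm out, but the per-vertex query bound suggests the generic non-adaptive pattern of taking a union of several matchings; here is a hedged Python sketch of that pattern (assuming networkx; an illustration of the degree bound, not the paper's exact procedure):

    import math
    import networkx as nx

    def nonadaptive_subgraph(G, p, c=1.0):
        """Union of R = O(log(1/p)/p) edge-disjoint matchings of G.  Each
        round adds at most one edge per vertex, so H has maximum degree
        at most R, a bound depending only on p (c is a tunable constant)."""
        R = max(1, round(c * math.log(1 / p) / p))
        H = nx.Graph()
        H.add_nodes_from(G)
        G = G.copy()
        for _ in range(R):
            M = nx.max_weight_matching(G, maxcardinality=True)
            if not M:
                break
            H.add_edges_from(M)
            G.remove_edges_from(M)
        return H

For p = 0.1 and c = 1 this keeps roughly 23 edges per vertex, matching the $O(\frac{\log(1/p)}{p})$ budget in the statements above.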
This paper presents universal algorithms for clustering problems, including the widely studied $k$-median, $k$-means, and $k$-center objectives. The input is a metric space containing all potential client locations. The algorithm must select $k$ cluster centers such that they are a good solution for any subset of clients that actually realize. Specifically, we aim for low regret, defined as the maximum over all subsets of the difference between the cost of the algorithm's solution and that of an optimal solution. A universal algorithm's solution $SOL$ for a clustering problem is said to be an $(\alpha, \beta)$-approximation if for all subsets of clients $C$, it satisfies $SOL(C) \leq \alpha \cdot OPT(C) + \beta \cdot MR$, where $OPT(C)$ is the cost of the optimal solution for clients $C$ and $MR$ is the minimum regret achievable by any solution. Our main results are universal algorithms for the standard clustering objectives of $k$-median, $k$-means, and $k$-center that achieve $(O(1), O(1))$-approximations. These results are obtained via a novel framework for universal algorithms using linear programming (LP) relaxations. These results generalize to other $\ell_p$-objectives and the setting where some subset of the clients is fixed. We also give hardness results showing that $(\alpha, \beta)$-approximation is NP-hard if $\alpha$ or $\beta$ is at most a certain constant, even for the widely studied special case of Euclidean metric spaces. This shows that in some sense, $(O(1), O(1))$-approximation is the strongest type of guarantee obtainable for universal clustering.
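
The $(\alpha, \beta)$ guarantee is easy to sanity-check on tiny instances; below is a brute-force Python sketch of the regret definition (hypothetical helper names, $k$-median objective, centers restricted to the input points for simplicity):

    from itertools import combinations

    def kmedian_cost(centers, clients, dist):
        # Each realized client pays its distance to the nearest center.
        return sum(min(dist[c][f] for f in centers) for c in clients)

    def regret(centers, points, k, dist):
        # Max over nonempty client subsets C of cost(centers, C) - OPT(C);
        # exponential brute force, only to make the definition concrete.
        worst = 0.0
        for r in range(1, len(points) + 1):
            for C in combinations(points, r):
                opt = min(kmedian_cost(F, C, dist)
                          for F in combinations(points, k))
                worst = max(worst, kmedian_cost(centers, C, dist) - opt)
        return worst

Minimizing regret(centers, ...) over all center sets gives $MR$, and a solution $SOL$ is an $(\alpha, \beta)$-approximation precisely when $SOL(C) \leq \alpha \cdot OPT(C) + \beta \cdot MR$ holds for every subset $C$ enumerated above.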