
Fragile Complexity of Adaptive Algorithms

Posted by Pilar Cano
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The fragile complexity of a comparison-based algorithm is $f(n)$ if each input element participates in $O(f(n))$ comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e., algorithms with a fragile complexity parameterized by a quantity other than the input size $n$. We show that searching for the predecessor in a sorted array has fragile complexity $\Theta(\log k)$, where $k$ is the rank of the query element, in both a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: Selecting the $k$-th smallest element has expected fragile complexity $O(\log \log k)$ for the element selected. Deterministically finding the minimum element has fragile complexity $\Theta(\log(\mathit{Inv}))$ and $\Theta(\log(\mathit{Runs}))$, where $\mathit{Inv}$ is the number of inversions in the input and $\mathit{Runs}$ is the number of increasing runs.
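The $\Theta(\log k)$ upper bound for the query element is the bound achieved by classic exponential (galloping) search, which probes positions $1, 2, 4, 8, \ldots$ until it overshoots and then binary-searches the bracketed range. The sketch below illustrates that bound only; it is not the paper's construction, which additionally controls how many comparisons are charged to the array elements, and the function name and interface are illustrative.

```python
from bisect import bisect_right

def predecessor(sorted_arr, q):
    """Largest element <= q via exponential (galloping) search.

    The query q participates in O(log k) comparisons, where k is the rank
    of q in sorted_arr -- matching the Theta(log k) fragile complexity of
    the query element. (Illustrative sketch only.)
    """
    n = len(sorted_arr)
    if n == 0 or q < sorted_arr[0]:
        return None  # no predecessor exists
    hi = 1
    # Double the probe index until the probed element exceeds q.
    while hi < n and sorted_arr[hi] <= q:
        hi *= 2
    lo, hi = hi // 2, min(hi, n)
    # Binary search within the bracketed range [lo, hi).
    pos = lo + bisect_right(sorted_arr[lo:hi], q)
    return sorted_arr[pos - 1]
```

For example, predecessor([1, 3, 5, 7, 11], 6) returns 5 after the query has been compared against only a logarithmic (in its rank) number of array positions.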




Read also

We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that with randomization, even a lower fragile complexity is possible.
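To make the definition concrete, consider finding the minimum: a linear scan can force the running minimum into up to $n-1$ comparisons, while a balanced knockout tournament involves every element in at most $\lceil \log_2 n \rceil$ comparisons. The following sketch (illustrative only; the helper names are not from the paper) counts per-element comparisons in such a tournament and reports the maximum, i.e., the fragile complexity of that particular run.

```python
from collections import Counter

def tournament_min(xs):
    """Knockout-tournament minimum that also reports the maximum number of
    comparisons any single element took part in (<= ceil(log2 n))."""
    assert xs, "non-empty input assumed"
    counts = Counter()
    round_ = list(enumerate(xs))  # (original index, value) pairs
    while len(round_) > 1:
        nxt = []
        # Pair up survivors; an unpaired element advances without a comparison.
        for i in range(0, len(round_) - 1, 2):
            (ia, a), (ib, b) = round_[i], round_[i + 1]
            counts[ia] += 1
            counts[ib] += 1
            nxt.append((ia, a) if a <= b else (ib, b))
        if len(round_) % 2 == 1:
            nxt.append(round_[-1])
        round_ = nxt
    _, vmin = round_[0]
    return vmin, max(counts.values(), default=0)
```

On 1024 elements the reported maximum is 10, whereas a naive scan can charge up to 1023 comparisons to the minimum itself.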
The priority model was introduced by Borodin, Rackoff, and Nielsen (2003) to capture greedy-like algorithms. Motivated by the success of advice complexity in the area of online algorithms, Borodin et al. (2020) extended the fixed priority model to include an advice tape oracle. They also developed a reduction-based framework for proving lower bounds on the amount of advice required to achieve certain approximation ratios in this rather powerful model. In order to capture most of the algorithms that are considered greedy-like, the even stronger model of adaptive priority algorithms is needed. We extend the adaptive priority model to include an advice tape oracle. We show how to modify the reduction-based framework from the fixed priority case, making it applicable to the more powerful adaptive priority algorithms. The framework provides a template, where one can obtain a lower bound relatively easily by exhibiting gadget patterns fulfilling given criteria. In the process, we simplify the proof that the framework works, and we strengthen all the earlier lower bounds by a factor of two. As a motivating example, we present a purely combinatorial adaptive priority algorithm with advice for Minimum Vertex Cover on triangle-free graphs of maximum degree three. Our algorithm achieves optimality and uses at most $7n/22$ bits of advice. Known results imply that no adaptive priority algorithm without advice can achieve optimality, and we prove that $7n/22$ is fewer bits than an online algorithm with advice needs to reach optimality. Furthermore, we show connections between exact algorithms and priority algorithms with advice. Priority algorithms with advice that achieve optimality can be used to define corresponding exact algorithms, priority exact algorithms. The lower bound templates for advice-based adaptive algorithms imply lower bounds on exact algorithms designed in this way.
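For readers unfamiliar with the model: in each step an adaptive priority algorithm chooses a fresh ordering (priority function) over the possible input items based on everything seen so far, receives the highest-priority item actually present in the input, and makes an irrevocable decision about it; with advice, the decision may also consult bits written by an oracle that sees the whole input. A schematic sketch, with purely illustrative interface names, assuming the input is given as a finite list of items:

```python
def adaptive_priority_with_advice(items, priority_for, decide, advice):
    """Schematic of an adaptive priority algorithm with an advice tape.

    priority_for(seen) returns a priority function that may depend on the
    history; decide(item, seen, advice) makes an irrevocable decision and
    may read from the advice tape. Both callbacks are placeholders.
    """
    seen, decisions = [], []
    remaining = list(items)
    while remaining:
        priority = priority_for(seen)                 # adaptive: re-chosen every round
        item = max(remaining, key=priority)           # highest-priority unseen item arrives
        remaining.remove(item)
        decisions.append(decide(item, seen, advice))  # irrevocable decision
        seen.append(item)
    return decisions
```

The fixed priority model is the special case in which priority_for ignores its argument, i.e., the ordering is committed to once, before any item is seen.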
The priority model of greedy-like algorithms was introduced by Borodin, Nielsen, and Rackoff in 2002. We augment this model by allowing priority algorithms to have access to advice, i.e., side information precomputed by an all-powerful oracle. Obtaining lower bounds in the priority model without advice can be challenging and may involve intricate adversary arguments. Since the priority model with advice is even more powerful, obtaining lower bounds presents additional difficulties. We sidestep these difficulties by developing a general framework of reductions which makes lower bound proofs relatively straightforward and routine. We start by introducing the Pair Matching problem, for which we are able to prove strong lower bounds in the priority model with advice. We develop a template for constructing a reduction from Pair Matching to other problems in the priority model with advice -- this part is technically challenging since the reduction needs to define a valid priority function for Pair Matching while respecting the priority function for the other problem. Finally, we apply the template to obtain lower bounds for a number of standard discrete optimization problems.
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first constant-factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2 n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms' applicability on real-world datasets.
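Adaptive complexity counts sequential rounds of value queries: queries issued within a round may not depend on one another, so they can be evaluated in parallel. The toy contrast below is not the paper's algorithm; it only illustrates what is being counted, with f standing for an arbitrary (oracle) set function and k assumed to be at most the ground-set size.

```python
from concurrent.futures import ThreadPoolExecutor

def greedy_sequential(f, ground_set, k):
    """Classic greedy: k adaptive rounds, since every round's queries depend
    on the element chosen in the previous round."""
    S = []
    for _ in range(k):
        gains = {e: f(S + [e]) - f(S) for e in ground_set if e not in S}
        S.append(max(gains, key=gains.get))
    return S

def one_adaptive_round(f, S, candidates):
    """All marginal-gain queries of a single round: they are independent of
    each other, so issuing them in parallel costs one adaptive round."""
    with ThreadPoolExecutor() as pool:
        gains = list(pool.map(lambda e: f(S + [e]) - f(S), candidates))
    return dict(zip(candidates, gains))
```

Low-adaptivity algorithms such as the one in the abstract replace the k dependent rounds of the greedy loop with $O(\log n)$ (or $O(\log^2 n)$) rounds of such batched queries.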
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x-\hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x-x'\|_q,$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS '11) gave an algorithm for $p=q=2$ for $C = 1+\epsilon$ with $O((k/\epsilon) \log\log (n/k))$ measurements and $O(\log^*(k) \log\log n)$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log (n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log n)$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log (n \log (n/k)))$ measurements and an optimal $O(\log\log n)$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log (n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \cdot \log^3 (\frac{1}{\epsilon}))$ measurements.
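The defining feature above is that $A_i$ may depend on earlier outcomes. Below is a minimal sketch of that measurement loop, with one measurement per round for simplicity; the schemes in the abstract batch many measurements per round, and both callbacks are illustrative placeholders, not the paper's construction.

```python
import numpy as np

def adaptive_sensing(x, num_rounds, choose_measurement, estimate):
    """Schematic adaptive compressed sensing loop.

    choose_measurement(outcomes) returns the next measurement vector, which
    may depend on all previous outcomes A_1 x, ..., A_{i-1} x;
    estimate(measurements, outcomes) produces the final sparse estimate.
    """
    measurements, outcomes = [], []
    for _ in range(num_rounds):
        a_i = np.asarray(choose_measurement(outcomes))  # adaptively chosen row A_i
        measurements.append(a_i)
        outcomes.append(float(a_i @ x))                 # linear measurement A_i x
    return estimate(measurements, outcomes)
```

A non-adaptive scheme is the special case where choose_measurement ignores its argument; the abstract's point is that adaptivity allows the $\log(n/k)$ factor in the measurement count to be reduced to $\log\log(n/k)$.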