The priority model was introduced by Borodin, Nielsen, and Rackoff (2003) to capture greedy-like algorithms. Motivated by the success of advice complexity in the area of online algorithms, Borodin et al. (2020) extended the fixed priority model to include an advice tape oracle. They also developed a reduction-based framework for proving lower bounds on the amount of advice required to achieve certain approximation ratios in this rather powerful model. In order to capture most of the algorithms that are considered greedy-like, the even stronger model of adaptive priority algorithms is needed. We extend the adaptive priority model to include an advice tape oracle. We show how to modify the reduction-based framework from the fixed priority case, making it applicable to the more powerful adaptive priority algorithms. The framework provides a template, where one can obtain a lower bound relatively easily by exhibiting gadget patterns fulfilling given criteria. In the process, we simplify the proof that the framework works, and we strengthen all the earlier lower bounds by a factor of two. As a motivating example, we present a purely combinatorial adaptive priority algorithm with advice for Minimum Vertex Cover on triangle-free graphs of maximum degree three. Our algorithm achieves optimality and uses at most $7n/22$ bits of advice. Known results imply that no adaptive priority algorithm can achieve optimality without advice, and we prove that $7n/22$ is fewer bits than any online algorithm with advice needs to reach optimality. Furthermore, we show connections between exact algorithms and priority algorithms with advice. Priority algorithms with advice that achieve optimality can be used to define corresponding exact algorithms, which we call priority exact algorithms. The lower bound templates for adaptive priority algorithms with advice imply lower bounds on exact algorithms designed in this way.
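As a purely illustrative picture of the adaptive priority model with advice (not the $7n/22$-bit algorithm from the paper), the sketch below runs the model's round structure on Vertex Cover: in every round the algorithm reorders the undecided vertices, the highest-priority one is revealed, and an irrevocable accept/reject decision is made, consulting one advice bit only when the decision is not forced. The priority rule, the advice interface, and the name `adaptive_priority_vertex_cover` are assumptions made for the example.

```python
from typing import Dict, Iterator, Set

def adaptive_priority_vertex_cover(adj: Dict[int, Set[int]],
                                   advice: Iterator[int]) -> Set[int]:
    """Toy adaptive priority algorithm with advice for Vertex Cover.

    Round structure only: reorder the undecided vertices (adaptively, i.e.,
    the order may depend on everything revealed so far), reveal the top one,
    and decide irrevocably.  A vertex is forced into the cover when an
    already rejected neighbour has been seen; otherwise one advice bit
    decides."""
    in_cover: Set[int] = set()
    rejected: Set[int] = set()
    while len(in_cover) + len(rejected) < len(adj):
        undecided = [v for v in adj if v not in in_cover and v not in rejected]
        # Adaptive priority: prefer vertices that must cover edges to rejected
        # neighbours, then vertices of high degree (an arbitrary, hypothetical rule).
        undecided.sort(key=lambda v: (-len(adj[v] & rejected), -len(adj[v])))
        v = undecided[0]                 # the model reveals this data item
        if adj[v] & rejected:            # some incident edge can only be covered by v
            in_cover.add(v)
        elif next(advice):               # free choice: one advice bit settles it
            in_cover.add(v)
        else:
            rejected.add(v)
    return in_cover
```

On the path $1-2-3-4$ (adjacency `{1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}`), the advice bits `1, 1, 0, 0` steer this loop to the optimal cover `{2, 3}`. For this toy rule, an all-powerful oracle can always reach an optimal cover, but may need one bit per free decision; the point of the paper's algorithm is to get by with far fewer bits.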
The priority model of greedy-like algorithms was introduced by Borodin, Nielsen, and Rackoff in 2002. We augment this model by allowing priority algorithms to have access to advice, i.e., side information precomputed by an all-powerful oracle. Obtaining lower bounds in the priority model without advice can be challenging and may involve intricate adversary arguments. Since the priority model with advice is even more powerful, obtaining lower bounds presents additional difficulties. We sidestep these difficulties by developing a general framework of reductions which makes lower bound proofs relatively straightforward and routine. We start by introducing the Pair Matching problem, for which we are able to prove strong lower bounds in the priority model with advice. We develop a template for constructing a reduction from Pair Matching to other problems in the priority model with advice -- this part is technically challenging since the reduction needs to define a valid priority function for Pair Matching while respecting the priority function for the other problem. Finally, we apply the template to obtain lower bounds for a number of standard discrete optimization problems.
The fragile complexity of a comparison-based algorithm is $f(n)$ if each input element participates in $O(f(n))$ comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e., algorithms with a fragile complexity parameterized by a quantity other than the input size $n$. We show that searching for the predecessor in a sorted array has fragile complexity $\Theta(\log k)$, where $k$ is the rank of the query element, both in a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: Selecting the $k$-th smallest element has expected fragile complexity $O(\log \log k)$ for the element selected. Deterministically finding the minimum element has fragile complexity $\Theta(\log(Inv))$ and $\Theta(\log(Runs))$, where $Inv$ is the number of inversions in the input and $Runs$ is the number of runs.
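For intuition about the $\Theta(\log k)$ predecessor bound, the sketch below shows the classic exponential (galloping) search, in which the query element takes part in only $O(\log k)$ comparisons, $k$ being its rank. It is background only and not the paper's construction: the paper additionally controls how many comparisons each array element participates in, which this sketch ignores.

```python
from bisect import bisect_right
from typing import List, Optional

def predecessor_doubling(a: List[int], q: int) -> Optional[int]:
    """Largest element of the sorted list a that is <= q, or None if q < a[0].

    The query q is involved in O(log k) comparisons, where k is its rank:
    the doubling phase makes about log2(k) probes, and the binary search in
    the final interval makes about as many again."""
    if not a or q < a[0]:
        return None
    hi = 1
    while hi < len(a) and a[hi] <= q:   # double until we overshoot
        hi *= 2
    lo = hi // 2                        # a[lo] <= q is guaranteed here
    hi = min(hi, len(a))
    idx = bisect_right(a, q, lo, hi) - 1
    return a[idx]
```

For example, `predecessor_doubling([1, 3, 5, 7, 9], 6)` returns `5`.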
In the Priority Steiner Tree (PST) problem, we are given an undirected graph $G=(V,E)$ with a source $s \in V$ and terminals $T \subseteq V \setminus \{s\}$, where each terminal $v \in T$ requires a nonnegative priority $P(v)$. The goal is to compute a minimum weight Steiner tree containing edges of varying rates such that the path from $s$ to each terminal $v$ consists of edges of rate greater than or equal to $P(v)$. The PST problem with $k$ priorities admits a $\min\{2 \ln |T| + 2, k\rho\}$-approximation [Charikar et al., 2004], and is hard to approximate with ratio $c \log \log n$ for some constant $c$ [Chuzhoy et al., 2008]. In this paper, we first strengthen the analysis provided by [Charikar et al., 2004] for the $(2 \ln |T| + 2)$-approximation to show an approximation ratio of $\lceil \log_2 |T| \rceil + 1 \le 1.443 \ln |T| + 2$, then provide a very simple, parallelizable algorithm which achieves the same approximation ratio. We then consider a more difficult node-weighted version of the PST problem, and provide a $(2 \ln |T| + 2)$-approximation using extensions of the spider decomposition of [Klein & Ravi, 1995]. This is the first result for the PST problem in node-weighted graphs. Moreover, the approximation ratios for all of the above algorithms are tight.
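The stated constant is just a change of base: since $1/\ln 2 = 1.4426\ldots < 1.443$,
\[
\lceil \log_2 |T| \rceil + 1 \;\le\; \log_2 |T| + 2 \;=\; \frac{\ln |T|}{\ln 2} + 2 \;\le\; 1.443 \ln |T| + 2 .
\]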
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2 n)$. Besides the above improvement in adaptivity, this is also the first \emph{combinatorial} approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms' applicability on real-world datasets.
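For context (and not as the paper's algorithm), the sketch below is the plain density greedy baseline for the knapsack-constrained problem; it carries no approximation guarantee for non-monotone objectives, but it makes the two resources concrete: every call to `f` is one value query, and each of the up to $n$ iterations must wait for the previous one, so its adaptive complexity is linear, which is exactly what an $O(\log n)$-adaptive algorithm avoids. The function name and the cost/budget interface are assumptions made for the illustration.

```python
from typing import Callable, List, Set

def density_greedy_knapsack(f: Callable[[Set[int]], float],
                            cost: List[float], budget: float) -> Set[int]:
    """Baseline greedy by marginal-gain-per-cost under a knapsack constraint.

    Assumes strictly positive costs.  Every evaluation of f is one value
    query; the outer loop is inherently sequential (Omega(n) adaptive rounds
    in the worst case), since each pick depends on the previous picks."""
    n = len(cost)
    chosen: Set[int] = set()
    spent = 0.0
    base = f(chosen)
    while True:
        best, best_density = None, 0.0
        for e in range(n):                       # one batch of value queries
            if e in chosen or spent + cost[e] > budget:
                continue
            gain = f(chosen | {e}) - base        # marginal gain of adding e
            if gain / cost[e] > best_density:
                best, best_density = e, gain / cost[e]
        if best is None:
            break
        chosen.add(best)                         # the next round depends on this choice
        spent += cost[best]
        base = f(chosen)
    return chosen
```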
We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting, and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work, and that, with randomization, an even lower fragile complexity is possible.
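To make the measure concrete, the following sketch (not an algorithm from the paper) finds the minimum by a balanced tournament while counting, for each position, the comparisons it participates in; every count is at most $\lceil \log_2 n \rceil$, matching the depth of the corresponding comparator tree.

```python
from collections import Counter
from typing import Dict, List, Tuple

def tournament_min(a: List[int]) -> Tuple[int, Dict[int, int]]:
    """Minimum of a via a balanced tournament, plus per-position comparison counts.

    Each position takes part in at most one comparison per round and there
    are ceil(log2 n) rounds, so the fragile complexity of this method is
    O(log n)."""
    if not a:
        raise ValueError("empty input")
    count: Counter = Counter()
    alive = list(range(len(a)))            # indices still in the tournament
    while len(alive) > 1:
        nxt = []
        for i in range(0, len(alive) - 1, 2):
            u, v = alive[i], alive[i + 1]
            count[u] += 1                  # both participants are charged one comparison
            count[v] += 1
            nxt.append(u if a[u] <= a[v] else v)
        if len(alive) % 2 == 1:            # the odd index out gets a bye this round
            nxt.append(alive[-1])
        alive = nxt
    return a[alive[0]], dict(count)
```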