
Incremental Edge Orientation in Forests

Added by William Kuszmaul
Publication date: 2021
Research language: English





For any forest $G = (V, E)$ it is possible to orient the edges $E$ so that no vertex in $V$ has out-degree greater than $1$. This paper considers the incremental edge-orientation problem, in which the edges $E$ arrive over time and the algorithm must maintain a low-out-degree edge orientation at all times. We give an algorithm that maintains a maximum out-degree of $3$ while flipping at most $O(\log \log n)$ edge orientations per edge insertion, with high probability in $n$. The algorithm requires worst-case time $O(\log n \log \log n)$ per insertion and amortized time $O(1)$. The previous state of the art required up to $O(\log n / \log \log n)$ edge flips per insertion. We then apply our edge-orientation results to the problem of dynamic Cuckoo hashing. The problem of designing simple families $\mathcal{H}$ of hash functions that are compatible with Cuckoo hashing has received extensive attention. These families $\mathcal{H}$ are known to satisfy \emph{static guarantees}, but do not typically come with \emph{dynamic guarantees} for the running time of inserts and deletes. We show how to transform static guarantees (for $1$-associativity) into near-state-of-the-art dynamic guarantees (for $O(1)$-associativity) in a black-box fashion. Rather than relying on the family $\mathcal{H}$ to supply randomness, as in past work, we instead rely on randomness within our table-maintenance algorithm.
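As an illustration of the problem statement only (not of the paper's algorithm), the Python sketch below maintains an orientation of a growing forest by keeping each tree rooted and orienting every edge toward its parent. The out-degree bound of $1$ is immediate, but a single insertion may flip $\Theta(n)$ edges, which is exactly the cost the paper's algorithm avoids; the class name and interface are invented for this sketch.

```python
# Naive baseline for incremental edge orientation in a forest (illustrative
# only): keep each tree rooted and orient every edge from child to parent,
# so out-degree is at most 1.  Inserting an edge (u, v) re-roots u's tree at
# u, which can flip Theta(n) edges per insertion, versus the O(log log n)
# flips per insertion (w.h.p.) achieved in the paper.

class NaiveForestOrientation:
    def __init__(self):
        self.parent = {}   # v -> head of the edge oriented out of v (None for roots)
        self.flips = 0     # total number of edge re-orientations performed

    def _reroot(self, v):
        """Reverse every edge on the path from v to its root; v becomes the root."""
        prev, cur = None, v
        while cur is not None:
            nxt = self.parent[cur]
            self.parent[cur] = prev
            if prev is not None:
                self.flips += 1
            prev, cur = cur, nxt

    def insert(self, u, v):
        """Insert forest edge (u, v); u and v are assumed to lie in different trees."""
        self.parent.setdefault(u, None)
        self.parent.setdefault(v, None)
        self._reroot(u)          # u is now the root of its tree
        self.parent[u] = v       # orient the new edge u -> v

    def max_out_degree(self):
        return 1 if any(p is not None for p in self.parent.values()) else 0


# Example: join two length-4 paths at their "deep" ends.
forest = NaiveForestOrientation()
for i in range(4):
    forest.insert(i, i + 1)       # path oriented 0 -> 1 -> 2 -> 3 -> 4
for i in range(5, 9):
    forest.insert(i, i + 1)       # path oriented 5 -> 6 -> 7 -> 8 -> 9
forest.insert(0, 5)               # forces 4 edge flips in this baseline
print(forest.max_out_degree(), forest.flips)   # -> 1 4
```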




Related Research

In this paper we show that many sequential randomized incremental algorithms are in fact parallel. We consider algorithms for several problems, including Delaunay triangulation, linear programming, closest pair, smallest enclosing disk, least-element lists, and strongly connected components. We analyze the dependences between iterations in an algorithm, and show either that the dependence structure is shallow with high probability, or that by violating some dependences the structure becomes shallow without significantly increasing the work. We identify three types of algorithms based on their dependences and present a framework for analyzing each type. Using the framework gives work-efficient polylogarithmic-depth parallel algorithms for most of the problems that we study. This paper gives the first incremental Delaunay triangulation algorithm with optimal work and polylogarithmic depth, resolving a problem that had been open for over 30 years. This result is important since most implementations of parallel Delaunay triangulation use the incremental approach. Our results also improve bounds on strongly connected components and least-element lists, and significantly simplify parallel algorithms for several problems.
Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. In this paper, we investigate the problem of private machine learning, where, as is common in practice, the data is not given all at once but rather arrives incrementally over time. We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the history observed, under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of $\approx\sqrt{d}$, where $d$ is the dimensionality of the data. While the results of [Bassily et al. 2014] show that this bound is tight in the worst case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems.
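The baseline transformation described above is simple enough to sketch. In the snippet below (illustrative only), `private_batch_erm` is a hypothetical black-box batch mechanism assumed to be differentially private on the batch it receives; the incremental mechanism re-invokes it only at regular intervals and serves the cached model in between, which is where the trade-off between privacy budget and staleness arises.

```python
# Sketch of the generic baseline: private incremental ERM obtained by
# re-running a private batch ERM mechanism at regular intervals.
# `private_batch_erm` is a hypothetical black box assumed to be
# differentially private on the batch it is given.

def private_incremental_erm(stream, private_batch_erm, period=100):
    history, model = [], None
    for t, example in enumerate(stream, start=1):
        history.append(example)
        if model is None or t % period == 0:
            # Privacy budget is spent only at these re-training points;
            # between them the served model lags the history by < `period` steps.
            model = private_batch_erm(list(history))
        yield model   # current empirical risk minimizer after t arrivals
```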
We present online algorithms for directed spanners and Steiner forests. These problems fall under the unifying framework of online covering linear programming formulations, developed by Buchbinder and Naor (MOR, 34, 2009), based on primal-dual techniques. Our results include the following: For the pairwise spanner problem, in which the pairs of vertices to be spanned arrive online, we present an efficient randomized $\tilde{O}(n^{4/5})$-competitive algorithm for graphs with general lengths, where $n$ is the number of vertices. With uniform lengths, we give an efficient randomized $\tilde{O}(n^{2/3+\epsilon})$-competitive algorithm, and an efficient deterministic $\tilde{O}(k^{1/2+\epsilon})$-competitive algorithm, where $k$ is the number of terminal pairs. These are the first online algorithms for directed spanners. In the offline setting, the current best approximation ratio with uniform lengths is $\tilde{O}(n^{3/5+\epsilon})$, due to Chlamtac, Dinitz, Kortsarz, and Laekhanukit (TALG 2020). For the directed Steiner forest problem with uniform costs, in which the pairs of vertices to be connected arrive online, we present an efficient randomized $\tilde{O}(n^{2/3+\epsilon})$-competitive algorithm. The state-of-the-art online algorithm for general costs is due to Chakrabarty, Ene, Krishnaswamy, and Panigrahi (SICOMP 2018) and is $\tilde{O}(k^{1/2+\epsilon})$-competitive. In the offline version, the current best approximation ratio with uniform costs is $\tilde{O}(n^{26/45+\epsilon})$, due to Abboud and Bodwin (SODA 2018). A small modification of the online covering framework of Buchbinder and Naor yields a polynomial-time primal-dual approach with separation oracles, which a priori might make exponentially many oracle calls. We convert the online spanner problem and the online Steiner forest problem into online covering problems and round them in a problem-specific fashion.
Roy Schwartz, Ran Yeheskel (2021)
Motivated by the classic Generalized Assignment Problem, we consider the Graph Balancing problem in the presence of orientation costs: given an undirected multigraph $G = (V, E)$ equipped with edge weights and orientation costs on the edges, the goal is to find an orientation of the edges that minimizes both the maximum weight of edges oriented toward any vertex (the makespan) and the total orientation cost. We present a general framework for minimizing makespan in the presence of costs that allows us to: (1) achieve bicriteria approximations for the Graph Balancing problem that capture known previous results (Shmoys-Tardos [Math. Program. 93], Ebenlendr-Krcal-Sgall [Algorithmica 14], and Wang-Sitters [Inf. Process. Lett. 16]); and (2) achieve bicriteria approximations for extensions of the Graph Balancing problem that admit hyperedges and unrelated weights. Our framework is based on a remarkably simple rounding of a strengthened linear relaxation. We complement the above by presenting bicriteria lower bounds with respect to the linear programming relaxations we use, showing that a loss in the total orientation cost is required if one aims for an approximation better than $2$ in the makespan.
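To make the bicriteria objective concrete, the short sketch below (problem setup only, not the rounding framework from the paper; all names are illustrative) evaluates a given orientation of a weighted multigraph with per-direction orientation costs, returning its makespan and total orientation cost.

```python
# Evaluate an orientation for Graph Balancing with orientation costs
# (illustrative helper).  Each edge is (u, v, weight, cost_toward_u,
# cost_toward_v); orientation[e] names the vertex the e-th edge points to.

def evaluate_orientation(edges, orientation):
    load, total_cost = {}, 0.0
    for e, (u, v, weight, cost_u, cost_v) in enumerate(edges):
        head = orientation[e]
        load[head] = load.get(head, 0.0) + weight
        total_cost += cost_u if head == u else cost_v
    makespan = max(load.values(), default=0.0)  # max weight oriented toward any vertex
    return makespan, total_cost


# Example: a two-edge multigraph on vertices {a, b}.
edges = [("a", "b", 3.0, 1.0, 2.0), ("a", "b", 4.0, 5.0, 0.5)]
print(evaluate_orientation(edges, ["a", "b"]))   # -> (4.0, 1.5)
```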
We introduce and study a discrete multi-period extension of the classical knapsack problem, dubbed generalized incremental knapsack. In this setting, we are given a set of $n$ items, each associated with a non-negative weight, and $T$ time periods with non-decreasing capacities $W_1 \leq \dots \leq W_T$. When item $i$ is inserted at time $t$, we gain a profit of $p_{it}$; however, this item remains in the knapsack for all subsequent periods. The goal is to decide if and when to insert each item, subject to the time-dependent capacity constraints, with the objective of maximizing our total profit. Interestingly, this setting subsumes as special cases a number of recently-studied incremental knapsack problems, all known to be strongly NP-hard. Our first contribution comes in the form of a polynomial-time $(\frac{1}{2}-\epsilon)$-approximation for the generalized incremental knapsack problem. This result is based on a reformulation as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Combined with further enumeration-based self-reinforcing ideas and newly-revealed structural properties of nearly-optimal solutions, we turn our basic algorithm into a quasi-polynomial time approximation scheme (QPTAS). Hence, under widely believed complexity assumptions, this finding rules out the possibility that generalized incremental knapsack is APX-hard.
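As a concrete companion to the problem definition above (not the paper's approximation algorithm or QPTAS), the sketch below evaluates an insertion schedule against the time-dependent capacities and, for tiny instances, finds the optimum by brute force; all function and variable names are illustrative.

```python
from itertools import product

# Problem-definition sketch for generalized incremental knapsack.
# insert_time[i] is the 0-indexed period at which item i is inserted, or None
# if it is never inserted; once inserted, an item occupies capacity in every
# later period, and its profit p_{it} is collected once, at insertion time.

def evaluate(weights, profits, capacities, insert_time):
    T = len(capacities)                      # capacities[t] = W_{t+1}, non-decreasing
    for t in range(T):
        load = sum(weights[i] for i in range(len(weights))
                   if insert_time[i] is not None and insert_time[i] <= t)
        if load > capacities[t]:
            return None                      # schedule violates the capacity at period t
    return sum(profits[i][insert_time[i]]
               for i in range(len(weights)) if insert_time[i] is not None)

def brute_force(weights, profits, capacities):
    """Exact optimum for tiny instances; exponential in the number of items."""
    n, T = len(weights), len(capacities)
    best = 0
    for choice in product([None] + list(range(T)), repeat=n):
        value = evaluate(weights, profits, capacities, list(choice))
        if value is not None:
            best = max(best, value)
    return best

# Example: two items, two periods with W_1 = 2, W_2 = 5.
print(brute_force(weights=[2, 3], profits=[[5, 1], [4, 6]], capacities=[2, 5]))  # -> 11
```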