A $k$-spanner of a graph $G$ is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of $k$, and a $k$-emulator is similar but not required to be a subgraph of $G$. A classic theorem by Thorup and Zwick [JACM 05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners. We complement our emulator upper bound with a lower bound construction that is essentially tight (within $\log n$ factors of the upper bound) when the stretch is $2k-1$ and $k$ is either a fixed odd integer or $2$. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.
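To make the two definitions above concrete, here is a minimal sketch (using networkx; the function names are our own choices, not from the paper) that checks the stretch condition both objects must satisfy, and the subgraph condition that distinguishes a spanner from an emulator.

```python
import networkx as nx

def stretch_ok(G, H, k):
    """Check dist_G(u, v) <= dist_H(u, v) <= k * dist_G(u, v) for all pairs."""
    dG = dict(nx.all_pairs_dijkstra_path_length(G))
    dH = dict(nx.all_pairs_dijkstra_path_length(H))
    return all(
        d <= dH.get(u, {}).get(v, float("inf")) <= k * d
        for u, row in dG.items() for v, d in row.items()
    )

def is_k_spanner(G, H, k):
    # A spanner must additionally be a subgraph of G.
    return all(G.has_edge(u, v) for u, v in H.edges) and stretch_ok(G, H, k)

def is_k_emulator(G, H, k):
    # An emulator may use arbitrary weighted edges on V(G); only stretch matters.
    return stretch_ok(G, H, k)
```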
A recent line of research investigates how algorithms can be augmented with machine-learned predictions to overcome worst case lower bounds. This area has revealed interesting algorithmic insights into problems, with particular success in the design of competitive online algorithms. However, the question of improving algorithm running times with predictions has remained largely unexplored. We take a first step in this direction by combining the idea of machine-learned predictions with the idea of warm-starting primal-dual algorithms. We consider one of the most important primitives in combinatorial optimization: weighted bipartite matching and its generalization to $b$-matching. We identify three key challenges when using learned dual variables in a primal-dual algorithm. First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions. Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution. Finally, such predictions are useful only if they can be learned, so we show that the problem of learning duals for matching has low sample complexity. We validate our theoretical findings through experiments on both real and synthetic data. As a result, we give a rigorous, practical, and empirically effective method to compute bipartite matchings.
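As an illustration of the first challenge (this is not the paper's actual repair procedure, just a naive baseline): for maximum-weight bipartite matching, the dual constraints require $y_u + y_v \ge w(u,v)$ on every edge, so one simple way to restore feasibility is to lift each violated left-side dual just enough.

```python
def repair_duals(weights, y_left, y_right):
    """Map possibly-infeasible predicted duals to feasible ones by lifting
    left-side duals. weights: dict (u, v) -> w(u, v) with u on the left.
    Feasibility means y_left[u] + y_right[v] >= w(u, v) on every edge."""
    y_left = dict(y_left)  # don't mutate the predictions in place
    for (u, v), w in weights.items():
        if y_left[u] + y_right[v] < w:       # constraint violated
            y_left[u] = w - y_right[v]       # minimal lift for this edge
    return y_left, dict(y_right)
```

Since each left dual only ever increases, previously satisfied constraints stay satisfied and one pass suffices. The paper's contribution is stronger: it finds feasible duals provably close to the predictions, which this naive lift does not guarantee.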
Graph cut problems form a fundamental problem type in combinatorial optimization, and are a central object of study in both theory and practice. In addition, the study of fairness in Algorithmic Design and Machine Learning has recently received significant attention, with many different notions proposed and analyzed in a variety of contexts. In this paper we initiate the study of fairness for graph cut problems by giving the first fairness definitions for them, and subsequently we demonstrate appropriate algorithmic techniques that yield a rigorous theoretical analysis. Specifically, we incorporate two different definitions of fairness, namely demographic and probabilistic individual fairness, in a particular cut problem modeling disaster containment scenarios. Our results include a variety of approximation algorithms with provable theoretical guarantees.
Recent work has established that, for every positive integer $k$, every $n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight. However, the case of edge faults is not as well understood: the best known lower bound for general $k$ is $\Omega(f^{\frac{1}{2} - \frac{1}{2k}} n^{1+1/k} + fn)$. Our main result is to nearly close this gap with an improved upper bound, thus separating the cases of edge and vertex faults. For odd $k$, our new upper bound is $O_k(f^{\frac{1}{2} - \frac{1}{2k}} n^{1+1/k} + fn)$, which is tight up to hidden $\mathrm{poly}(k)$ factors. For even $k$, our new upper bound is $O_k(f^{1/2} n^{1+1/k} + fn)$, which leaves a gap of $\mathrm{poly}(k) f^{1/(2k)}$. Our proof is an analysis of the fault-tolerant greedy algorithm, which requires exponential time, but we also show that there is a polynomial-time algorithm which creates edge fault-tolerant spanners that are larger only by factors of $k$.
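For reference, the fault-tolerant greedy algorithm mentioned above admits a direct transcription. The sketch below (our own naming, edge-fault version, using networkx) adds an edge exactly when some fault set of size at most $f$ destroys all sufficiently short paths between its endpoints; enumerating those fault sets is what makes it exponential-time.

```python
from itertools import combinations
import networkx as nx

def needs_edge(H, u, v, w, k, f):
    # True if some set F of <= f edge faults leaves no u-v path of length
    # <= (2k-1) * w in H - F, i.e. the edge (u, v) is still required.
    for r in range(f + 1):
        for F in combinations(list(H.edges), r):
            HF = H.copy()
            HF.remove_edges_from(F)
            try:
                d = nx.dijkstra_path_length(HF, u, v)
            except nx.NetworkXNoPath:
                d = float("inf")
            if d > (2 * k - 1) * w:
                return True
    return False

def ft_greedy_spanner(G, k, f):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    # Greedy order: consider edges by nondecreasing weight.
    for u, v, w in sorted(G.edges(data="weight", default=1), key=lambda t: t[2]):
        if needs_edge(H, u, v, w, k, f):
            H.add_edge(u, v, weight=w)
    return H
```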
Recent work has pinned down the existentially optimal size bounds for vertex fault-tolerant spanners: for any positive integer $k$, every $n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges resilient to $f$ vertex faults, and there are examples of input graphs on which this bound cannot be improved. However, these proofs work by analyzing the output spanner of a certain exponential-time greedy algorithm. In this work, we give the first algorithm that produces vertex fault-tolerant spanners of optimal size and which runs in polynomial time. Specifically, we give a randomized algorithm which takes $\widetilde{O}\left(f^{1-1/k} n^{2+1/k} + mf^2\right)$ time. We also derandomize our algorithm to give a deterministic algorithm with similar bounds. This reflects an exponential improvement in runtime over [Bodwin-Patel PODC 19], the only previously known algorithm for constructing optimal vertex fault-tolerant spanners.
It was recently shown that a version of the greedy algorithm gives a construction of fault-tolerant spanners that is size-optimal, at least for vertex faults. However, the algorithm to construct this spanner is not polynomial-time, and the best-known polynomial-time algorithm is significantly suboptimal. Designing a polynomial-time algorithm to construct (near-)optimal fault-tolerant spanners was given as an explicit open problem in the two most recent papers on fault-tolerant spanners ([Bodwin, Dinitz, Parter, Vassilevska Williams SODA 18] and [Bodwin, Patel PODC 19]). By modifying the greedy algorithm to run in polynomial time, we give a surprisingly simple algorithm that constructs fault-tolerant spanners which are extremely close to optimal (off by only a linear factor in the stretch). To complement this result, we also give simple distributed constructions in both the LOCAL and CONGEST models.
New optical technologies offer the ability to reconfigure network topologies dynamically, rather than setting them once and for all. This is true in both optical wide area networks (optical WANs) and in datacenters, despite the many differences between these two settings. Because of these new technologies, there has been a surge of both practical and theoretical research on algorithms to take advantage of them. In particular, Jia et al. [INFOCOM 17] designed online scheduling algorithms for dynamically reconfigurable topologies for both the makespan and sum of completion times objectives. In this paper, we work in the same setting but study an objective that is more meaningful in an online setting: the sum of flow times. The flow time of a job is the total amount of time that it spends in the system, which may be considerably smaller than its completion time if it is released late. We provide competitive algorithms for the online setting with speed augmentation, and also give a lower bound proving that speed augmentation is in fact necessary. As a side effect of our techniques, we also improve and generalize the results of Jia et al. on completion times by giving an $O(1)$-competitive algorithm for arbitrary sizes and release times even when nodes have different degree bounds, and moreover allow for the weighted sum of completion times (or flow times).
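In standard scheduling notation, a job $j$ released at time $r_j$ and finishing at time $C_j$ has flow time $F_j = C_j - r_j$; a job released late can therefore have a small flow time even when its completion time $C_j$ is large.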
We study three capacity problems in the mobile telephone model, a network abstraction that models the peer-to-peer communication capabilities implemented in most commodity smartphone operating systems. The capacity of a network expresses how much sustained throughput can be maintained for a set of communication demands, and is therefore a fundamental bound on the usefulness of a network. Because of this importance, wireless network capacity has been an active area of research for the last two decades. The three capacity problems that we study differ in the structure of the communication demands. The first problem is pairwise capacity, where the demands are (source, destination) pairs. Pairwise capacity is one of the most classical definitions, as it was analyzed in the seminal paper of Gupta and Kumar on wireless network capacity. The second problem we study is broadcast capacity, in which a single source must deliver packets to all other nodes in the network. Finally, we turn our attention to all-to-all capacity, in which all nodes must deliver packets to all other nodes. In all three of these problems we characterize the optimal achievable throughput for any given network, and design algorithms which asymptotically match this performance. We also study these problems in networks generated randomly by a process introduced by Gupta and Kumar, and fully characterize their achievable throughput. Interestingly, the techniques that we develop for all-to-all capacity also allow us to design a one-shot gossip algorithm that runs within a polylogarithmic factor of optimal in every graph. This largely resolves an open question from previous work on the one-shot gossip problem in this model.
There has been significant recent progress on algorithms for approximating graph spanners, i.e., algorithms which approximate the best spanner for a given input graph. Essentially all of these algorithms use the same basic LP relaxation, so a number of papers have studied the limitations of this approach and proved integrality gaps for this LP in a variety of settings. We extend these results by showing that even the strongest lift-and-project methods cannot help significantly, by proving polynomial integrality gaps even for $n^{\Omega(\epsilon)}$ levels of the Lasserre hierarchy, for both the directed and undirected spanner problems. We also extend these integrality gaps to related problems, notably Directed Steiner Network and Shallow-Light Steiner Network.
A $t$-spanner of a graph $G$ is a subgraph $H$ in which all distances are preserved up to a multiplicative $t$ factor. A classical result of Althöfer et al. is that for every integer $k$ and every graph $G$, there is a $(2k-1)$-spanner of $G$ with at most $O(n^{1+1/k})$ edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have few nodes of large degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the $\ell_p$-norm of their degree vector, thus simultaneously modeling the number of edges (the $\ell_1$-norm) and the maximum degree (the $\ell_{\infty}$-norm). We give precise upper bounds for all ranges of $p$ and stretch $t$: we prove that the greedy $(2k-1)$-spanner has $\ell_p$-norm at most $\max(O(n), O(n^{(k+p)/(kp)}))$, and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give generic guarantees on the approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the $\ell_1$ and $\ell_{\infty}$ norms. Finally, we show that at least in some situations, the $\ell_p$-norm behaves fundamentally differently from $\ell_1$ or $\ell_{\infty}$: there are regimes ($p=2$ and stretch $3$ in particular) where the greedy spanner has a provably superior approximation to the generic guarantee.
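The greedy $(2k-1)$-spanner analyzed above is the classical algorithm of Althöfer et al., and it admits a very short implementation; here is a minimal sketch (using networkx, with our own naming).

```python
import networkx as nx

def greedy_spanner(G, k):
    """Classical greedy (2k-1)-spanner: scan edges by nondecreasing weight and
    keep an edge only if the spanner built so far has no short enough path."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data="weight", default=1), key=lambda t: t[2]):
        try:
            d = nx.dijkstra_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > (2 * k - 1) * w:
            H.add_edge(u, v, weight=w)
    return H
```

With $f = 0$, the fault-tolerant greedy variant sketched earlier reduces to exactly this procedure.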