
Smaller Cuts, Higher Lower Bounds

Posted by: Ami Paz
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





This paper proves strong lower bounds for distributed computing in the CONGEST model, by presenting the bit-gadget: a new technique for constructing graphs with small cuts. The contribution of bit-gadgets is twofold. First, developing careful sparse graph constructions with small cuts extends known techniques to show a near-linear lower bound for computing the diameter, a result previously known only for dense graphs. Moreover, the sparseness of the construction plays a crucial role in applying it to approximations of various distance computation problems, drastically improving over what can be obtained when using dense graphs. Second, small cuts are essential for proving super-linear lower bounds, none of which were known prior to this work. In fact, they allow us to show near-quadratic lower bounds for several problems, such as exact minimum vertex cover or maximum independent set, as well as for coloring a graph with its chromatic number. Such strong lower bounds are not limited to NP-hard problems, as exemplified by two simple graph problems in P that are shown to require a quadratic and a near-quadratic number of rounds, respectively. All of the above are optimal up to logarithmic factors. In addition, the complexity of the all-pairs shortest paths problem in this context is discussed. Finally, it is shown that graph constructions for CONGEST lower bounds translate to lower bounds for the semi-streaming model, despite the two models being very different in nature.
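The bit-gadget itself is not spelled out in this abstract; as a rough, hypothetical illustration of the idea (my own simplification, not the paper's exact construction), the following Python sketch connects two node sets of size k through only O(log k) "bit nodes", so that the cut separating the two sides contains O(log k) edges even though each side holds k distinct indices.

# Illustrative sketch of a bit-gadget-style construction (simplified;
# the gadget in the paper differs in details).
def bit_gadget(k):
    """Build an undirected graph (adjacency dict) on sides A, B and their bit nodes."""
    bits = max(1, (k - 1).bit_length())      # number of bit positions needed
    adj = {}

    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    for side in "AB":
        for i in range(k):
            for h in range(bits):
                b = (i >> h) & 1             # h-th bit of index i
                # node (side, i) attaches to the bit node matching its bit value
                add_edge((side, i), (side, "bit", h, b))

    # cross edges join complementary bit nodes of the two sides; these are the
    # only edges crossing between the A side and the B side
    for h in range(bits):
        for b in (0, 1):
            add_edge(("A", "bit", h, b), ("B", "bit", h, 1 - b))

    return adj

adj = bit_gadget(8)
cut_edges = sum(1 for u, nbrs in adj.items() for v in nbrs
                if u[0] == "A" and v[0] == "B")
print("edges crossing the A/B cut:", cut_edges)   # 2 * log2(8) = 6, not Theta(k)

Reductions from two-party communication complexity then charge the (large) amount of information that must cross this (small) cut, which is what yields the near-quadratic round lower bounds described above.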


Read also

Calvin Newport, 2014
Theoreticians have studied distributed algorithms in the radio network model for close to three decades. A significant fraction of this work focuses on lower bounds for basic communication problems such as wake-up (symmetry breaking among an unknown set of nodes) and broadcast (message dissemination through an unknown network topology). In this paper, we introduce a new technique for proving this type of bound, based on a reduction from a probabilistic hitting game, that simplifies and strengthens much of this existing work. In more detail, in this single paper we prove new expected time and high probability lower bounds for wake-up and global broadcast in single and multichannel…
We study the space complexity of sketching cuts and Laplacian quadratic forms of graphs. We show that any data structure which approximately stores the sizes of all cuts in an undirected graph on $n$ vertices up to a $1+\epsilon$ error must use $\Omega(n \log n/\epsilon^2)$ bits of space in the worst case, improving the $\Omega(n/\epsilon^2)$ bound of Andoni et al. and matching the best known upper bound achieved by spectral sparsifiers. Our proof is based on a rigidity phenomenon for cut (and spectral) approximation which may be of independent interest: any two $d$-regular graphs which approximate each other's cuts significantly better than a random graph approximates the complete graph must overlap in a constant fraction of their edges.
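As a toy illustration of the cut-approximation notion used here (my own sketch, not the paper's data structure), the following Python snippet brute-forces all cuts of two small graphs and checks whether every cut size agrees up to a $1+\epsilon$ factor; the sketching question above asks how few bits suffice to answer all such queries approximately.

from itertools import combinations

def cut_size(edges, s):
    # number of edges with exactly one endpoint inside the vertex set s
    return sum((u in s) != (v in s) for u, v in edges)

def cuts_approximate(edges_g, edges_h, n, eps):
    # brute-force check over all proper cuts; only feasible for tiny n
    for r in range(1, n):
        for s in combinations(range(n), r):
            s = set(s)
            cg, ch = cut_size(edges_g, s), cut_size(edges_h, s)
            if not (ch / (1 + eps) <= cg <= ch * (1 + eps)):
                return False
    return True

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle
chord = cycle + [(0, 2)]                   # 4-cycle plus one chord
print(cuts_approximate(cycle, chord, n=4, eps=0.5))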
Given a graph $G = (V,E)$, an $(\alpha, \beta)$-ruling set is a subset $S \subseteq V$ such that the distance between any two vertices in $S$ is at least $\alpha$, and the distance between any vertex in $V$ and the closest vertex in $S$ is at most $\beta$. We present lower bounds for distributedly computing ruling sets. More precisely, for the problem of computing a $(2, \beta)$-ruling set in the LOCAL model, we show the following, where $n$ denotes the number of vertices, $\Delta$ the maximum degree, and $c$ is some universal constant independent of $n$ and $\Delta$.
- Any deterministic algorithm requires $\Omega\left(\min\left\{ \frac{\log \Delta}{\beta \log\log \Delta}, \log_\Delta n \right\}\right)$ rounds, for all $\beta \le c \cdot \min\left\{ \sqrt{\frac{\log \Delta}{\log\log \Delta}}, \log_\Delta n \right\}$. By optimizing $\Delta$, this implies a deterministic lower bound of $\Omega\left(\sqrt{\frac{\log n}{\beta \log\log n}}\right)$ for all $\beta \le c \sqrt[3]{\frac{\log n}{\log\log n}}$.
- Any randomized algorithm requires $\Omega\left(\min\left\{ \frac{\log \Delta}{\beta \log\log \Delta}, \log_\Delta \log n \right\}\right)$ rounds, for all $\beta \le c \cdot \min\left\{ \sqrt{\frac{\log \Delta}{\log\log \Delta}}, \log_\Delta \log n \right\}$. By optimizing $\Delta$, this implies a randomized lower bound of $\Omega\left(\sqrt{\frac{\log\log n}{\beta \log\log\log n}}\right)$ for all $\beta \le c \sqrt[3]{\frac{\log\log n}{\log\log\log n}}$.
For $\beta > 1$, this improves on the previously best lower bound of $\Omega(\log^* n)$ rounds that follows from the 30-year-old bounds of Linial [FOCS '87] and Naor [J. Disc. Math. '91]. For $\beta = 1$, i.e., for the problem of computing a maximal independent set, our results improve on the previously best lower bound of $\Omega(\log^* n)$ on trees, as our bounds already hold on trees.
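As a rough reconstruction of the "optimizing $\Delta$" step (my own sketch, with constants and lower-order terms ignored): balancing the two terms of the deterministic bound, $\frac{\log \Delta}{\beta \log\log \Delta} = \log_\Delta n = \frac{\log n}{\log \Delta}$, gives $\log^2 \Delta \approx \beta \log n \log\log \Delta$, so one takes $\log \Delta \approx \sqrt{\beta \log n \log\log n}$. Both terms then become roughly $\frac{\log n}{\log \Delta} \approx \sqrt{\frac{\log n}{\beta \log\log n}}$, which is the stated deterministic bound; the randomized bound follows from the same calculation with $\log n$ replaced by $\log\log n$.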
We consider an online binary prediction setting where a forecaster observes a sequence of $T$ bits one by one. Before each bit is revealed, the forecaster predicts the probability that the bit is $1$. The forecaster is called well-calibrated if for each $p \in [0, 1]$, among the $n_p$ bits for which the forecaster predicts probability $p$, the actual number of ones, $m_p$, is indeed equal to $p \cdot n_p$. The calibration error, defined as $\sum_p |m_p - p\, n_p|$, quantifies the extent to which the forecaster deviates from being well-calibrated. It has long been known that an $O(T^{2/3})$ calibration error is achievable even when the bits are chosen adversarially, and possibly based on the previous predictions. However, little is known on the lower bound side, except an $\Omega(\sqrt{T})$ bound that follows from the trivial example of independent fair coin flips. In this paper, we prove an $\Omega(T^{0.528})$ bound on the calibration error, which is the first super-$\sqrt{T}$ lower bound for this setting to the best of our knowledge. The technical contributions of our work include two lower bound techniques, early stopping and sidestepping, which circumvent the obstacles that have previously hindered strong calibration lower bounds. We also propose an abstraction of the prediction setting, termed the Sign-Preservation game, which may be of independent interest. This game has a much smaller state space than the full prediction setting and allows simpler analyses. The $\Omega(T^{0.528})$ lower bound follows from a general reduction theorem that translates lower bounds on the game value of Sign-Preservation into lower bounds on the calibration error.
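To make the definition concrete, here is a minimal Python sketch (illustrative only, not from the paper) that computes the calibration error $\sum_p |m_p - p\, n_p|$ from a sequence of predictions and revealed bits.

from collections import defaultdict

def calibration_error(predictions, outcomes):
    """predictions: forecast probabilities; outcomes: the revealed bits (0/1)."""
    n = defaultdict(int)   # n_p: number of rounds with prediction p
    m = defaultdict(int)   # m_p: number of ones among those rounds
    for p, bit in zip(predictions, outcomes):
        n[p] += 1
        m[p] += bit
    return sum(abs(m[p] - p * n[p]) for p in n)

# forecaster predicts 0.5 on four fair-coin-like bits
print(calibration_error([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))   # |3 - 2.0| = 1.0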
The prior independent framework for algorithm design considers how well an algorithm that does not know the distribution of its inputs approximates the expected performance of the optimal algorithm for this distribution. This paper gives a method that is agnostic to problem setting for proving lower bounds on the prior independent approximation factor of any algorithm. The method constructs a correlated distribution over inputs that can be generated both as a distribution over i.i.d. good-for-algorithms distributions and as a distribution over i.i.d. bad-for-algorithms distributions. Prior independent algorithms are upper-bounded by the optimal algorithm for the latter distribution even when the true distribution is the former. Thus, the ratio of the expected performances of the Bayesian optimal algorithms for these two decompositions is a lower bound on the prior independent approximation ratio. The techniques of the paper connect prior independent algorithm design, Yao's Minimax Principle, and information design. We apply this framework to give new lower bounds on several canonical prior independent mechanism design problems.