
Separating Hierarchical and General Hub Labelings

Posted by: Ilya Razenshteyn
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





In the context of distance oracles, a labeling algorithm computes vertex labels during preprocessing. An $s,t$ query then computes the corresponding distance from the labels of $s$ and $t$ alone, without looking at the input graph. Hub labels are a class of labels that has been extensively studied; the performance of a hub label query depends on the label size. Hierarchical labels are a natural special kind of hub labels: they are related to other problems and can be computed more efficiently, which raises the question of how good hierarchical labels can be. We show that there is a gap: optimal hierarchical labels can be polynomially bigger than general hub labels. To prove this result, we give tight upper and lower bounds on the size of hierarchical and general labels for hypercubes.
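To make the query model concrete, here is a minimal Python sketch (not from the paper) of a hub-label distance query: each vertex stores a label mapping hub vertices to their distances, and an $s,t$ query scans the two labels for the best common hub. The dictionary encoding and the toy path graph are illustrative assumptions.

```python
from math import inf

def hub_query(label_s, label_t):
    """label_s, label_t: dicts mapping hub vertex -> distance to that hub.
    Returns the s-t distance via the best hub common to both labels
    (exact whenever the labels satisfy the cover property)."""
    best = inf
    for hub, d_s in label_s.items():
        d_t = label_t.get(hub)
        if d_t is not None and d_s + d_t < best:
            best = d_s + d_t
    return best

# Toy labels for the path a - b - c with unit edges; b is the top hub.
# These labels happen to be hierarchical for the order b > a, c.
L = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"b": 1, "c": 0},
}
print(hub_query(L["a"], L["c"]))  # 2: via the common hub b
```

Note that the query time is linear in the label sizes, which is why label size is the quality measure studied in the abstract above.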




Read also

A rich line of work has been addressing the computational complexity of locally checkable labelings (LCLs), illustrating the landscape of possible complexities. In this paper, we study the landscape of LCL complexities under bandwidth restrictions. Our main results are twofold. First, we show that on trees, the CONGEST complexity of an LCL problem is asymptotically equal to its complexity in the LOCAL model. An analogous statement for general (non-LCL) problems is known to be false. Second, we show that for general graphs this equivalence does not hold, by providing an LCL problem which we show can be solved in $O(\log n)$ rounds in the LOCAL model, but requires $\tilde{\Omega}(n^{1/2})$ rounds in the CONGEST model.
Hub Labeling (HL) is a data structure for distance oracles. Hierarchical HL (HHL) is a special type of HL that has received a lot of attention from a practical point of view. However, theoretical questions such as NP-hardness and approximation guarantees for HHL algorithms have been left aside. In this paper we study HL and HHL from the complexity-theoretic point of view. We prove that both HL and HHL are NP-hard, and present upper and lower bounds for the approximation ratios of the greedy HHL algorithms used in practice. We also introduce a new variant of the greedy HHL algorithm and prove that it produces small labels for graphs with small highway dimension.
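For intuition about how hierarchical labels are built, the following Python sketch constructs them for an unweighted graph by pruned BFS, in the spirit of pruned landmark labeling; it is not the greedy HHL algorithm this paper analyzes, and the function names and toy graph are illustrative. Hubs are processed from highest rank to lowest, so a hub only ever enters the labels of vertices ranked at or below it, which is the hierarchical property.

```python
from collections import deque

def hierarchical_labels(adj, order):
    """adj: {v: [neighbors]}; order: vertices from highest to lowest rank.
    Returns {v: {hub: distance}} hub labels for an unweighted graph."""
    labels = {v: {} for v in adj}

    def query(u, v):
        # Distance estimate from the labels built so far.
        common = labels[u].keys() & labels[v].keys()
        return min((labels[u][h] + labels[v][h] for h in common),
                   default=float("inf"))

    for hub in order:
        dist = {hub: 0}
        queue = deque([hub])
        while queue:
            v = queue.popleft()
            # Prune: if current labels already certify a path this short,
            # a higher-ranked hub covers the pair (hub, v).
            if query(hub, v) <= dist[v]:
                continue
            labels[v][hub] = dist[v]
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    return labels

# Toy path a - b - c, ranking b highest:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(hierarchical_labels(adj, ["b", "a", "c"]))
# {'a': {'b': 1, 'a': 0}, 'b': {'b': 0}, 'c': {'b': 1, 'c': 0}}
```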
We consider graph properties that can be checked from labels, i.e., bit sequences, of logarithmic length attached to vertices. We prove that there exists such a labeling for checking a first-order formula with free set variables in the graphs of every class that is \emph{nicely locally cwd-decomposable}. This notion generalizes that of a \emph{nicely locally tree-decomposable} class. The graphs of such classes can be covered by graphs of bounded \emph{clique-width} with limited overlaps. We also consider such labelings for \emph{bounded} first-order formulas on graph classes of \emph{bounded expansion}. Some of these results are extended to counting queries.
We present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space. This is the first separation between oblivious streaming and adversarially-robust streaming, and resolves one of the central open questions in adversarially robust streaming.
Recently, Hierarchical Clustering (HC) has been considered through the lens of optimization. In particular, two maximization objectives have been defined. Moseley and Wang defined the \emph{Revenue} objective to handle similarity information given by a weighted graph on the data points (w.l.o.g., $[0,1]$ weights), while Cohen-Addad et al. defined the \emph{Dissimilarity} objective to handle dissimilarity information. In this paper, we prove structural lemmas for both objectives that allow us to convert any HC tree to a tree with a constant number of internal nodes while incurring an arbitrarily small loss in each objective. Although the best-known approximations are 0.585 and 0.667 respectively, using our lemmas we obtain approximations arbitrarily close to 1 if not all weights are small (i.e., there exist constants $\epsilon, \delta$ such that the fraction of weights smaller than $\delta$ is at most $1 - \epsilon$); such instances encompass many metric-based similarity instances, thereby improving upon prior work. Finally, we introduce Hierarchical Correlation Clustering (HCC) to handle instances that contain similarity and dissimilarity information simultaneously. For HCC, we provide an approximation of 0.4767, and for complementary similarity/dissimilarity weights (analogous to $+/-$ correlation clustering), we again present nearly-optimal approximations.
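As a concrete reference for the two objectives, here is a small Python sketch (assumptions: trees encoded as nested tuples, weights as a dict on leaf pairs) that evaluates the Moseley-Wang Revenue, $\sum_{i<j} w_{ij}(n - |T_{ij}|)$, and the Cohen-Addad et al. Dissimilarity objective, $\sum_{i<j} w_{ij}|T_{ij}|$, where $T_{ij}$ is the set of leaves under the least common ancestor of $i$ and $j$.

```python
from itertools import combinations

def objective(tree, w, n, kind):
    """tree: nested tuples with string leaves; w: frozenset({i,j}) -> weight.
    Sums w(i,j) * (n - |T_ij|) for "revenue", or w(i,j) * |T_ij| for
    "dissimilarity", over all leaf pairs i, j."""
    total = 0.0

    def rec(t):
        nonlocal total
        if not isinstance(t, tuple):          # a leaf
            return [t]
        child_leaves = [rec(c) for c in t]    # leaves under each child
        merged = [x for cl in child_leaves for x in cl]
        k = len(merged)                       # |T_ij| for pairs split here
        # Leaf pairs in different children have their LCA at this node.
        for a, b in combinations(child_leaves, 2):
            for i in a:
                for j in b:
                    wij = w.get(frozenset((i, j)), 0.0)
                    total += wij * (n - k if kind == "revenue" else k)
        return merged

    rec(tree)
    return total

# Toy instance (in practice similarity and dissimilarity weights differ):
w = {frozenset(("a", "b")): 0.9, frozenset(("a", "c")): 0.1,
     frozenset(("b", "c")): 0.2}
T = (("a", "b"), "c")                          # merge a,b first, then add c
print(objective(T, w, 3, "revenue"))           # 0.9*(3-2) = 0.9
print(objective(T, w, 3, "dissimilarity"))     # 0.9*2 + (0.1+0.2)*3 = 2.7
```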