
On Online Labeling with Polynomially Many Labels

Posted by: Jan Bulánek
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





In the online labeling problem with parameters n and m we are presented with a sequence of n keys from a totally ordered universe U and must assign each arriving key a label from the label set {1,2,...,m} so that the order of labels (strictly) respects the ordering on U. As new keys arrive it may be necessary to change the labels of some items; such changes may be done at any time at unit cost for each change. The goal is to minimize the total cost. An alternative formulation of this problem is the file maintenance problem, in which the items, instead of being labeled, are maintained in sorted order in an array of length m, and we pay unit cost for moving an item. For the case m=cn for constant c>1, there are known algorithms that use at most O(n log(n)^2) relabelings in total [Itai, Konheim, Rodeh, 1981], and it was shown recently that this is asymptotically optimal [Bulanek, Koucky, Saks, 2012]. For the case of m=Θ(n^C) for C>1, algorithms are known that use O(n log n) relabelings. A matching lower bound was claimed in [Dietz, Seiferas, Zhang, 2004]. That proof involved two distinct steps: a lower bound for a problem they call prefix bucketing and a reduction from prefix bucketing to online labeling. The reduction seems to be incorrect, leaving a (seemingly significant) gap in the proof. In this paper we close the gap by presenting a correct reduction to prefix bucketing. Furthermore we give a simplified and improved analysis of the prefix bucketing lower bound. This improvement allows us to extend the lower bounds for online labeling to the case where the number m of labels is superpolynomial in n. In particular, for superpolynomial m we get an asymptotically optimal lower bound Ω((n log n) / (log log m - log log n)).
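
To make the problem setup concrete, here is a minimal sketch of an online labeling simulation in Python. The strategy and all names are illustrative only (they are not the algorithms analyzed in the paper): a new key receives the midpoint of its neighbors' labels, and when a gap is exhausted all current labels are spread evenly over {1,...,m}. The sketch assumes m is comfortably larger than n (roughly m >= 2(n+1)).

import bisect

def online_label(keys, m):
    # Process the keys online and return the total number of label
    # assignments and relabelings (unit cost each).
    stored = []   # keys seen so far, kept in sorted order
    labels = []   # labels[i] is the current label of stored[i]
    cost = 0
    for key in keys:
        i = bisect.bisect_left(stored, key)
        lo = labels[i - 1] if i > 0 else 0            # left neighbor's label (0 = virtual minimum)
        hi = labels[i] if i < len(labels) else m + 1  # right neighbor's label (m+1 = virtual maximum)
        if hi - lo <= 1:                              # no free label left in this gap: rebalance
            n = len(stored)
            labels = [(j + 1) * m // (n + 1) for j in range(n)]
            cost += n                                 # every stored item may be relabeled
            lo = labels[i - 1] if i > 0 else 0
            hi = labels[i] if i < len(labels) else m + 1
        stored.insert(i, key)
        labels.insert(i, (lo + hi) // 2)              # new key gets the midpoint label
        cost += 1
    return cost

# Example: 1000 keys arriving in increasing order, with m = 4n labels.
print(online_label(range(1000), 4000))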


Read also

We consider the file maintenance problem (also called the online labeling problem) in which n integer items from the set {1,...,r} are to be stored in an array of size m >= n. The items are presented sequentially in an arbitrary order, and must be stored in the array in sorted order (but not necessarily in consecutive locations in the array). Each new item must be stored in the array before the next item is received. If r <= m then we can simply store item j in location j, but if r > m then we may have to shift the location of stored items to make space for a newly arrived item. The algorithm is charged each time an item is stored in the array, or moved to a new location. The goal is to minimize the total number of such moves done by the algorithm. This problem is non-trivial when n <= m < r. In the case that m=Cn for some C>1, algorithms for this problem with cost O(log(n)^2) per item have been given [IKR81, Wil92, BCD+02]. When m=n, algorithms with cost O(log(n)^3) per item were given [Zha93, BS07]. In this paper we prove lower bounds that show that these algorithms are optimal, up to constant factors. Previously, the only lower bound known for this range of parameters was a lower bound of Ω(log(n)^2) for the restricted class of smooth algorithms [DSZ05a, Zha93]. We also provide an algorithm for the sparse case: if the number of items is polylogarithmic in the array size then the problem can be solved in amortized constant time per item.
We show that graphs that do not contain a theta, pyramid, prism, or turtle as an induced subgraph have polynomially many minimal separators. This result is the best possible in the sense that there are graphs with exponentially many minimal separators if only three of the four induced subgraphs are excluded. As a consequence, there is a polynomial time algorithm to solve the maximum weight independent set problem for the class of (theta, pyramid, prism, turtle)-free graphs. Since every prism, theta, and turtle contains an even hole, this also implies a polynomial time algorithm to solve the maximum weight independent set problem for the class of (pyramid, even hole)-free graphs.
In this paper, we show a connection between a certain online low-congestion routing problem and online prediction of graph labeling. More specifically, we prove that if there exists a routing scheme that guarantees a congestion of $\alpha$ on any edge, then there exists an online prediction algorithm with mistake bound $\alpha$ times the cut size, which is the size of the cut induced by the label partitioning of graph vertices. With the previously known bound of $O(\log n)$ on $\alpha$ for the routing problem on trees with $n$ vertices, we obtain an improved prediction algorithm for graphs with high effective resistance. In contrast to previous approaches that move the graph problem into problems in vector space using the graph Laplacian and rely on the analysis of the perceptron algorithm, our proofs are purely combinatorial. Furthermore, our approach directly generalizes to the case where labels are not binary.
Hub Labeling (HL) is a data structure for distance oracles. Hierarchical HL (HHL) is a special type of HL that has received a lot of attention from a practical point of view. However, theoretical questions such as NP-hardness and approximation guarantees for HHL algorithms have been left aside. In this paper we study HL and HHL from the complexity theory point of view. We prove that both HL and HHL are NP-hard, and present upper and lower bounds for the approximation ratios of greedy HHL algorithms used in practice. We also introduce a new variant of the greedy HHL algorithm and prove that it produces small labels for graphs with small highway dimension.
Given a graph, an $L(p,1)$-labeling of the graph is an assignment $f$ from the vertex set to the set of nonnegative integers such that for any pair of vertices $(u,v)$, $|f(u) - f(v)| \ge p$ if $u$ and $v$ are adjacent, and $f(u) \neq f(v)$ if $u$ and $v$ are at distance $2$. The $L(p,1)$-labeling problem is to minimize the span of $f$ (i.e., $\max_{u\in V} f(u) - \min_{u\in V} f(u) + 1$). It is known to be NP-hard even for graphs of maximum degree $3$ or graphs with tree-width $2$, whereas it is fixed-parameter tractable with respect to vertex cover number. Since vertex cover number is among the strongest parameters, there is a large gap between tractability and intractability from the viewpoint of parameterization. To fill this gap, in this paper we propose new fixed-parameter algorithms for $L(p,1)$-Labeling parameterized by the twin cover number plus the maximum clique size and by the tree-width plus the maximum degree. These algorithms reduce the gap in terms of several combinations of parameters.
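
As an illustration of the $L(p,1)$-labeling definition above, here is a small checker for the two constraints and the span. The function names, graph representation, and example are illustrative assumptions, not taken from the paper.

from itertools import combinations

def is_L_p1_labeling(adj, f, p):
    # adj: dict mapping each vertex to the set of its neighbors
    # f:   dict mapping each vertex to a nonnegative integer label
    for u, v in combinations(adj, 2):
        if v in adj[u]:                 # adjacent vertices: labels must differ by at least p
            if abs(f[u] - f[v]) < p:
                return False
        elif adj[u] & adj[v]:           # non-adjacent with a common neighbor: distance 2, labels must differ
            if f[u] == f[v]:
                return False
    return True

def span(f):
    return max(f.values()) - min(f.values()) + 1

# Example: the path a-b-c with p = 2; the labeling 0, 2, 4 is valid and has span 5.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
f = {'a': 0, 'b': 2, 'c': 4}
print(is_L_p1_labeling(adj, f, 2), span(f))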