
Improved upper and lower bounds for the point placement problem

Added by Md. Shafiul Alam
Publication date: 2012
Research language: English





The point placement problem is to determine the positions of a set of $n$ distinct points, $P = \{p_1, p_2, p_3, \ldots, p_n\}$, on a line uniquely, up to translation and reflection, from the fewest possible distance queries between pairs of points. Each distance query corresponds to an edge in a graph, called the point placement graph (ppg), whose vertex set is $P$. The uniqueness requirement of the placement translates to line rigidity of the ppg. In this paper we show how to construct in 2 rounds a line rigid point placement graph of size $9n/7 + O(1)$. This improves the existing best result of $4n/3 + O(1)$. We also improve the lower bound on 2-round algorithms from $17n/16$ to $9n/8$.
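To make the line rigidity condition concrete, here is a minimal sketch (not the paper's construction; the function name and tolerance are illustrative) contrasting a path of two distance queries with a 3-cycle. The path admits two placements that differ by more than translation and reflection, while the extra triangle edge removes the ambiguity.

```python
# Minimal sketch (not the paper's algorithm): why a path of distance
# queries is not line rigid, while a 3-cycle is.

def consistent_layouts(d12, d23, d13=None):
    """Enumerate placements of p1, p2, p3 on a line with p1 = 0 and p2 > 0,
    i.e., positions fixed up to translation and reflection."""
    layouts = []
    p1, p2 = 0.0, d12                      # fix translation and reflection
    for p3 in (p2 - d23, p2 + d23):        # p3 can sit on either side of p2
        if d13 is None or abs(abs(p3 - p1) - d13) < 1e-9:
            layouts.append((p1, p2, p3))
    return layouts

# Path p1-p2-p3 (2 queries): two essentially different placements survive,
# so the path is not line rigid.
print(consistent_layouts(d12=3.0, d23=1.0))           # [(0.0, 3.0, 2.0), (0.0, 3.0, 4.0)]

# Triangle p1-p2-p3 (3 queries): the extra edge removes the ambiguity.
print(consistent_layouts(d12=3.0, d23=1.0, d13=4.0))  # [(0.0, 3.0, 4.0)]
```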




Read More

Classic dynamic data structure problems maintain a data structure subject to a sequence S of updates and answer queries using the latest version of the data structure, i.e., the data structure after processing the whole sequence. To handle operations that change the sequence S of updates, Demaine et al. (TALG 2007) introduced retroactive data structures. A retroactive operation modifies the update sequence S at a given position t, called time, and either creates or cancels an update in S at time t. A partially retroactive data structure restricts queries to be executed exclusively on the latest version of the data structure. A fully retroactive data structure supports queries at any time t: a query at time t is answered using only the updates of S up to time t. If the sequence S consists only of insertions, the resulting data structure is an incremental retroactive data structure. While efficient retroactive data structures have been proposed for classic data structures, e.g., the stack, priority queue and binary search tree, retroactive versions of graph problems are rarely studied. In this paper we study retroactive graph problems including connectivity, minimum spanning forest (MSF), maximum degree, etc. We provide fully retroactive data structures for maintaining the maximum degree, connectivity and MSF in $\tilde{O}(n)$ time per operation. We also give an algorithm for incremental fully retroactive connectivity with $\tilde{O}(1)$ time per operation. We complement our algorithms with almost tight hardness results. We show that, under the OMv conjecture (proposed by Henzinger et al. (STOC 2015)), there does not exist a fully retroactive data structure maintaining connectivity or MSF, or an incremental fully retroactive data structure maintaining the maximum degree, with $O(n^{1-\epsilon})$ time per operation, for any constant $\epsilon > 0$.
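As a toy illustration of the partially retroactive interface (not the paper's graph data structures; the class and method names are made up for the example), the sketch below maintains a single running sum whose update history can be edited. Only the present value is queryable, which is exactly the partially retroactive restriction.

```python
# Illustrative sketch (not the paper's construction): a partially
# retroactive "sum" structure. Updates can be inserted into or removed
# from the past; queries only see the resulting present.
import bisect

class PartiallyRetroactiveSum:
    def __init__(self):
        self.timeline = []          # sorted list of (time, delta); one update per time

    def insert(self, t, delta):    # retroactively add an update at time t
        bisect.insort(self.timeline, (t, delta))

    def delete(self, t):           # retroactively cancel the update at time t
        i = bisect.bisect_left(self.timeline, (t,))
        if i < len(self.timeline) and self.timeline[i][0] == t:
            self.timeline.pop(i)

    def query_now(self):           # partially retroactive: latest version only
        return sum(d for _, d in self.timeline)

s = PartiallyRetroactiveSum()
s.insert(10, +5)
s.insert(20, +7)
s.insert(5, +1)        # inserted "in the past", before the other updates
s.delete(20)           # cancel an update that was already in the timeline
print(s.query_now())   # 6
```

A fully retroactive version would additionally answer the sum at any past time t, using only the updates with time at most t.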
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length $m$ and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an $O(m^{1/2-\varepsilon})$ time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
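For intuition, the hedged baseline below handles the Hamming-distance variant naively: a single-symbol update costs O(1) and each query recomputes the distance against the pattern in O(m). The conditional lower bound above says that, under the OMv conjecture, no algorithm can get all operations down to $O(m^{1/2-\varepsilon})$. The class and its methods are illustrative, not from the paper.

```python
# Naive baseline (for illustration only): dynamic Hamming distance between
# a fixed pattern and substrings of a text under single-symbol updates.
class DynamicHamming:
    def __init__(self, text, pattern):
        self.text = list(text)
        self.pattern = pattern

    def update(self, i, c):                 # change one symbol of the text, O(1)
        self.text[i] = c

    def query(self, start):                 # Hamming distance to text[start:start+m], O(m)
        m = len(self.pattern)
        window = self.text[start:start + m]
        return sum(a != b for a, b in zip(self.pattern, window))

ds = DynamicHamming("abracadabra", "abr")
print(ds.query(0))      # 0: "abr" vs "abr"
ds.update(1, "x")       # text becomes "axracadabra"
print(ds.query(0))      # 1: "abr" vs "axr"
```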
We consider the file maintenance problem (also called the online labeling problem) in which n integer items from the set {1,...,r} are to be stored in an array of size m >= n. The items are presented sequentially in an arbitrary order, and must be stored in the array in sorted order (but not necessarily in consecutive locations in the array). Each new item must be stored in the array before the next item is received. If r <= m, then we can simply store item j in location j, but if r > m then we may have to shift the locations of stored items to make space for a newly arrived item. The algorithm is charged each time an item is stored in the array, or moved to a new location. The goal is to minimize the total number of such moves done by the algorithm. This problem is non-trivial when n <= m < r. In the case that m = Cn for some C > 1, algorithms for this problem with cost O(log(n)^2) per item have been given [IKR81, Wil92, BCD+02]. When m = n, algorithms with cost O(log(n)^3) per item were given [Zha93, BS07]. In this paper we prove lower bounds that show that these algorithms are optimal, up to constant factors. Previously, the only lower bound known for this range of parameters was a lower bound of Omega(log(n)^2) for the restricted class of smooth algorithms [DSZ05a, Zha93]. We also provide an algorithm for the sparse case: if the number of items is polylogarithmic in the array size then the problem can be solved in amortized constant time per item.
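The sketch below implements the naive "keep everything packed and sorted" strategy only to make the cost model concrete: every write into the array is charged as one move, and insertions near the front force many moves per item, far from the O(log(n)^2) amortized bounds cited above. All names are illustrative.

```python
# Naive strategy (illustrates the cost model, not the cited algorithms):
# keep the stored items sorted and consecutive at the front of the array,
# shifting on every insertion.
import bisect

class NaiveFileMaintenance:
    def __init__(self, m):
        self.array = [None] * m     # the array of size m
        self.items = []             # stored items, kept sorted
        self.moves = 0              # total cost charged so far

    def insert(self, x):
        pos = bisect.bisect_left(self.items, x)
        self.items.insert(pos, x)
        # every item at or after pos is rewritten, and each write is charged
        for j in range(pos, len(self.items)):
            self.array[j] = self.items[j]
            self.moves += 1

arr = NaiveFileMaintenance(m=8)
for x in [5, 2, 9, 1]:              # inserting near the front forces shifts
    arr.insert(x)
print(arr.array[:4], arr.moves)     # [1, 2, 5, 9] and 8 moves charged in total
```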
We study the quantum query complexity of two problems. First, we consider the problem of determining if a sequence of parentheses is a properly balanced one (a Dyck word), with a depth of at most $k$. We call this the $Dyck_{k,n}$ problem. We prove a lower bound of $\Omega(c^k \sqrt{n})$, showing that the complexity of this problem increases exponentially in $k$. Here $n$ is the length of the word. When $k$ is a constant, this is interesting as a representative example of star-free languages for which a surprising $\tilde{O}(\sqrt{n})$ query quantum algorithm was recently constructed by Aaronson et al. Their proof does not give rise to a general algorithm. When $k$ is not a constant, $Dyck_{k,n}$ is not context-free. We give an algorithm with $O\left(\sqrt{n}(\log n)^{0.5k}\right)$ quantum queries for $Dyck_{k,n}$ for all $k$. This is better than the trivial upper bound $n$ for $k=o\left(\frac{\log n}{\log\log n}\right)$. Second, we consider connectivity problems on grid graphs in 2 dimensions, when some of the edges of the grid may be missing. By embedding the balanced parentheses problem into the grid, we show a lower bound of $\Omega(n^{1.5-\epsilon})$ for the directed 2D grid and $\Omega(n^{2-\epsilon})$ for the undirected 2D grid. The directed problem is interesting as a black-box model for a class of classical dynamic programming strategies including the one that is usually used for the well-known edit distance problem. We also show a generalization of this result to more than 2 dimensions.
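To pin down the problem being queried, here is the standard classical one-pass check that a word is a Dyck word of depth at most k. The paper's results concern quantum query complexity, so this linear scan is only a reference point, not the algorithm in question.

```python
# Classical check (illustration of the problem, not the quantum algorithm):
# decide whether w is a balanced parenthesis string of nesting depth <= k.
def is_dyck_depth_at_most_k(w, k):
    depth = 0
    for ch in w:
        depth += 1 if ch == "(" else -1
        if depth < 0 or depth > k:      # unmatched ")" or nesting too deep
            return False
    return depth == 0                   # every "(" must be matched

print(is_dyck_depth_at_most_k("(()())", 2))   # True: balanced, depth 2
print(is_dyck_depth_at_most_k("((()))", 2))   # False: depth 3 exceeds k
print(is_dyck_depth_at_most_k("())(", 2))     # False: not balanced
```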
An assignment of colours to the vertices of a graph is stable if any two vertices of the same colour have identically coloured neighbourhoods. The goal of colour refinement is to find a stable colouring that uses a minimum number of colours. This is a widely used subroutine for graph isomorphism testing algorithms, since any automorphism needs to be colour preserving. We give an $O((m+n)\log n)$ algorithm for finding a canonical version of such a stable colouring, on graphs with $n$ vertices and $m$ edges. We show that no faster algorithm is possible, under some modest assumptions about the type of algorithm, which captures all known colour refinement algorithms.
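For context, the sketch below runs the basic colour refinement iteration (1-dimensional Weisfeiler-Leman): repeatedly split colour classes by the multiset of neighbour colours until the colouring is stable. The paper's contribution is an $O((m+n)\log n)$ canonical implementation of this refinement; the naive version here only illustrates the refinement step.

```python
# Basic colour refinement (1-WL), for illustration only; not the paper's
# O((m+n) log n) canonical algorithm.
def colour_refinement(adj):
    """adj: dict mapping each vertex to a list of its neighbours."""
    colour = {v: 0 for v in adj}                      # start monochromatic
    while True:
        # a vertex's new colour is its old colour plus the sorted multiset of
        # neighbour colours; vertices with identical signatures stay together
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: relabel[signature[v]] for v in adj}
        if new_colour == colour:                      # stable colouring reached
            return colour
        colour = new_colour

# Path a-b-c-d: the endpoints get one colour, the middle vertices another.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(colour_refinement(path))   # {'a': 0, 'b': 1, 'c': 1, 'd': 0}
```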
