
The VC Dimension of Metric Balls under Fréchet and Hausdorff Distances

Published by Ioannis Psarros
Publication date: 2019
Research field: Informatics Engineering
Language: English




The Vapnik-Chervonenkis dimension provides a notion of complexity for systems of sets. If the VC dimension is small, then knowing this can drastically simplify fundamental computational tasks such as classification, range counting, and density estimation through the use of sampling bounds. We analyze set systems where the ground set $X$ is a set of polygonal curves in $\mathbb{R}^d$ and the sets $\mathcal{R}$ are metric balls defined by curve similarity metrics, such as the Fréchet distance and the Hausdorff distance, as well as their discrete counterparts. We derive upper and lower bounds on the VC dimension that imply useful sampling bounds in the setting where the number of curves is large but the complexity of the individual curves is small. Our upper bounds are either near-quadratic or near-linear in the complexity of the curves that define the ranges, and they are logarithmic in the complexity of the curves that define the ground set.
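For intuition about the ranges in question, here is a minimal sketch (not code from the paper) of the discrete Fréchet distance, one of the similarity metrics that define the metric balls above; the function name and the brute-force dynamic program are illustrative choices.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P and Q,
    given as sequences of points in R^d (tuples of floats).
    Classic O(|P|*|Q|) dynamic program over prefix pairs."""
    P, Q = tuple(map(tuple, P)), tuple(map(tuple, Q))

    @lru_cache(maxsize=None)
    def dp(i, j):
        # dp(i, j): distance restricted to the prefixes P[:i+1], Q[:j+1]
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(dp(0, j - 1), d)
        if j == 0:
            return max(dp(i - 1, 0), d)
        return max(min(dp(i - 1, j), dp(i - 1, j - 1), dp(i, j - 1)), d)

    return dp(len(P) - 1, len(Q) - 1)

# A metric ball of radius r centered at a curve C is then
# {P : discrete_frechet(C, P) <= r}; the paper bounds the VC dimension
# of the set system formed by all such balls.
```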




Read also

In many real-world applications data come as discrete metric spaces sampled around 1-dimensional filamentary structures that can be seen as metric graphs. In this paper we address the metric reconstruction problem of such filamentary structures from data sampled around them. We prove that they can be approximated, with respect to the Gromov-Hausdorff distance, by well-chosen Reeb graphs (and some of their variants), and we provide an efficient and easy-to-implement algorithm to compute such approximations in almost linear time. We illustrate the performance of our algorithm on a few synthetic and real data sets.
We study approximate-near-neighbor data structures for time series under the continuous Fréchet distance. For an attainable approximation factor $c>1$ and a query radius $r$, an approximate-near-neighbor data structure can be used to preprocess $n$ curves in $\mathbb{R}$ (aka time series), each of complexity $m$, to answer queries with a curve of complexity $k$ by either returning a curve that lies within Fréchet distance $cr$, or answering that there exists no curve in the input within distance $r$. In both cases, the answer is correct. Our first data structure achieves a $(5+\epsilon)$ approximation factor, uses space in $n\cdot \mathcal{O}\left(\epsilon^{-1}\right)^{k} + \mathcal{O}(nm)$ and has query time in $\mathcal{O}(k)$. Our second data structure achieves a $(2+\epsilon)$ approximation factor, uses space in $n\cdot \mathcal{O}\left(\frac{m}{k\epsilon}\right)^{k} + \mathcal{O}(nm)$ and has query time in $\mathcal{O}\left(k\cdot 2^k\right)$. Our third positive result is a probabilistic data structure based on locality-sensitive hashing, which achieves space in $\mathcal{O}(n\log n+nm)$ and query time in $\mathcal{O}(k\log n)$, and which answers queries with an approximation factor in $\mathcal{O}(k)$. All of our data structures make use of the concept of signatures, which were originally introduced for the problem of clustering time series under the Fréchet distance. In addition, we show lower bounds for this problem. Consider any data structure which achieves an approximation factor less than $2$, supports curves of arclength up to $L$, and answers the query using only a constant number of probes. We show that, under reasonable assumptions on the word size, any such data structure needs space in $L^{\Omega(k)}$.
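As a rough illustration of the signature idea mentioned above, here is a heavily simplified sketch, not the exact definition from the clustering literature: a signature thins a time series down to the vertices that matter at scale $\delta$.

```python
def signature(ts, delta):
    """Heavily simplified sketch of a delta-signature of a time series:
    keep the first sample, then every sample that moves more than
    2*delta away from the last kept one, then the final sample.
    The real definition keeps alternating minima/maxima with precise
    guarantees; this only conveys the flavor."""
    sig = [ts[0]]
    for x in ts[1:]:
        if abs(x - sig[-1]) > 2 * delta:
            sig.append(x)
    if sig[-1] != ts[-1]:
        sig.append(ts[-1])
    return sig

# Example: oscillations below the 2*delta threshold are dropped.
print(signature([0.0, 0.1, -0.1, 3.0, 2.9, 0.0], delta=0.5))
# -> [0.0, 3.0, 0.0]
```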
Computing the similarity of two point sets is a ubiquitous task in medical imaging, geometric shape comparison, trajectory analysis, and many more settings. Arguably the most basic distance measure for this task is the Hausdorff distance, which assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of any assigned pair. A drawback is that this distance measure is not translation-invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible. Fortunately, there is a canonical translation-invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size $n$ and $m$, the Hausdorff distance under translation can be computed in time $\tilde{O}(nm)$ for the $L_1$ and $L_\infty$ norms [Chew, Kedem SWAT'92] and $\tilde{O}(nm(n+m))$ for the $L_2$ norm [Huttenlocher, Kedem, Sharir DCG'93]. As these bounds have not been improved for over 25 years, in this paper we approach the Hausdorff distance under translation from the perspective of fine-grained complexity theory. We show (i) a matching lower bound of $(nm)^{1-o(1)}$ for $L_1$ and $L_\infty$ (and all other $L_p$ norms) assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of $n^{2-o(1)}$ for $L_2$ in the imbalanced case of $m = O(1)$ assuming the 3SUM Hypothesis.
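The definition in the abstract translates directly into code. Below is a brute-force sketch of the plain Hausdorff distance, plus a naive stand-in for the translation-minimized variant; the finite grid of candidate shifts is a hypothetical simplification, and the cited algorithms solve the continuous problem far more efficiently.

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A and B under the
    L2 norm, exactly as described above: assign each point the nearest
    point of the other set, then take the maximum assigned distance.
    Brute force, O(|A|*|B|)."""
    def directed(S, T):
        return max(min(math.dist(s, t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

def hausdorff_under_translation(A, B, candidate_shifts):
    """Naive illustration of the translation-invariant version for 2D
    point sets: minimize the Hausdorff distance over a finite set of
    candidate translations of A. Only a sketch; [Chew, Kedem SWAT'92]
    and [Huttenlocher, Kedem, Sharir DCG'93] do this properly."""
    return min(
        hausdorff([(ax + dx, ay + dy) for ax, ay in A], B)
        for dx, dy in candidate_shifts
    )
```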
We study the $c$-approximate near neighbor problem under the continuous Fréchet distance: Given a set of $n$ polygonal curves with $m$ vertices, a radius $\delta > 0$, and a parameter $k \leq m$, we want to preprocess the curves into a data structure that, given a query curve $q$ with $k$ vertices, either returns an input curve with Fréchet distance at most $c\cdot \delta$ to $q$, or returns that there exists no input curve with Fréchet distance at most $\delta$ to $q$. We focus on the case where the input and the queries are one-dimensional polygonal curves -- also called time series -- and we give a comprehensive analysis for this case. We obtain new upper bounds that provide different tradeoffs between approximation factor, preprocessing time, and query time. Our data structures improve upon the state of the art in several ways. We show that for any $0 < \varepsilon \leq 1$ an approximation factor of $(1+\varepsilon)$ can be achieved within the same asymptotic time bounds as the previously best result for $(2+\varepsilon)$. Moreover, we show that an approximation factor of $(2+\varepsilon)$ can be obtained by using preprocessing time and space $O(nm)$, which is linear in the input size, and query time in $O(\frac{1}{\varepsilon})^{k+2}$, where the previously best result used preprocessing time in $n \cdot O(\frac{m}{\varepsilon k})^k$ and query time in $O(1)^k$. We complement our upper bounds with matching conditional lower bounds based on the Orthogonal Vectors Hypothesis. Interestingly, some of our lower bounds already hold for any super-constant value of $k$. This is achieved by proving hardness of a one-sided sparse version of the Orthogonal Vectors problem as an intermediate problem, which we believe to be of independent interest.
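The query semantics above admit a one-line brute-force baseline, sketched below reusing the `discrete_frechet` helper from earlier as a stand-in for the continuous distance; the data structures in the paper exist precisely to beat this linear scan.

```python
def ann_query_linear_scan(curves, q, c, delta, dist=discrete_frechet):
    """Brute-force reference for the (c, delta)-approximate near
    neighbor semantics: return some input curve within c*delta of q,
    or None, certifying that no input curve is within delta of q.
    Queries whose nearest curve lies between delta and c*delta may go
    either way, exactly as the problem statement allows."""
    for P in curves:
        if dist(P, q) <= delta:
            return P  # within delta, hence certainly within c*delta
    return None       # no input curve within delta of q
```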
The Fréchet distance is a popular distance measure for curves which naturally lends itself to fundamental computational tasks, such as clustering, nearest-neighbor searching, and spherical range searching in the corresponding metric space. However, its inherent complexity poses considerable computational challenges in practice. To address this problem we study the distortion of the probabilistic embedding that results from projecting the curves to a randomly chosen line. Such an embedding could be used in combination with, e.g., locality-sensitive hashing. We show that in the worst case and under reasonable assumptions, the discrete Fréchet distance between two polygonal curves of complexity $t$ in $\mathbb{R}^d$, where $d\in\lbrace 2,3,4,5\rbrace$, degrades by a factor linear in $t$ with constant probability. We show upper and lower bounds on the distortion. We also evaluate our findings empirically on a benchmark data set. The preliminary experimental results stand in stark contrast with our lower bounds. They indicate that highly distorted projections happen very rarely in practice, and only for strongly conditioned input curves.

Keywords: Fréchet distance, metric embeddings, random projections
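A minimal sketch of the embedding being analyzed, assuming only that "projecting to a randomly chosen line" means taking inner products with a uniformly random unit direction:

```python
import math
import random

def project_to_random_line(curve, rng=random):
    """Project a polygonal curve in R^d onto a uniformly random unit
    vector, producing a one-dimensional curve (a time series). The
    random direction is a normalized Gaussian vector; points are
    returned as 1-tuples so the discrete_frechet sketch above applies."""
    d = len(curve[0])
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g)) or 1.0
    u = [x / norm for x in g]
    return [(sum(p[i] * u[i] for i in range(d)),) for p in curve]

# The distortion studied above compares, e.g., discrete_frechet(P, Q)
# with the distance of the projected 1D curves; projection onto a line
# is 1-Lipschitz, so distances can only shrink, and the question is by
# how much.
```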