
Polyline Simplification has Cubic Complexity

Published by Karl Bringmann
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





In the classic polyline simplification problem we want to replace a given polygonal curve $P$, consisting of $n$ vertices, by a subsequence $P'$ of $k$ vertices from $P$ such that the polygonal curves $P$ and $P'$ are as close as possible. Closeness is usually measured using the Hausdorff or Frechet distance. These distance measures can be applied globally, i.e., to the whole curves $P$ and $P'$, or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). This gives rise to four problem variants: Global-Hausdorff (known to be NP-hard), Local-Hausdorff (in time $O(n^3)$), Global-Frechet (in time $O(k n^5)$), and Local-Frechet (in time $O(n^3)$). Our contribution is as follows.

- Cubic time for all variants: For Global-Frechet we design an algorithm running in time $O(n^3)$. This shows that all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet) can be solved in cubic time. All these algorithms work over a general metric space such as $(\mathbb{R}^d, L_p)$, but the hidden constant depends on $p$ and (linearly) on $d$.
- Cubic conditional lower bound: We provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet). Specifically, improving the cubic time to $O(n^{3-\epsilon}\,\textrm{poly}(d))$ for polyline simplification over $(\mathbb{R}^d, L_p)$ for $p = 1$ would violate plausible conjectures. We obtain similar results for all $p \in [1,\infty)$, $p \neq 2$.

In total, in high dimensions and over general $L_p$-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Frechet, and Global-Frechet, by providing new algorithms and conditional lower bounds.
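For intuition on where the cubic bound for Local-Hausdorff comes from, here is a minimal Python sketch of the classic Imai-Iri-style dynamic program (an illustration only, not the paper's new Global-Frechet algorithm; all function names are ours). A shortcut $P_i P_j$ is accepted if every skipped vertex lies within $\epsilon$ of the segment, an $O(n)$ check per candidate shortcut; checking only the skipped vertices suffices for the directed Hausdorff distance from the subcurve to the segment, since distance to a segment is convex along each subcurve edge. With $O(n^2)$ candidate shortcuts this gives $O(n^3)$ overall.

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab in R^d."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    proj = [ai + t * x for ai, x in zip(a, ab)]
    return math.dist(p, proj)

def simplify_local_hausdorff(P, eps):
    """Minimum-size subsequence of P (keeping both endpoints) such that each
    shortcut stays within eps of all the vertices it replaces."""
    n = len(P)
    # valid[i][j]: shortcut P[i] -> P[j] is within eps of every skipped vertex.
    valid = [[all(point_segment_dist(P[k], P[i], P[j]) <= eps
                  for k in range(i + 1, j))
              for j in range(n)] for i in range(n)]
    # Fewest-hops path from vertex 0 to vertex n-1 in the shortcut graph.
    INF = float("inf")
    dist, pred = [INF] * n, [-1] * n
    dist[0] = 0
    for j in range(1, n):
        for i in range(j):
            if valid[i][j] and dist[i] + 1 < dist[j]:
                dist[j], pred[j] = dist[i] + 1, i
    out, v = [], n - 1
    while v != -1:
        out.append(P[v])
        v = pred[v]
    return out[::-1]
```

For example, simplify_local_hausdorff([(0, 0), (1, 0.1), (2, 0), (3, 0.2), (4, 0)], 0.25) keeps only the two endpoints, since every interior vertex lies within 0.25 of the single shortcut.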




Read also

In this article, we provide new structural results and algorithms for the Homotopy Height problem. In broad terms, this problem quantifies how much a curve on a surface needs to be stretched to sweep continuously between two positions. More precisely, given two homotopic curves $\gamma_1$ and $\gamma_2$ on a combinatorial (say, triangulated) surface, we investigate the problem of computing a homotopy between $\gamma_1$ and $\gamma_2$ where the length of the longest intermediate curve is minimized. Such optimal homotopies are relevant for a wide range of purposes, from very theoretical questions in quantitative homotopy theory to more practical applications such as similarity measures on meshes and graph searching problems. We prove that Homotopy Height is in the complexity class NP, and the corresponding exponential algorithm is the best one known for this problem. This result builds on a structural theorem on monotonicity of optimal homotopies, which is proved in a companion paper. Then we show that this problem encompasses the Homotopic Frechet distance problem, which we therefore also establish to be in NP, answering a question which has previously been considered in several different settings. We also provide an $O(\log n)$-approximation algorithm for Homotopy Height on surfaces by adapting an earlier algorithm of Har-Peled, Nayyeri, Salavatipour and Sidiropoulos in the planar setting.
We study the problem of polygonal curve simplification under uncertainty, where instead of a sequence of exact points, each uncertain point is represented by a region, which contains the (unknown) true location of the vertex. The regions we consider are disks, line segments, convex polygons, and discrete sets of points. We are interested in finding the shortest subsequence of uncertain points such that no matter what the true location of each uncertain point is, the resulting polygonal curve is a valid simplification of the original polygonal curve under the Hausdorff or the Frechet distance. For both these distance measures, we present polynomial-time algorithms for this problem.
Joon-Seok Kim, 2020
Simplification is one of the fundamental operations used in geoinformation science (GIS) to reduce the size or representation complexity of geometric objects. Although different simplification methods can be applied depending on one's purpose, a simplification that many applications employ is designed to preserve the objects' spatial properties after simplification. This article addresses one of the 2D simplification methods, which works especially well on human-made structures such as 2D footprints of buildings and indoor spaces. The method simplifies polygons in an iterative manner. The simplification is segment-wise and takes account of intrusion, extrusion, offset, and corner portions of 2D structures, preserving their dominant frame.
We study a simple reconfigurable robot model which has not been previously examined: cubic robots comprised of three-dimensional cubic modules which can slide across each other and rotate about each other's edges. We demonstrate that the cubic robot model is universal, i.e., that an $n$-module cubic robot can reconfigure itself into any specified $n$-module configuration. Additionally, we provide an algorithm that efficiently plans and executes cubic robot motion. Our results directly extend to a $d$-dimensional model.
Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models for simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through the use of two main techniques. First, we incorporate content word complexities, as predicted with a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time, and rerank these to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler sentences. We report standard automatic and human evaluation metrics.
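To make the rerank-at-test-time idea concrete, here is a minimal sketch in Python. The scorer functions are placeholders for the models the abstract describes (e.g. a language model for fluency, a source-similarity measure for adequacy, the authors' sentence complexity model for simplicity), and the weighted-sum combination is our assumption, not necessarily the paper's exact scheme.

```python
from typing import Callable, List, Tuple

def rerank(source: str,
           candidates: List[str],
           fluency: Callable[[str], float],        # placeholder: language-model score
           adequacy: Callable[[str, str], float],  # placeholder: similarity to source
           simplicity: Callable[[str], float],     # placeholder: sentence complexity model
           weights: Tuple[float, float, float] = (1.0, 1.0, 1.0)) -> str:
    """Return the candidate maximizing a weighted sum of the three scores."""
    wf, wa, ws = weights
    return max(candidates,
               key=lambda c: wf * fluency(c) + wa * adequacy(source, c) + ws * simplicity(c))
```

Any scorers with these signatures can be plugged in; tuning the weights trades adequacy against simplicity of the selected output.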