
Simplification of Indoor Space Footprints

Added by: Joon-Seok Kim
Publication date: 2020
Language: English
Authors: Joon-Seok Kim





Simplification is one of the fundamental operations used in geoinformation science (GIS) to reduce the size or representational complexity of geometric objects. Although different simplification methods can be applied depending on one's purpose, many applications employ a simplification designed to preserve the spatial properties of the objects. This article addresses a 2D simplification method that works particularly well on human-made structures such as 2D footprints of buildings and indoor spaces. The method simplifies polygons iteratively. The simplification is segment-wise and takes account of intrusion, extrusion, offset, and corner portions of 2D structures while preserving their dominant frame.
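Only the abstract is reproduced here, so the following Python sketch is a rough illustration of segment-wise, iterative footprint simplification rather than the author's algorithm: it repeatedly collapses short "jog" segments (small intrusions, extrusions, and offsets) whose neighboring edges are roughly parallel. The function name, tolerances, and collapsing rule are assumptions for illustration.

    import math

    def _edge(p, q):
        return (q[0] - p[0], q[1] - p[1])

    def _length(v):
        return math.hypot(v[0], v[1])

    def _roughly_parallel(u, v, tol=0.1):
        # small |cross product| relative to the edge lengths => (anti-)parallel
        return abs(u[0] * v[1] - u[1] * v[0]) <= tol * _length(u) * _length(v)

    def simplify_footprint(poly, min_len=0.5, max_iter=100):
        # poly: closed ring as a list of (x, y) vertices, first vertex not repeated
        pts = list(poly)
        for _ in range(max_iter):
            n = len(pts)
            if n <= 4:
                break
            changed = False
            for i in range(n):
                a, b, c, d = (pts[(i + k) % n] for k in range(4))
                if (_length(_edge(b, c)) < min_len
                        and _roughly_parallel(_edge(a, b), _edge(c, d))):
                    # collapse the short jog b-c to its midpoint, dropping one vertex
                    pts[(i + 1) % n] = ((b[0] + c[0]) / 2.0, (b[1] + c[1]) / 2.0)
                    del pts[(i + 2) % n]
                    changed = True
                    break
            if not changed:
                break
        return pts

For example, simplify_footprint([(0, 0), (10, 0), (10, 0.2), (20, 0.2), (20, 10), (0, 10)]) collapses the small 0.2-unit jog along the bottom edge while keeping the dominant rectangular frame of the footprint.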

Related research

We study the problem of polygonal curve simplification under uncertainty, where instead of a sequence of exact points, each uncertain point is represented by a region, which contains the (unknown) true location of the vertex. The regions we consider are disks, line segments, convex polygons, and discrete sets of points. We are interested in finding the shortest subsequence of uncertain points such that no matter what the true location of each uncertain point is, the resulting polygonal curve is a valid simplification of the original polygonal curve under the Hausdorff or the Frechet distance. For both these distance measures, we present polynomial-time algorithms for this problem.
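For intuition about the requirement that the simplification be valid no matter where the true locations lie, consider a disk region: the worst-case distance from any realization of the uncertain point to a candidate shortcut segment is the distance from the disk's center plus the radius. A minimal Python sketch of that bound follows; the function names are illustrative and not taken from the paper.

    import math

    def point_segment_dist(p, a, b):
        # Euclidean distance from point p to segment ab
        dx, dy = b[0] - a[0], b[1] - a[1]
        if dx == 0 and dy == 0:
            return math.hypot(p[0] - a[0], p[1] - a[1])
        t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))

    def disk_worst_case_dist(center, radius, a, b):
        # largest possible distance from a point of the disk to segment ab;
        # every realization of the uncertain point respects this bound
        return point_segment_dist(center, a, b) + radius

Such worst-case bounds underlie checks that a shortcut remains valid for every realization; for segment, convex-polygon, and discrete-point regions the maximum distance to a segment is attained at an extreme point and can be computed analogously.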
In the classic polyline simplification problem we want to replace a given polygonal curve $P$, consisting of $n$ vertices, by a subsequence $P'$ of $k$ vertices from $P$ such that the polygonal curves $P$ and $P'$ are as close as possible. Closeness is usually measured using the Hausdorff or Frechet distance. These distance measures can be applied globally, i.e., to the whole curves $P$ and $P'$, or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). This gives rise to four problem variants: Global-Hausdorff (known to be NP-hard), Local-Hausdorff (in time $O(n^3)$), Global-Frechet (in time $O(k n^5)$), and Local-Frechet (in time $O(n^3)$). Our contribution is as follows.
- Cubic time for all variants: For Global-Frechet we design an algorithm running in time $O(n^3)$. This shows that all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet) can be solved in cubic time. All these algorithms work over a general metric space such as $(\mathbb{R}^d, L_p)$, but the hidden constant depends on $p$ and (linearly) on $d$.
- Cubic conditional lower bound: We provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Frechet, and Global-Frechet). Specifically, improving the cubic time to $O(n^{3-\epsilon}\,\mathrm{poly}(d))$ for polyline simplification over $(\mathbb{R}^d, L_p)$ for $p = 1$ would violate plausible conjectures. We obtain similar results for all $p \in [1,\infty)$, $p \neq 2$.
In total, in high dimensions and over general $L_p$-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Frechet, and Global-Frechet, by providing new algorithms and conditional lower bounds.
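As background for the Local-Hausdorff variant mentioned above, the classic baseline builds a "shortcut graph": a shortcut from vertex i to vertex j is allowed if every skipped vertex lies within a tolerance eps of the segment from P[i] to P[j], and a breadth-first search over these shortcuts returns a simplification with the fewest vertices. The Python sketch below implements that standard Imai-Iri-style approach, which runs in roughly cubic time in the worst case; it is illustrative background only, not the paper's new Global-Frechet algorithm.

    import math
    from collections import deque

    def point_segment_dist(p, a, b):
        # Euclidean distance from point p to segment ab (same helper as above)
        dx, dy = b[0] - a[0], b[1] - a[1]
        if dx == 0 and dy == 0:
            return math.hypot(p[0] - a[0], p[1] - a[1])
        t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))

    def simplify_local_hausdorff(P, eps):
        # fewest-vertex subsequence of polyline P such that every shortcut
        # P[i]P[j] stays within eps of all the vertices it skips
        n = len(P)

        def shortcut_ok(i, j):
            return all(point_segment_dist(P[k], P[i], P[j]) <= eps
                       for k in range(i + 1, j))

        prev = [None] * n
        seen = [False] * n
        seen[0] = True
        queue = deque([0])
        while queue:                      # BFS = fewest shortcut edges
            i = queue.popleft()
            if i == n - 1:
                break
            for j in range(i + 1, n):
                if not seen[j] and shortcut_ok(i, j):
                    seen[j] = True
                    prev[j] = i
                    queue.append(j)

        kept, k = [], n - 1               # walk back through predecessors
        while k is not None:
            kept.append(k)
            k = prev[k]
        return [P[k] for k in reversed(kept)]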
Understanding the shape of a scene from a single color image is a formidable computer vision task. However, most methods aim to predict the geometry of surfaces that are visible to the camera, which is of limited use when planning paths for robots or augmented reality agents. Such agents can only move when grounded on a traversable surface, which we define as the set of classes which humans can also walk over, such as grass, footpaths and pavement. Models which predict beyond the line of sight often parameterize the scene with voxels or meshes, which can be expensive to use in machine learning frameworks. We introduce a model to predict the geometry of both visible and occluded traversable surfaces, given a single RGB image as input. We learn from stereo video sequences, using camera poses, per-frame depth and semantic segmentation to form training data, which is used to supervise an image-to-image network. We train models from the KITTI driving dataset, the indoor Matterport dataset, and from our own casually captured stereo footage. We find that a surprisingly low bar for spatial coverage of training scenes is required. We validate our algorithm against a range of strong baselines, and include an assessment of our predictions for a path-planning task.
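The abstract does not spell out the network, so the following PyTorch sketch only illustrates the general recipe of supervising an image-to-image model with a per-pixel traversability target. The architecture, image size, and the random placeholder mask are assumptions; in the paper the supervision would be derived offline from camera poses, per-frame depth, and semantic segmentation.

    import torch
    import torch.nn as nn

    # toy encoder-decoder standing in for an image-to-image network;
    # layer sizes are arbitrary and chosen only for illustration
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # traversability logits
    )
    loss_fn = nn.BCEWithLogitsLoss()

    rgb = torch.rand(4, 3, 128, 128)                        # input color images
    mask = torch.randint(0, 2, (4, 1, 128, 128)).float()    # placeholder supervision
    logits = model(rgb)
    loss = loss_fn(logits, mask)
    loss.backward()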
Massimo Franceschet, 2009
Bibliometrics has the ambitious goal of measuring science. To this end, it exploits the way science is disseminated through scientific publications and the resulting citation network of scientific papers. We survey the main historical contributions to the field, the most interesting bibliometric indicators, and the most popular bibliometric data sources. Moreover, we discuss distributions commonly used to model bibliometric phenomena and give an overview of methods to build bibliometric maps of science.
We show that in the mathematical framework of quantum theory the classical pigeonhole principle can be violated more directly than previously suggested, i.e., in a setting closer to the traditional statement of the principle. We describe how the counterfactual reasoning of the paradox may be operationally grounded in the analysis of the tiny footprints left in the environment by the pigeons. After identifying the drawbacks of recent experiments on the quantum pigeonhole effect, we argue that a definitive experimental violation of the pigeonhole principle is still needed and propose such an implementation using modern quantum computing hardware: a superconducting circuit with transmon qubits.
