We introduce and begin to explore the mean and median of finite sets of shapes represented as integral currents. The median can be computed efficiently in practice, and we focus most of our theoretical and computational attention on medians. We consider questions of existence and regularity of medians. While the median may not exist in all cases, we show that a mass-regularized median is guaranteed to exist. When the input shapes are modeled by integral currents with shared boundaries in codimension $1$, we show that the median is guaranteed to exist and is contained in the \emph{envelope} of the input currents. On the other hand, we show that medians can be \emph{wild} in this setting: smooth inputs can generate non-smooth medians. For higher codimensions, we show that \emph{books} are minimizing for a finite set of $1$-currents in $\mathbb{R}^3$ with shared boundaries. As part of this proof, we present a new result in graph theory---that \emph{cozy} graphs are \emph{comfortable}---which should be of independent interest. Further, we show that regular points on the median have book-like tangent cones in this case. From the point of view of computation, we study the median shape in the setting of a finite simplicial complex. When the input shapes are represented by chains of the simplicial complex, we show that the problem of finding the median shape can be formulated as an integer linear program. In practice, this optimization problem can be solved as a linear program, allowing one to compute median shapes efficiently. We provide open source code implementing our methods, which others may also use to experiment with ideas of their own. The software can be accessed at https://github.com/tbtraltaa/medianshape.
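To make the computational step concrete, here is a minimal Python sketch (our own illustration using scipy, not the API of the medianshape package) of the simplicial flat norm linear program that serves as the building block of the median-shape formulation; the median problem stacks one such block per input chain and adds the median chain itself as a shared variable.

```python
# Minimal sketch (not the medianshape package): the simplicial flat norm LP.
# Given the boundary matrix B mapping (d+1)-chains to d-chains and an input
# d-chain x, the flat norm is
#   F_lambda(x) = min_v ||x - B v||_1 + lambda * ||v||_1,
# linearized by splitting each variable into nonnegative parts.
import numpy as np
from scipy.optimize import linprog

def simplicial_flat_norm(B, x, lam=1.0):
    m, n = B.shape                      # m d-simplices, n (d+1)-simplices
    # variables z = [w+, w-, v+, v-] >= 0, with w+ - w- + B(v+ - v-) = x
    c = np.concatenate([np.ones(2 * m), lam * np.ones(2 * n)])
    A_eq = np.hstack([np.eye(m), -np.eye(m), B, -B])
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None), method="highs")
    return res.fun

# toy complex: two triangles sharing an edge; rows = 5 edges, cols = 2 triangles
B = np.array([[ 1,  0],
              [-1,  0],
              [ 1, -1],
              [ 0,  1],
              [ 0, -1]])
x = B[:, 0]                             # the boundary cycle of triangle 1
print(simplicial_flat_norm(B, x, lam=0.5))
```

Total unimodularity of the boundary matrix of a simplicial complex is what allows the relaxation to return integral optima, so the LP solution agrees with the integer program.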
Currents represent generalized surfaces studied in geometric measure theory. They range from relatively tame integral currents, which represent oriented compact manifolds with boundary and integer multiplicities, to arbitrary elements of the dual space of differential forms. The flat norm provides a natural distance on the space of currents; it works by decomposing a $d$-dimensional current into $d$-dimensional and (the boundary of) $(d+1)$-dimensional pieces in an optimal way. Given an integral current, can we expect its flat norm decomposition to be integral as well? This is not known in general, except in the case of $d$-currents that are boundaries of $(d+1)$-currents in $\mathbb{R}^{d+1}$ (following results on a corresponding problem for the $L^1$ total variation ($L^1$TV) functional). On the other hand, for a discretized flat norm on a finite simplicial complex, the analogous statement holds even when the inputs are not boundaries. This simplicial version relies on the total unimodularity of the boundary matrix of the simplicial complex -- a result distinct from the $L^1$TV approach. We develop an analysis framework that extends the result in the simplicial setting to one for $d$-currents in $\mathbb{R}^{d+1}$, provided a suitable triangulation result holds. In $\mathbb{R}^2$, we use a triangulation result of Shewchuk (bounding both the size and location of small angles) and apply the framework to show that the discrete result implies the continuous result for $1$-currents in $\mathbb{R}^2$.
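For reference, the flat norm and its simplicial discretization discussed above can be written as follows (standard definitions; $\mathbf{M}$ denotes mass, and the volume weights of the simplicial version are dropped for simplicity):

```latex
% Flat norm of a d-dimensional current T: optimal splitting into a d-piece
% and the boundary of a (d+1)-piece, measured by mass M.
\mathbb{F}(T) = \min_{S} \bigl\{\, \mathbf{M}(T - \partial S) + \mathbf{M}(S)
  : S \text{ a } (d{+}1)\text{-current} \,\bigr\}

% Simplicial analogue for a d-chain x on a finite complex with boundary
% matrix \partial_{d+1}:
\mathbb{F}_{\triangle}(x) = \min_{v}\ \lVert x - \partial_{d+1} v \rVert_1
  + \lVert v \rVert_1
% Total unimodularity of \partial_{d+1} guarantees an integral optimizer v
% (hence an integral decomposition) whenever x is integral.
```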
Spectral geometric methods have brought revolutionary changes to the field of geometry processing -- however, when the data to be processed exhibits severe partiality, such methods fail to generalize. As a result, there is a significant performance gap between methods that deal with complete shapes and methods that address missing geometry. In this paper, we propose a possible way to fill this gap. We introduce the first method to compute compositions of non-rigidly deforming shapes without first solving for a dense correspondence between the given partial shapes. We do so by operating in a purely spectral domain, where we define a union operation between short sequences of eigenvalues. Working with eigenvalues allows us to deal with unknown correspondences, different samplings, and different discretizations (point clouds and meshes alike), making this operation especially robust and general. Our approach is data-driven and can generalize to isometric and non-isometric deformations of the surface, as long as these stay within the same semantic class (e.g., human bodies), as well as to partiality artifacts not seen at training time.
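The union operator itself is learned, but the spectral input it consumes is easy to illustrate. Below is a minimal Python sketch (our own stand-in, not the paper's pipeline) that extracts a short eigenvalue sequence from a point cloud, using a k-NN graph Laplacian in place of a mesh Laplace-Beltrami operator:

```python
# Minimal sketch (not the paper's method): extracting the short eigenvalue
# sequence a spectral approach like this consumes.  A k-NN graph Laplacian
# on a point cloud stands in for the Laplace-Beltrami operator of a mesh.
import numpy as np
from scipy.sparse import csgraph
from sklearn.neighbors import kneighbors_graph

def eigenvalue_sequence(points, k_eigs=20, k_nn=8):
    W = kneighbors_graph(points, k_nn, mode="connectivity")
    W = 0.5 * (W + W.T)                    # symmetrize the adjacency
    L = csgraph.laplacian(W, normed=True)
    # the smallest eigenvalues carry the coarse "shape DNA" of the sample
    vals = np.linalg.eigvalsh(L.toarray())
    return vals[:k_eigs]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
print(eigenvalue_sequence(cloud)[:5])
```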
This is an overview article. In his Habilitationsvortrag, Riemann described infinite-dimensional manifolds parameterizing functions and shapes of solids. This is taken as an excuse to describe convenient calculus in infinite dimensions, which allows for short and transparent proofs of the main facts of the theory of manifolds of smooth mappings. Smooth manifolds of immersions, diffeomorphisms, and shapes, and weak Riemannian metrics on them are treated, culminating in the surprising fact that geodesic distance can vanish completely for them.
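The prototypical instance of this vanishing phenomenon, due to Michor and Mumford, is the weak $L^2$ metric on the space of plane curves; stated in our notation:

```latex
% Weak L^2 (H^0) Riemannian metric on the space Imm(S^1, \mathbb{R}^2) of
% plane curves c, with tangent vectors h, k along c:
G_c(h,k) = \int_{S^1} \langle h(\theta), k(\theta) \rangle\,
  |c'(\theta)|\, d\theta
% Michor and Mumford showed that the induced geodesic distance vanishes:
% paths may oscillate at ever finer scales at arbitrarily small L^2 cost.
```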
We propose Deep Estimators of Features (DEFs), a learning-based framework for predicting sharp geometric features in sampled 3D shapes. Unlike existing data-driven methods, which reduce this problem to feature classification, we propose to regress a scalar field representing the distance from point samples to the closest feature line on local patches. Our approach is the first that scales to massive point clouds, by fusing distance-to-feature estimates obtained on individual patches. We extensively evaluate our approach against five baselines on newly proposed synthetic and real-world 3D CAD model benchmarks. Our approach not only outperforms the baselines (with improvements in recall and false positive rates), but also generalizes to real-world scans after training our model on synthetic data and fine-tuning it on a small dataset of scanned data. We demonstrate a downstream application in which we reconstruct an explicit representation of straight and curved sharp feature lines from range scan data.
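As an illustration of the regression target (a sketch under our own conventions, not the DEF implementation), the truncated distance from point samples to a set of feature segments can be computed as follows:

```python
# Minimal sketch (not the DEF code): the scalar regression target, i.e. the
# per-point distance to the closest feature line, truncated at a patch radius
# so the model only needs to learn near-feature geometry.
import numpy as np

def point_segment_distance(p, a, b):
    """Distances from points p (n,3) to the segment [a, b]."""
    ab = b - a
    t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t[:, None] * ab), axis=1)

def distance_to_features(points, segments, truncate=0.1):
    """Truncated, normalized distance field to a set of feature segments."""
    d = np.min([point_segment_distance(points, a, b) for a, b in segments],
               axis=0)
    return np.minimum(d, truncate) / truncate   # normalized to [0, 1]

pts = np.random.rand(1000, 3)
feats = [(np.array([0., 0., 0.]), np.array([1., 1., 1.]))]
print(distance_to_features(pts, feats)[:5])
```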
In order to use persistence diagrams as a true statistical tool, it would be very useful to have a good notion of mean and variance for a set of diagrams. In 2011, Mileyko and his collaborators made the first study of the properties of the Fréchet mean in $(\mathcal{D}_p, W_p)$, the space of persistence diagrams equipped with the $p$-th Wasserstein metric. In particular, they showed that the Fréchet mean of a finite set of diagrams always exists, but is not necessarily unique. Moreover, the means of a continuously varying set of diagrams do not themselves (necessarily) vary continuously, which presents obvious problems when trying to extend the Fréchet mean definition to the realm of vineyards. We fix this problem by altering the original definition of the Fréchet mean so that it becomes a probability measure on the set of persistence diagrams; in a nutshell, the mean of a set of diagrams will be a weighted sum of atomic measures, where each atom is itself a persistence diagram determined using a perturbation of the input diagrams. This definition gives, for each $N$, a map $(\mathcal{D}_p)^N \to \mathbb{P}(\mathcal{D}_p)$. We show that this map is Hölder continuous on finite diagrams and thus can be used to build a useful statistic on time-varying persistence diagrams, better known as vineyards.
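In the notation above (with $\mathcal{D}_p$ the space of diagrams), the classical Fréchet mean and the probabilistic replacement proposed here can be summarized as follows:

```latex
% Frechet function of diagrams X_1, ..., X_N in (\mathcal{D}_p, W_p), and
% the classical (possibly non-unique) Frechet mean:
F(Y) = \frac{1}{N} \sum_{i=1}^{N} W_p(X_i, Y)^p,
\qquad
\operatorname*{arg\,min}_{Y \in \mathcal{D}_p} F(Y)
% The probabilistic mean replaces the minimizer with a measure
% \mu = \sum_{j} w_j\, \delta_{Y_j} \in \mathbb{P}(\mathcal{D}_p),
% a weighted sum of atoms, each atom Y_j itself a persistence diagram.
```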