This paper introduces progressive algorithms for the topological analysis of scalar data. Our approach is based on a hierarchical representation of the input data and the fast identification of topologically invariant vertices, which are vertices that have no impact on the topological description of the data and for which we show that no computation is required as they are introduced in the hierarchy. This enables the definition of efficient coarse-to-fine topological algorithms, which leverage fast update mechanisms for ordinary vertices and avoid computation for the topologically invariant ones. We demonstrate our approach with two examples of topological algorithms (critical point extraction and persistence diagram computation), which generate interpretable outputs upon interruption requests and which progressively refine them otherwise. Experiments on real-life datasets show that our progressive strategy, in addition to the continuous visual feedback it provides, even improves run-time performance over non-progressive algorithms; we also describe further accelerations with shared-memory parallelism. We illustrate the utility of our approach in batch-mode and interactive setups, where it respectively enables the control of the execution time of complete topological pipelines and previews of the topological features found in a dataset, with progressive updates delivered within interactive times.
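As a point of reference for the first of these two algorithms, the sketch below illustrates plain (non-progressive) critical point classification on a regular 2D scalar grid, based on the connectivity of the lower neighbours in the cyclic ring around each vertex. It is only a minimal illustration and does not reproduce the paper's hierarchical, progressive scheme; the grid setup and function names are assumptions of this example.

```python
# Minimal sketch: classify interior vertices of a 2D scalar grid as minima,
# maxima or saddles by inspecting the cyclic ring of 8 neighbours, in the
# spirit of lower/upper-link classification for piecewise-linear fields.
import numpy as np

def classify_critical_points(field):
    """Return a dict mapping (i, j) -> 'min' | 'max' | 'saddle' for interior vertices."""
    # cyclic order of the 8 neighbours around a vertex
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    critical = {}
    h, w = field.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = field[i, j]
            # 'lower' pattern of the neighbour ring (strict comparison for simplicity)
            signs = [field[i + di, j + dj] < v for di, dj in ring]
            if all(signs):
                critical[(i, j)] = 'max'     # every neighbour is lower
            elif not any(signs):
                critical[(i, j)] = 'min'     # every neighbour is higher
            else:
                # number of lower components = number of False -> True transitions (cyclic)
                comps = sum(1 for k in range(8) if signs[k] and not signs[k - 1])
                if comps >= 2:
                    critical[(i, j)] = 'saddle'
    return critical

# toy usage: two Gaussian bumps yield two maxima and one saddle in between
x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
f = np.exp(-((x - 0.8) ** 2 + y ** 2)) + np.exp(-((x + 0.8) ** 2 + y ** 2))
print({c: sum(1 for t in classify_critical_points(f).values() if t == c)
       for c in ('min', 'max', 'saddle')})
```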
This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set, and which visually conveys the number, data ranges and saliences of the main features of interest found in the set. Such representative diagrams are obtained by computing explicitly the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12,51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm, which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. Such progressivity drastically improves convergence in practice and allows us to design an interruptible algorithm, capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagram. Extensive experiments on synthetic and real-life data sets show that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications, and quantitatively comparable to previous techniques, while offering an order of magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. [...]
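For intuition, the following is a highly simplified sketch of the alternating scheme that discrete Wasserstein barycenter estimation builds on: repeatedly match a candidate diagram to every input diagram with an optimal assignment, then move each candidate point to the mean of its matches. Diagonal matching, the progressive refinement and the interruption mechanism of the paper are deliberately omitted; the use of SciPy's assignment solver and the helper names are assumptions of this illustration.

```python
# Simplified alternating barycenter iteration for persistence diagrams
# represented as (birth, death) point sets of equal size.
import numpy as np
from scipy.optimize import linear_sum_assignment

def barycenter(diagrams, n_iters=20):
    """diagrams: list of (n, 2) arrays of (birth, death) pairs, same n for simplicity."""
    candidate = diagrams[0].astype(float).copy()   # initialise with one input diagram
    for _ in range(n_iters):
        matched = np.zeros_like(candidate)
        for diag in diagrams:
            # squared-distance cost between candidate points and diagram points
            cost = ((candidate[:, None, :] - diag[None, :, :]) ** 2).sum(-1)
            rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
            matched[rows] += diag[cols]
        candidate = matched / len(diagrams)            # move each point to the mean of its matches
    return candidate

# toy usage: three noisy copies of the same two-point diagram
rng = np.random.default_rng(0)
base = np.array([[0.0, 1.0], [0.2, 0.6]])
print(barycenter([base + 0.05 * rng.standard_normal(base.shape) for _ in range(3)]))
```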
This paper presents a novel technique for progressive online integration of uncalibrated image sequences with substantial geometric and/or photometric discrepancies into a single, geometrically and photometrically consistent image. Our approach can handle large sets of images, acquired from a nearly planar or infinitely distant scene at different resolutions in the object domain and under variable local or global illumination conditions. It allows for efficient user guidance as its progressive nature provides a valid and consistent reconstruction at any moment during the online refinement process. Our approach avoids global optimization techniques, as commonly used in the field of image refinement, and progressively incorporates new imagery into a dynamically extendable and memory-efficient Laplacian pyramid. Our image registration process includes a coarse homography and a local refinement stage using optical flow. Photometric consistency is achieved by retaining the photometric intensities given in a reference image, while it is being refined. Globally blurred imagery and local geometric inconsistencies due to, e.g., motion are detected and removed prior to image fusion. We demonstrate the quality and robustness of our approach using several image and video sequences, including handheld acquisition with mobile phones and zooming sequences with consumer cameras.
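To make the underlying representation concrete, the sketch below builds and collapses a standard Laplacian pyramid for a single grayscale image; the dynamically extendable, memory-efficient pyramid and the registration and consistency stages of the paper are not reproduced, and the SciPy-based implementation details are assumptions of this illustration.

```python
# Build band-pass Laplacian levels plus a low-pass residue, then reconstruct
# the image by collapsing the pyramid from coarse to fine.
import numpy as np
from scipy import ndimage

def build_laplacian_pyramid(img, levels=4):
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(current, sigma=1.0)
        down = blurred[::2, ::2]                       # decimate by 2
        up = ndimage.zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                   # band-pass residual
        current = down
    pyramid.append(current)                            # low-pass residue at the top
    return pyramid

def collapse_laplacian_pyramid(pyramid):
    current = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        up = ndimage.zoom(current, 2, order=1)[:lap.shape[0], :lap.shape[1]]
        current = up + lap
    return current

# round trip on a synthetic image: the reconstruction error stays negligible
img = np.random.default_rng(1).random((128, 128))
rec = collapse_laplacian_pyramid(build_laplacian_pyramid(img))
print(np.abs(rec - img).max())
```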
Mesh-based learning is nowadays a popular approach to learning shapes. The most established backbone in this field is MeshCNN. In this paper, we propose infusing MeshCNN with geometric reasoning to achieve higher-quality learning. Through careful analysis of the way geometry is represented throughout the network, we submit that this representation should be rigid-motion invariant and should allow reconstructing the original geometry. Accordingly, we introduce the first and second fundamental forms as an edge-centric, rotation and translation invariant, reconstructable representation. In addition, we update the originally proposed pooling scheme to be more geometrically driven. We validate our analysis through experimentation, and present consistent improvement upon the MeshCNN baseline, as well as other more elaborate state-of-the-art architectures. Furthermore, we demonstrate that this fundamental-forms-based representation opens the door to accessible generative machine learning over meshes.
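As a concrete illustration of edge-centric, rotation- and translation-invariant quantities, the sketch below computes, for every edge of a triangle mesh, its length and the angle between the normals of its two incident faces. This is in the spirit of the first and second fundamental form information discussed above, but it is not the paper's exact feature set, and all function names are ours.

```python
# Per-edge, rigid-motion invariant features: edge length and the angle between
# the normals of the two faces sharing an interior edge.
import numpy as np

def edge_features(vertices, faces):
    """vertices: (n, 3) float array, faces: (m, 3) int array. Returns {edge: (length, angle)}."""
    # unit face normals
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # map each undirected edge to the faces incident to it
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)
    features = {}
    for (a, b), fs in edge_faces.items():
        length = np.linalg.norm(vertices[a] - vertices[b])
        if len(fs) == 2:                          # interior edge: angle between face normals
            cos = np.clip(np.dot(normals[fs[0]], normals[fs[1]]), -1.0, 1.0)
            features[(a, b)] = (length, float(np.arccos(cos)))
        else:                                     # boundary edge: zero angle by convention
            features[(a, b)] = (length, 0.0)
    return features

# toy usage: a tetrahedron, where every edge is shared by exactly two faces
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(edge_features(verts, tris)[(0, 1)])
```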
We present MeshODE, a scalable and robust framework for pairwise CAD model deformation without prespecified correspondences. Given a pair of shapes, our framework provides a novel shape feature-preserving mapping function that continuously deforms one model to the other by minimizing fitting and rigidity losses based on the non-rigid iterative-closest-point (ICP) algorithm. We address two challenges in this problem, namely the design of a powerful deformation function and obtaining a feature-preserving CAD deformation. While traditional deformation methods directly optimize the coordinates of the mesh vertices or of a control cage, we introduce a deep bijective mapping that utilizes a flow model parameterized as a neural network. Our function has the capacity to handle complex deformations, produces deformations that are guaranteed to be free of self-intersections, and requires only weak rigidity constraints for geometry preservation, which leads to a better fitting quality compared with existing methods. It additionally enables continuous deformation between two arbitrary shapes without supervision for intermediate shapes. Furthermore, we propose a robust preprocessing pipeline for raw CAD meshes using feature-aware subdivision and a uniform graph template representation to address artifacts in raw CAD models, including self-intersections, irregular triangles, topologically disconnected components, non-manifold edges, and nonuniformly distributed vertices. This facilitates a fast deformation optimization process that preserves global and local details. Our code is publicly available.
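The following sketch conveys the structure of a fitting-plus-rigidity objective of the kind described above, using direct per-vertex offsets in place of the paper's neural flow parameterization. The one-sided chamfer fitting term, the edge-length rigidity term and the PyTorch optimization loop are illustrative assumptions, not the paper's implementation.

```python
# Deform a source point set toward a target by optimizing per-vertex offsets
# under a fitting loss (nearest-target distances) plus a rigidity loss
# (preservation of source edge lengths).
import torch

def deform(source_v, edges, target_v, steps=200, w_rigid=1.0, lr=1e-2):
    """source_v: (n, 3), target_v: (m, 3), edges: (e, 2) long tensor of source edges."""
    offsets = torch.zeros_like(source_v, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    rest_len = (source_v[edges[:, 0]] - source_v[edges[:, 1]]).norm(dim=1)
    for _ in range(steps):
        opt.zero_grad()
        deformed = source_v + offsets
        # fitting: each deformed vertex to its closest target point (one-sided chamfer)
        d2 = torch.cdist(deformed, target_v)
        fit = d2.min(dim=1).values.pow(2).mean()
        # rigidity: penalise changes in source edge lengths under the deformation
        new_len = (deformed[edges[:, 0]] - deformed[edges[:, 1]]).norm(dim=1)
        rigid = (new_len - rest_len).pow(2).mean()
        loss = fit + w_rigid * rigid
        loss.backward()
        opt.step()
    return (source_v + offsets).detach()

# toy usage: deform a two-point set with one edge toward a shifted copy of itself
src = torch.tensor([[0., 0., 0.], [1., 0., 0.]])
tgt = src + torch.tensor([0.5, 0., 0.])
print(deform(src, torch.tensor([[0, 1]]), tgt))
```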
Numerical computation of shortest paths or geodesics on curved domains, as well as the associated geodesic distance, arises in a broad range of applications across digital geometry processing, scientific computing, computer graphics, and computer vision. Relative to Euclidean distance computation, these tasks are complicated by the influence of curvature on the behavior of shortest paths, as well as the fact that the representation of the domain may itself be approximate. In spite of the difficulty of this problem, recent literature has developed a wide variety of sophisticated methods that enable rapid queries of geodesic information, even on relatively large models. This survey reviews the major categories of approaches to the computation of geodesic paths and distances, highlighting common themes and opportunities for future improvement.
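As a simple example of the kind of baseline such surveys cover, the sketch below approximates geodesic distances by shortest paths restricted to the mesh edge graph (Dijkstra), which generally over-estimates the true geodesic distance since paths may not cut across faces; the SciPy-based implementation details are assumptions of this illustration.

```python
# Edge-graph shortest-path distances as a crude approximation of geodesic distance.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def graph_geodesic_distances(vertices, faces, source):
    """Shortest-path distances along mesh edges from vertex `source` to all vertices."""
    # collect the undirected edges of the triangle mesh, deduplicating shared edges
    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    # weight each edge by its Euclidean length
    w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    graph = coo_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
    return dijkstra(graph, directed=False, indices=source)

# toy usage: distances from vertex 0 on a unit tetrahedron
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(graph_geodesic_distances(verts, tris, 0))
```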