We introduce ABC-Dataset, a collection of one million Computer-Aided Design (CAD) models for research on geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows data to be generated in different formats and resolutions, enabling fair comparisons across a wide range of geometric learning algorithms. As a use case for our dataset, we perform a large-scale benchmark for the estimation of surface normals, comparing existing data-driven methods and evaluating their performance against both the ground truth and traditional normal estimation methods.
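For concreteness, the sketch below shows one such traditional baseline, a classical PCA normal estimate on a sampled patch. It is only an illustrative assumption about the kind of non-learned method such a benchmark would compare against, not code from the dataset or the paper, and all names in it are hypothetical.

```python
import numpy as np

def pca_normal(points: np.ndarray) -> np.ndarray:
    """Estimate the normal of a local neighborhood (k x 3 array of points)
    as the eigenvector of the covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

# Points sampled near the plane z = 0 should yield a normal close to (0, 0, 1), up to sign.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1.0, 1.0, (50, 2)), rng.normal(0.0, 0.01, 50)]
print(pca_normal(patch))
```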
Mesh-based learning is currently one of the popular approaches to learning shapes. The most established backbone in this field is MeshCNN. In this paper, we propose infusing MeshCNN with geometric reasoning to achieve higher-quality learning. Through careful analysis of the way geometry is represented throughout the network, we submit that this representation should be rigid-motion invariant and should allow reconstructing the original geometry. Accordingly, we introduce the first and second fundamental forms as an edge-centric, rotation- and translation-invariant, reconstructable representation. In addition, we update the originally proposed pooling scheme to be more geometrically driven. We validate our analysis through experimentation and present consistent improvements over the MeshCNN baseline, as well as over other, more elaborate state-of-the-art architectures. Furthermore, we demonstrate that this fundamental-forms-based representation opens the door to accessible generative machine learning over meshes.
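As a rough illustration of what an edge-centric, rigid-motion-invariant representation can look like, the sketch below computes an edge length (a first-fundamental-form quantity) and the dihedral-type angle between the two incident faces (a proxy for second-fundamental-form bending information). This is a simplified construction assumed for illustration, not the exact features used in the paper.

```python
import numpy as np

def edge_invariants(v0, v1, opp_a, opp_b):
    """For an interior edge (v0, v1) shared by triangles (v0, v1, opp_a) and
    (v0, opp_b, v1), return its length and the angle between the face normals.
    Both quantities are unchanged by any rotation or translation of the mesh."""
    e = v1 - v0
    n_a = np.cross(e, opp_a - v0)   # normal of triangle (v0, v1, opp_a)
    n_b = np.cross(opp_b - v0, e)   # normal of triangle (v0, opp_b, v1)
    n_a = n_a / np.linalg.norm(n_a)
    n_b = n_b / np.linalg.norm(n_b)
    cos_angle = np.clip(np.dot(n_a, n_b), -1.0, 1.0)
    return np.linalg.norm(e), np.arccos(cos_angle)  # angle is 0 for a flat edge
```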
Mesh reconstruction from a 3D point cloud is an important topic in the fields of computer graphics, computer vision, and multimedia analysis. In this paper, we propose a voxel structure-based mesh reconstruction framework. It provides an intrinsic metric that improves the accuracy of local region detection. Based on the detected local regions, an initial reconstructed mesh is obtained. Through the mesh optimization in our framework, the initial reconstructed mesh is refined into an isotropic one that preserves important geometric features such as external and internal edges. The experimental results indicate that our framework offers clear advantages over peer methods in terms of mesh quality, geometric feature preservation, and processing speed.
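To make the voxel-structure idea concrete, here is a minimal, assumed sketch of bucketing a point cloud into a regular voxel grid so that each occupied cell becomes a candidate local region. The grid resolution and data layout are illustrative choices, not the framework's actual intrinsic-metric construction.

```python
from collections import defaultdict
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float):
    """Bucket a point cloud (n x 3 array) into a regular voxel grid.
    Returns a dict mapping integer voxel coordinates to the indices of the
    points they contain; each occupied cell is a candidate local region."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    cells = defaultdict(list)
    for idx, key in enumerate(map(tuple, keys)):
        cells[key].append(idx)
    return cells
```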
In this paper, we are concerned with geometric constraint solvers, i.e., with programs that find one or more solutions of a geometric constraint problem. If no solution exists, the solver is expected to announce that no solution has been found. Owing to the complexity, type, or difficulty of a constraint problem, it is possible that the solver does not find a solution even though one may exist. Thus, there may be false negatives, but there should never be false positives. Intuitively, the ability to find solutions can be considered a measure of a solver's competence. We consider static constraint problems and their solvers. We do not consider dynamic constraint solvers, also known as dynamic geometry programs, in which specific geometric elements are moved, interactively or along prescribed trajectories, while continually maintaining all stipulated constraints. However, if we have a solver for static constraint problems that is sufficiently fast and competent, we can build a dynamic geometry program from it by solving the static problem for a sufficiently dense sampling of the trajectory of the moving element(s). The work we survey has its roots in applications, especially in mechanical computer-aided design (MCAD). The constraint solvers used in MCAD took a quantum leap in the 1990s. These approaches solve a geometric constraint problem by an initial, graph-based structural analysis that extracts generic subproblems and determines how they would combine to form a complete solution. These subproblems are then handed to an algebraic solver that solves the specific instances of the generic subproblems and combines them.
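The sketch below illustrates, in a toy setting, how a specific instance of a generic subproblem (placing a point at given distances from two fixed points) can be handed to a numeric solver. It is a hypothetical illustration of the decomposition idea, not the algorithm of any particular MCAD solver, and the function names are made up.

```python
import numpy as np
from scipy.optimize import fsolve

def place_point(p1, p2, d1, d2, guess):
    """Find a 2D point x with ||x - p1|| = d1 and ||x - p2|| = d2."""
    def residuals(x):
        return [np.linalg.norm(x - p1) - d1,
                np.linalg.norm(x - p2) - d2]
    return fsolve(residuals, guess)

# One of the (up to two) solutions; which one is found depends on the initial guess.
print(place_point(np.array([0.0, 0.0]), np.array([4.0, 0.0]), 3.0, 3.0, guess=(2.0, 1.0)))
```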
Mesh denoising is a critical technology in geometry processing that aims to recover high-fidelity 3D mesh models of objects from their noise-corrupted counterparts.
Recently, deep generative adversarial networks for image generation have advanced rapidly, yet only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remains a fundamental topic in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which are used to subdivide the mesh and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in both the normal and tangential directions), enabling the synthesis of geometric textures that cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.
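The fragment below sketches what a full 3D vertex displacement looks like in practice: each vertex receives a component along its normal plus a component projected into its tangent plane, which is exactly what a scalar displacement map cannot express. The function and its inputs are assumptions made for illustration, not the network's actual output layer.

```python
import numpy as np

def displace(vertices, normals, normal_mag, tangent_offsets):
    """vertices, normals, tangent_offsets: (n, 3) arrays; normal_mag: (n,).
    Normals are assumed to be unit length. Adds a scaled normal component and
    the part of tangent_offsets lying in each vertex's tangent plane, i.e.,
    a full 3D per-vertex displacement."""
    along_normal = np.sum(tangent_offsets * normals, axis=1, keepdims=True) * normals
    tangential = tangent_offsets - along_normal
    return vertices + normal_mag[:, None] * normals + tangential
```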