MeshCNN Fundamentals: Geometric Learning through a Reconstructable Representation


Abstract

Mesh-based learning is currently one of the most popular approaches to learning on shapes, and the most established backbone in this field is MeshCNN. In this paper, we propose infusing MeshCNN with geometric reasoning to achieve higher-quality learning. Through careful analysis of how geometry is represented throughout the network, we argue that this representation should be invariant to rigid motions and should allow reconstruction of the original geometry. Accordingly, we introduce the first and second fundamental forms as an edge-centric, rotation- and translation-invariant, reconstructable representation. In addition, we update the originally proposed pooling scheme to be more geometrically driven. We validate our analysis through experimentation, and present consistent improvement over the MeshCNN baseline, as well as over more elaborate state-of-the-art architectures. Furthermore, we demonstrate that this fundamental-forms-based representation opens the door to accessible generative machine learning over meshes.
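To make the idea of an edge-centric, rigid-motion-invariant representation concrete, the following sketch computes two simple per-edge quantities on a triangle mesh: edge length and the dihedral angle between the two adjacent faces. These are stand-ins for intrinsic/extrinsic curvature information, not the paper's exact fundamental-forms features; the function name and mesh layout are illustrative assumptions. The key property demonstrated is that the features are unchanged under rotations and translations of the vertex positions.

```python
import numpy as np

def edge_features(verts, faces):
    """Per-edge features invariant to rigid motion (illustrative sketch).

    verts: (V, 3) array of vertex positions.
    faces: list of (i, j, k) vertex-index triangles.
    Returns {edge: (length, dihedral_angle)} keyed by sorted vertex pairs.
    """
    # Collect the unit normal of each face incident to every edge.
    edge_normals = {}
    for i, j, k in faces:
        n = np.cross(verts[j] - verts[i], verts[k] - verts[i])
        n = n / np.linalg.norm(n)
        for a, b in ((i, j), (j, k), (k, i)):
            edge_normals.setdefault(tuple(sorted((a, b))), []).append(n)

    feats = {}
    for edge, normals in edge_normals.items():
        length = np.linalg.norm(verts[edge[0]] - verts[edge[1]])
        if len(normals) == 2:
            # Angle between the two adjacent face normals (interior edge).
            cos_a = np.clip(np.dot(normals[0], normals[1]), -1.0, 1.0)
            dihedral = np.arccos(cos_a)
        else:
            dihedral = 0.0  # boundary edge: no second face
        feats[edge] = (length, dihedral)
    return feats

# A tetrahedron as a toy mesh.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Apply an arbitrary rotation and translation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
moved = verts @ R.T + np.array([2.0, -1.0, 0.5])

f0 = edge_features(verts, faces)
f1 = edge_features(moved, faces)
for edge in f0:
    assert np.allclose(f0[edge], f1[edge])  # features survive rigid motion
```

Because lengths and inter-normal angles are built only from differences and inner products of vertex positions, any rigid motion leaves them untouched, which is the invariance property the abstract argues a learned mesh representation should have.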
