Z2P: Instant Rendering of Point Clouds


Abstract

We present a technique for rendering point clouds using a neural network. Existing point rendering techniques either use splatting or first reconstruct a surface mesh that can then be rendered. Both approaches require solving for a globally consistent point normal orientation, which is a challenging problem on its own. Furthermore, splatting techniques produce holes and overlaps, while mesh reconstruction is difficult, especially for thin surfaces and sheets. We cast the rendering problem as a conditional image-to-image translation problem. In our formulation, Z2P, depth-augmented point features as viewed from a target camera are translated directly by a neural network into rendered images, conditioned on control variables (e.g., color, light). We avoid the inevitable issues of splatting (holes and overlaps) and bypass both the notoriously challenging surface reconstruction problem and the estimation of oriented normals. Yet, our approach produces a rendered image as if a surface mesh had been reconstructed. We demonstrate that our framework produces plausible images, effectively handles noise, non-uniform sampling, and thin surfaces/sheets, and is fast.
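To make the pipeline concrete, the sketch below illustrates the general idea described above: points are splatted into a depth-augmented z-buffer from a target camera view, and a small conditional image-to-image network translates that buffer, together with control variables (light direction, color), into an RGB image. This is a minimal toy sketch, not the paper's implementation; the camera model, buffer layout, and the `splat_zbuffer` / `CondTranslator` names and network architecture are illustrative assumptions only.

```python
import torch
import torch.nn as nn


def splat_zbuffer(points, height=128, width=128, focal=100.0):
    """Project a point cloud (N, 3) in camera coordinates to a 1-channel z-buffer."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 1e-6
    u = (focal * x[valid] / z[valid] + width / 2).long().clamp(0, width - 1)
    v = (focal * y[valid] / z[valid] + height / 2).long().clamp(0, height - 1)
    inv_z = 1.0 / z[valid]  # nearer points get larger values
    # keep the nearest (largest inverse-depth) point per pixel
    flat = torch.zeros(height * width)
    flat.scatter_reduce_(0, v * width + u, inv_z, reduce="amax", include_self=True)
    return flat.view(1, height, width)


class CondTranslator(nn.Module):
    """Tiny conditional image-to-image network: z-buffer + control maps -> RGB."""

    def __init__(self, cond_dim=6):  # e.g. 3 for light direction + 3 for color
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + cond_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, zbuf, cond):
        # broadcast the control vector to a per-pixel conditioning map
        b, _, h, w = zbuf.shape
        cond_map = cond.view(b, -1, 1, 1).expand(b, cond.shape[1], h, w)
        return self.net(torch.cat([zbuf, cond_map], dim=1))


if __name__ == "__main__":
    # random points with x, y in [-0.5, 0.5] and z in [2, 3] (in front of the camera)
    pts = torch.rand(2048, 3) - torch.tensor([0.5, 0.5, -2.0])
    zbuf = splat_zbuffer(pts).unsqueeze(0)                 # (1, 1, H, W)
    cond = torch.tensor([[0.0, 0.0, 1.0, 0.8, 0.2, 0.2]])  # light direction + RGB color
    image = CondTranslator()(zbuf, cond)                   # (1, 3, H, W) rendered preview
    print(image.shape)
```

In the actual method, the translation network would be trained on pairs of z-buffers and rendered surface images so that, at test time, a single forward pass gives an instant preview of the point cloud under the requested color and lighting.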

Download