Model-Centric Volumetric Point Cloud Attributes


Abstract

Point clouds have recently gained interest, especially for real-time applications and for 3D-scanned material, such as that used in autonomous driving, architecture, and engineering to model real estate for renovation or display. Point clouds carry geometry information and attributes such as color. Whether the color is unique or direction-dependent (in the case of plenoptic point clouds), it reflects the colors observed by cameras placed around the object. Hence, not only are the viewing positions assumed, but the illumination spectrum and illumination geometry are also implicit. We propose a model-centric description of the 3D object that is independent of the illumination and of the position of the cameras. We want to be able to describe the objects themselves such that, at a later stage, the renderer of the model may decide where to place the illumination, from which it may calculate the image seen by a given camera. We want to be able to describe transparent or translucent objects, mirrors, fishbowls, fog, and smoke. Volumetric clouds may allow us to describe the air, however ``empty'', and to introduce air particles, in a manner independent of the viewer position. For that, we rely on some electromagnetic properties to arrive at seven attributes per voxel that describe the material and its color or transparency: three attributes for the transmissivity of each color, three for the attenuation of each color, and one for diffuseness. These attributes give information about the object to the renderer, which decides how to render and depict each object.
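As a rough illustration of the seven-attribute scheme described above, each voxel could carry three per-color transmissivity values, three per-color attenuation values, and a single diffuseness value. The sketch below is a minimal, hypothetical layout; the field names, value ranges, and example values are assumptions for illustration only, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class VoxelAttributes:
    """Hypothetical per-voxel record with the seven model-centric attributes.

    The paper states only that three attributes describe transmissivity per
    color, three describe attenuation per color, and one describes
    diffuseness; the names and [0, 1] ranges here are illustrative guesses.
    """
    transmissivity_rgb: tuple  # (R, G, B) transmissivity, each assumed in [0, 1]
    attenuation_rgb: tuple     # (R, G, B) attenuation, each assumed in [0, 1]
    diffuseness: float         # single scalar diffuseness, assumed in [0, 1]


# Example: a reddish, mildly translucent, fairly diffuse voxel (made-up values).
voxel = VoxelAttributes(
    transmissivity_rgb=(0.6, 0.3, 0.3),
    attenuation_rgb=(0.1, 0.4, 0.4),
    diffuseness=0.8,
)
```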
