
Surface Light Field Compression using a Point Cloud Codec

Added by Xiang Zhang
Publication date: 2018
Language: English





Light field (LF) representations aim to provide photo-realistic, free-viewpoint viewing experiences. However, the most popular LF representations are images from multiple views. Multi-view image-based representations generally need to restrict the range or degrees of freedom of the viewing experience to what can be interpolated in the image domain, essentially because they lack explicit geometry information. We present a new surface light field (SLF) representation based on explicit geometry, and a method for SLF compression. First, we map the multi-view images of a scene onto a 3D geometric point cloud. The color of each point in the point cloud is a function of viewing direction known as a view map. We represent each view map efficiently in a B-Spline wavelet basis. This representation is capable of modeling diverse surface materials and complex lighting conditions in a highly scalable and adaptive manner. The coefficients of the B-Spline wavelet representation are then compressed spatially using existing point cloud compression (PCC) methods; to increase spatial correlation and thus improve compression efficiency, we introduce a smoothing term that makes the coefficients more similar across 3D space. On the decoder side, the scene is rendered efficiently from any viewing direction by reconstructing the view map at each point. In contrast to multi-view image-based LF approaches, our method supports photo-realistic rendering of real-world scenes from arbitrary viewpoints, i.e., with an unlimited six degrees of freedom (6DOF). In terms of rate and distortion, experimental results show that our method achieves superior performance with lower decoder complexity compared with a reference image-plus-geometry compression (IGC) scheme, indicating its potential in practical virtual and augmented reality applications.
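To make the fit-smooth-render pipeline concrete, here is a minimal sketch in Python. It is not the paper's implementation: a toy cosine basis over a single viewing angle stands in for the B-Spline wavelet basis on the viewing sphere, the smoothing is applied as a separate post-fit averaging step rather than as a term in a joint objective, and all function names are hypothetical.

```python
import numpy as np

def view_basis(angles, n_basis=8):
    """Toy basis matrix: columns are smooth functions of viewing angle."""
    angles = np.asarray(angles, dtype=float)
    return np.stack([np.cos(k * angles) for k in range(n_basis)], axis=1)

def fit_view_maps(angles, colors, n_basis=8):
    """Per-point least-squares fit of view-map coefficients.
    angles: (P, S) viewing angles of the S color samples at each of P points
    colors: (P, S) observed colors (one channel for simplicity)
    returns: (P, n_basis) coefficient array
    """
    coeffs = np.empty((len(colors), n_basis))
    for i, (a, y) in enumerate(zip(angles, colors)):
        A = view_basis(a, n_basis)                       # (S, n_basis)
        coeffs[i], *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coeffs

def smooth_coeffs(coeffs, neighbors, lam=0.5, iters=10):
    """Pull each point's coefficients toward its spatial neighbors' mean,
    mimicking the smoothing term that raises spatial correlation before
    the coefficients are handed to a point cloud codec."""
    c = coeffs.copy()
    for _ in range(iters):
        nbr_mean = np.stack([c[n].mean(axis=0) if len(n) else c[i]
                             for i, n in enumerate(neighbors)])
        c = (c + lam * nbr_mean) / (1.0 + lam)
    return c

def render_point(coeff, angle):
    """Decoder side: reconstruct the color seen from one viewing angle."""
    return view_basis([angle], len(coeff))[0] @ coeff
```

The point of the smoothing step is that coefficients which vary slowly across 3D space compress better under a spatial point cloud codec, at the cost of some fidelity in the fitted view maps.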



Related research

Anique Akhtar, Wen Gao, Li Li (2021)
Photo-realistic point cloud capture and transmission are the fundamental enablers for immersive visual communication. The coding process of dynamic point clouds, especially video-based point cloud compression (V-PCC) developed by the MPEG standardization group, is now delivering state-of-the-art performance in compression efficiency. V-PCC is based on projecting the point cloud patches to 2D planes and encoding the sequence as 2D texture and geometry patch sequences. However, the resulting quantization errors from coding can introduce compression artifacts that significantly degrade the quality of experience (QoE). In this work, we developed a novel out-of-the-loop point cloud geometry artifact removal solution that can significantly improve reconstruction quality without additional bandwidth cost. Our framework consists of a point cloud sampling scheme, an artifact removal network, and an aggregation scheme. The point cloud sampling scheme employs cube-based neighborhood patch extraction to divide the point cloud into patches. The geometry artifact removal network then processes these patches to obtain artifact-removed patches, which are merged back together by the aggregation scheme to obtain the final artifact-removed point cloud. We employ 3D deep convolutional feature learning for geometry artifact removal that jointly recovers both the quantization direction and the quantization noise level by exploiting projection and quantization priors. Simulation results demonstrate that the proposed method is highly effective and can considerably improve the quality of the reconstructed point cloud.
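A hedged sketch of the cube-based patch extraction and aggregation described above (non-overlapping axis-aligned cubes for simplicity; the cube size and function names are illustrative, and the artifact-removal network itself is omitted):

```python
import numpy as np
from collections import defaultdict

def extract_cube_patches(points, cube_size=32.0):
    """Partition a point cloud (N, 3) into axis-aligned cubes; each cube's
    points form one patch for the artifact-removal network, keyed by its
    integer cube coordinates so the split is easy to undo."""
    buckets = defaultdict(list)
    keys = np.floor(points / cube_size).astype(np.int64)
    for idx, key in enumerate(map(tuple, keys)):
        buckets[key].append(idx)
    return {k: points[v] for k, v in buckets.items()}

def aggregate_patches(patches):
    """Merge (possibly denoised) patches back into one point cloud; point
    order is irrelevant for an unordered point set."""
    return np.concatenate(list(patches.values()), axis=0)
```

Keying patches by their cube coordinates keeps the split invertible, so the aggregation step can merge the processed patches back into a single cloud.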
The quality assessment of light field images presents new challenges to conventional compression methods, as spatial quality is affected by the optical distortion of capturing devices, while angular consistency affects the performance of dynamic rendering applications. In this paper, we propose a two-pass encoding system for pseudo-temporal-sequence-based light field image compression with a novel frame-level bit allocation framework that optimizes spatial quality and angular consistency simultaneously. Frame-level rate-distortion models are estimated during the first pass, and the second pass performs the actual encoding with optimized bit allocations given by a two-step convex programming procedure. The proposed framework supports various encoder configurations. Experimental results show that, compared to the anchor HM 16.16 (HEVC reference software), the proposed two-pass encoding system on average achieves 11.2% to 11.9% BD-rate reductions for the all-intra configuration, 15.8% to 32.7% for the random-access configuration, and 12.1% to 15.7% for the low-delay configuration. The resulting bit errors are limited, and the total time cost is less than twice that of the one-pass anchor. Compared with our earlier method based on the low-delay configuration, the proposed system improves BD-rate reduction by 3.1% to 8.3%, reduces bit errors by more than 60%, and achieves a more than 12x speedup.
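The flavor of the second-pass optimization can be illustrated with a textbook convex bit-allocation problem. The exponential rate-distortion model below is a stand-in for the frame-level models estimated during the first pass, and allocate_bits is a hypothetical name; the paper's actual two-step convex program also accounts for angular consistency.

```python
import numpy as np

def allocate_bits(a, R_total):
    """Minimize sum_i a_i * 2**(-2 * R_i) subject to sum_i R_i = R_total
    and R_i >= 0. The equal-slope condition gives a closed form; frames
    whose optimal rate would be negative are clamped to zero and the
    remainder re-solved (simple reverse water-filling)."""
    a = np.asarray(a, dtype=float)
    R = np.zeros_like(a)
    active = np.ones(len(a), dtype=bool)
    while True:
        idx = np.flatnonzero(active)
        log_a = np.log2(a[idx])
        R_act = R_total / len(idx) + 0.5 * (log_a - log_a.mean())
        if (R_act >= 0.0).all():
            R[idx] = R_act
            return R
        active[idx[R_act < 0.0]] = False

# Example: frames with steeper distortion models receive more bits.
# allocate_bits([4.0, 1.0, 0.25], R_total=3.0) -> array([2., 1., 0.])
```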
In video-based dynamic point cloud compression (V-PCC), 3D point clouds are projected onto 2D images so that they can be compressed with existing video codecs. However, these codecs were originally designed for natural visual signals and fail to account for the characteristics of point clouds; as a result, problems remain in the compression of the geometry information generated from point clouds. First, the distortion model in the existing rate-distortion optimization (RDO) is not consistent with geometry quality assessment metrics. Second, the prediction methods in video codecs fail to account for the fact that the highest depth values of the far layer are greater than or equal to the corresponding lowest depth values of the near layer. This paper proposes an advanced geometry surface coding (AGSC) method for dynamic point cloud (DPC) compression. The proposed method consists of two modules: an error-projection-model-based (EPM-based) RDO and an occupancy-map-based (OM-based) merge prediction. First, the EPM is proposed to describe the relationship between the distortion model in the existing video codec and the geometry quality metric. Second, the EPM-based RDO method projects the existing distortion model onto the plane normal and is simplified by estimating the average normal vectors of coding units (CUs). Finally, we propose the OM-based merge prediction approach, in which the prediction pixels of merge modes are refined based on the occupancy map. Experiments on standard test point clouds show that the proposed method achieves an average 9.84% bitrate saving for geometry compression.
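The intuition behind the EPM-based RDO can be sketched as follows: the point-to-plane geometry metric penalizes only the error component along the surface normal, so a per-CU average normal lets the codec re-weight its 2D depth-map distortion toward that metric. The cost form and names below are illustrative, not the paper's exact model.

```python
import numpy as np

def point_to_plane_mse(orig, recon, normals):
    """Geometry distortion as used by PCC quality metrics: only the error
    component along each point's unit surface normal is penalized."""
    err = np.einsum('ij,ij->i', recon - orig, normals)   # (N,)
    return float(np.mean(err ** 2))

def cu_average_normal(normals):
    """Average unit normal of one coding unit (CU)."""
    n = normals.mean(axis=0)
    return n / np.linalg.norm(n)

def weighted_rd_cost(depth_sse, rate, cu_normals, proj_axis, lam):
    """Toy RDO cost: scale the depth-map SSE by the squared cosine between
    the CU's average normal and the patch projection axis, so depth error
    that barely moves points along their normals is penalized less."""
    w = float(cu_average_normal(cu_normals) @ proj_axis) ** 2
    return w * depth_sse + lam * rate
```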
Compared with conventional images and video, light field images introduce a weight channel, as well as the visual consistency of rendered views; both must be taken into account when compressing the pseudo-temporal sequence (PTS) created from light field images. In this paper, we propose a novel frame-level bit allocation framework for PTS coding. A joint model that measures weighted distortion and visual consistency, combined with an iterative encoding system, yields the optimal bit allocation for each frame by solving a convex optimization problem. Experimental results show that the proposed framework is effective in producing the desired distortion distribution based on the weights, and achieves up to 24.7% BD-rate reduction compared to the default rate control algorithm.
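As a rough sketch of such a joint objective, the toy model below combines weighted per-frame distortion with a consistency penalty on distortion differences between consecutive frames, and hands it to a generic constrained solver. The R-D model, the penalty form, and the function names are assumptions; the paper's actual formulation differs and is solved by its own iterative convex procedure.

```python
import numpy as np
from scipy.optimize import minimize

def joint_cost(R, a, w, mu):
    """Weighted distortion plus a consistency penalty on the distortion
    difference between consecutive rendered frames."""
    D = a * np.power(2.0, -2.0 * R)          # toy per-frame R-D model
    return float(np.sum(w * D) + mu * np.sum(np.diff(D) ** 2))

def allocate(a, w, mu, R_total):
    """Frame-level rates minimizing the joint cost under a total budget."""
    a, w = np.asarray(a, dtype=float), np.asarray(w, dtype=float)
    n = len(a)
    res = minimize(joint_cost, np.full(n, R_total / n), args=(a, w, mu),
                   bounds=[(0.0, None)] * n,
                   constraints={'type': 'eq',
                                'fun': lambda R: R.sum() - R_total})
    return res.x
```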
Compression of point clouds has so far been confined to coding the positions of a discrete set of points in space and the attributes of those discrete points. We introduce an alternative approach based on volumetric functions, which are functions defined not just on a finite set of points, but throughout space. As in regression analysis, volumetric functions are continuous functions that are able to interpolate values on a finite set of points as linear combinations of continuous basis functions. Using a B-spline wavelet basis, we are able to code volumetric functions representing both geometry and attributes. Geometry is represented implicitly as the level set of a volumetric function (the signed distance function or similar). Attributes are represented by a volumetric function whose coefficients can be regarded as a critically sampled orthonormal transform that generalizes the recent successful region-adaptive hierarchical (or Haar) transform to higher orders. Experimental results show that both geometry and attribute compression using volumetric functions improve over those used in the emerging MPEG Point Cloud Compression standard.
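A minimal sketch of the volumetric-function idea: values are defined everywhere in space as linear combinations of continuous basis functions, and geometry is the zero level set of a signed distance volume. Here an order-1 (trilinear) B-spline basis on a voxel grid stands in for the B-spline wavelet basis, and all names are illustrative.

```python
import numpy as np

def trilinear_eval(coeffs, x):
    """Evaluate a volumetric function sum_k c_k * B_k(x), where B_k are
    trilinear hat functions centered on integer grid nodes.
    coeffs: (Nx, Ny, Nz) node coefficients; x: (3,) query in grid units,
    assumed to lie inside the grid."""
    i = np.floor(x).astype(int)
    f = x - i
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1.0 - f[0]) *
                     (f[1] if dy else 1.0 - f[1]) *
                     (f[2] if dz else 1.0 - f[2]))
                val += w * coeffs[i[0] + dx, i[1] + dy, i[2] + dz]
    return val

def inside(sdf_coeffs, x):
    """Geometry as a level set: x is inside the surface where the signed
    distance volume evaluates negative; the surface is the zero crossing."""
    return trilinear_eval(sdf_coeffs, x) < 0.0

# Example: signed distance to a sphere of radius 6 centered at (8, 8, 8),
# sampled on a 17^3 grid of nodes.
g = np.arange(17.0)
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
sdf = np.sqrt((X - 8) ** 2 + (Y - 8) ** 2 + (Z - 8) ** 2) - 6.0
print(inside(sdf, np.array([8.0, 8.0, 8.0])))   # True: center is inside
```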
