
A 3D Motion Vector Database for Dynamic Point Clouds

Published by: Andre Luis Souto Ferreira
Publication date: 2020
Research field: Electronic engineering
Paper language: English





Due to the large amount of data that point clouds represent and the differences in geometry between successive frames, generating motion vectors for an entire point cloud dataset may require significant time and computational resources. With that in mind, we provide a 3D motion vector database for all frames of two popular dynamic point cloud datasets. The motion vectors were obtained through a translational motion estimation procedure that partitions the point clouds into blocks of dimensions M x M x M, and for each block a motion vector is estimated. Our database contains motion vectors for M = 8 and M = 16. The goal of this work is to describe this publicly available 3D motion vector database, which can be used for different purposes, such as compression of dynamic point clouds.
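As a rough illustration of the block-based procedure the abstract describes, the sketch below partitions a current frame into M x M x M blocks and, for each occupied block, searches integer translations that minimize the mean nearest-neighbor distance to the previous frame. The search range, the matching criterion, and the brute-force nearest-neighbor lookup are simplifying assumptions for illustration, not the exact method used to build the database.

```python
import numpy as np

def block_motion_vectors(prev_pts, curr_pts, M=16, search_range=2):
    """Estimate one translational motion vector per occupied M x M x M block.

    A simplified sketch of translational motion estimation for dynamic point
    clouds: each occupied block of the current frame is matched against the
    previous frame by testing integer translations within +/-search_range and
    keeping the one with the smallest mean nearest-neighbor distance.
    """
    # Partition the current frame into M x M x M blocks by integer coordinates.
    keys = np.floor_divide(curr_pts.astype(int), M)
    vectors = {}
    offsets = range(-search_range, search_range + 1)
    for key in sorted({tuple(int(v) for v in k) for k in keys}):
        block = curr_pts[np.all(keys == key, axis=1)]
        best_mv, best_err = (0, 0, 0), np.inf
        for dx in offsets:
            for dy in offsets:
                for dz in offsets:
                    shifted = block + np.array([dx, dy, dz], dtype=float)
                    # Mean nearest-neighbor distance to the previous frame
                    # (brute force; a k-d tree would be used at realistic scale).
                    dists = np.linalg.norm(
                        shifted[:, None, :] - prev_pts[None, :, :], axis=2)
                    err = dists.min(axis=1).mean()
                    if err < best_err:
                        best_err, best_mv = err, (dx, dy, dz)
        vectors[key] = best_mv
    return vectors
```

For a current frame that is a pure translation of the previous one, the recovered vector is the negated shift (it maps the current block back onto the previous frame), which is the form useful for motion-compensated prediction.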


Read also

High Dynamic Range (HDR) displays and cameras are making their way into the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR (SHDR) database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.
The recently introduced coder based on the region-adaptive hierarchical transform (RAHT) for the compression of point cloud attributes was shown to have performance competitive with the state of the art, while being much less complex. In the paper "Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform", top performance was achieved using arithmetic coding (AC), while adaptive run-length Golomb-Rice (RLGR) coding was presented as a lower-performance, lower-complexity alternative. However, we have found that by reordering the RAHT coefficients we can greatly increase the runs of zeros and significantly improve the performance of the RLGR-based RAHT coder. As a result, the new coder, using ordered coefficients, was shown to outperform all other coders, including AC-based RAHT, at an even lower computational cost. We present new results and plots that extend those in the work of Queiroz and Chou to include the new results for RLGR-RAHT. We venture to say, based on the results herein, that RLGR-RAHT with sorted coefficients is the new state of the art in point cloud compression.
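To see why coefficient reordering helps a run-length coder, consider a toy sequence of quantized coefficients: zeros scattered among significant values produce many short runs, while grouping likely-zero coefficients together produces one long run that RLGR coding exploits efficiently. Sorting by magnitude below is only an illustrative stand-in; a real coder must use an ordering the decoder can reproduce, such as one derived from the RAHT structure, as in the paper.

```python
def zero_runs(coeffs):
    """Lengths of each maximal run of zeros in a coefficient sequence."""
    runs, n = [], 0
    for c in coeffs:
        if c == 0:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return runs

# Interleaved order: zeros scattered among significant coefficients.
interleaved = [5, 0, -2, 0, 0, 1, 0, 3, 0, 0, 0, 1, 0, 0]
# Grouping likely-zero (small-magnitude) coefficients at the end --
# a stand-in for the decoder-reproducible reordering of RAHT coefficients.
reordered = sorted(interleaved, key=abs, reverse=True)
print(zero_runs(interleaved))  # five short runs: [1, 2, 1, 3, 2]
print(zero_runs(reordered))    # one long run: [9]
```

The same nine zeros cost five run-length codewords in the first ordering but only one in the second, which is the effect the reordered RLGR-RAHT coder exploits.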
In this paper, we focus on subjective and objective Point Cloud Quality Assessment (PCQA) in an immersive environment and study the effect of geometry and texture attributes in compression distortion. Using a Head-Mounted Display (HMD) with six degrees of freedom, we establish a subjective PCQA database, named SIAT Point Cloud Quality Database (SIAT-PCQD). Our database consists of 340 distorted point clouds compressed by the MPEG point cloud encoder with the combination of 20 sequences and 17 pairs of geometry and texture quantization parameters. The impact of distorted geometry and texture attributes is further discussed in this paper. Then, we propose two projection-based objective quality evaluation methods, i.e., a weighted view-projection-based model and a patch-projection-based model. Our subjective database and findings can be used in point cloud processing, transmission, and coding, especially for virtual reality applications. The subjective dataset has been released in the public repository.
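A minimal, hypothetical sketch of the view-projection idea: both clouds (assumed normalized to the unit cube) are rendered as depth maps from the six axis-aligned views, a PSNR is computed per view, and the per-view scores are combined with weights. The resolution, depth handling, and uniform weights are assumptions for illustration; the actual SIAT-PCQD models are defined in the paper.

```python
import numpy as np

def view_projection_score(ref_pts, dist_pts, res=8, weights=None):
    """Toy weighted view-projection quality score (higher = more similar).

    Points are assumed normalized to [0, 1]^3. Each of the six axis-aligned
    views yields a depth map; per-view PSNRs are averaged with `weights`.
    """
    views = [(axis, sign) for axis in range(3) for sign in (+1, -1)]
    weights = weights or [1.0 / len(views)] * len(views)

    def depth_map(pts, axis, sign):
        uv = [i for i in range(3) if i != axis]
        img = np.zeros((res, res))
        pix = np.clip((pts[:, uv] * (res - 1)).astype(int), 0, res - 1)
        # Distance toward the viewer; nearer points overwrite farther ones.
        depth = pts[:, axis] if sign > 0 else 1.0 - pts[:, axis]
        for (u, v), d in zip(pix, depth):
            img[u, v] = max(img[u, v], d)
        return img

    score = 0.0
    for w, (axis, sign) in zip(weights, views):
        mse = np.mean((depth_map(ref_pts, axis, sign)
                       - depth_map(dist_pts, axis, sign)) ** 2)
        # Per-view PSNR with a floor on the MSE to avoid division by zero.
        score += w * 10.0 * np.log10(1.0 / max(mse, 1e-10))
    return score
```

An undistorted cloud scores the (floored) maximum, and any geometric distortion that moves points across projection pixels lowers the score, which is the behavior a projection-based PCQA metric relies on.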
With the help of the deep learning paradigm, many point cloud networks have been invented for visual analysis. However, there is great potential for development of these networks, since the given information of point cloud data has not been fully exploited. To improve the effectiveness of existing networks in analyzing point cloud data, we propose a plug-and-play module, PnP-3D, aiming to refine the fundamental point cloud feature representations by involving more local context and global bilinear response from explicit 3D space and implicit feature space. To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks, including classification, semantic segmentation, and object detection, where we select three state-of-the-art networks from each task for evaluation. Serving as a plug-and-play module, PnP-3D can significantly boost the performance of established networks. In addition to achieving state-of-the-art results on four widely used point cloud benchmarks, we present comprehensive ablation studies and visualizations to demonstrate our approach's advantages. The code will be available at https://github.com/ShiQiu0419/pnp-3d.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.