
Unsupervised Learning for Cuboid Shape Abstraction via Joint Segmentation from Point Clouds

Published by Kaizhi Yang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Representing complex 3D objects as simple geometric primitives, known as shape abstraction, is important for geometric modeling, structural analysis, and shape synthesis. In this paper, we propose an unsupervised shape abstraction method to map a point cloud into a compact cuboid representation. We jointly predict cuboid allocation as part segmentation and cuboid shapes, and enforce consistency between the segmentation and the shape abstraction for self-learning. For the cuboid abstraction task, we transform the input point cloud into a set of parametric cuboids using a variational auto-encoder network. The segmentation network allocates each point to a cuboid according to the point-cuboid affinity. Without manual annotations of parts in point clouds, we design four novel losses to jointly supervise the two branches in terms of geometric similarity and cuboid compactness. We evaluate our method on multiple shape collections and demonstrate its superiority over existing shape abstraction methods. Moreover, based on our network architecture and learned representations, our approach supports various applications including structured shape generation, shape interpolation, and structural shape clustering.
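
The abstract describes a segmentation branch that assigns points to cuboids via a point-cuboid affinity. As a minimal sketch of that idea (not the authors' implementation), the snippet below parameterizes each cuboid by a center, half-extents, and a rotation, and turns approximate point-to-cuboid distances into a soft assignment; the parameterization, the distance approximation, and the `temperature` parameter are all assumptions for illustration.

```python
# Hypothetical sketch (not the authors' code): soft assignment of points to
# predicted cuboids via a point-cuboid affinity, as described in the abstract.
import torch

def point_to_cuboid_distance(points, centers, half_extents, rotations):
    """Approximate distance from each point to each cuboid surface.

    points:       (N, 3)    input point cloud
    centers:      (K, 3)    cuboid centers
    half_extents: (K, 3)    cuboid half sizes along local axes
    rotations:    (K, 3, 3) rotation matrices (world -> cuboid local frame)
    returns:      (N, K)    non-negative distances
    """
    # Express every point in every cuboid's local coordinate frame.
    local = torch.einsum('kij,nkj->nki', rotations,
                         points[:, None, :] - centers[None, :, :])    # (N, K, 3)
    # Distance to an axis-aligned box centered at the origin.
    outside = torch.clamp(local.abs() - half_extents[None, :, :], min=0.0)
    return outside.norm(dim=-1)                                        # (N, K)

def soft_assignment(points, centers, half_extents, rotations, temperature=0.05):
    """Softmax over negative distances gives a per-point cuboid affinity."""
    dist = point_to_cuboid_distance(points, centers, half_extents, rotations)
    return torch.softmax(-dist / temperature, dim=-1)                  # (N, K)

if __name__ == '__main__':
    N, K = 1024, 8
    pts = torch.randn(N, 3)
    ctr = torch.randn(K, 3)
    ext = torch.rand(K, 3) * 0.5 + 0.1
    rot = torch.eye(3).expand(K, 3, 3).contiguous()  # identity rotations for the demo
    print(soft_assignment(pts, ctr, ext, rot).shape)  # torch.Size([1024, 8])
```

In the paper's setting the cuboid parameters would come from the variational auto-encoder branch and the affinities from the segmentation branch; this sketch only illustrates how a distance-based affinity can couple the two.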




Read also

We develop a novel learning scheme named Self-Prediction for 3D instance and semantic segmentation of point clouds. Distinct from most existing methods that focus on designing convolutional operators, our method designs a new learning scheme to enhance point-relation exploration for better segmentation. More specifically, we divide a point cloud sample into two subsets and construct a complete graph based on their representations. Then we use a label propagation algorithm to predict the labels of one subset given the labels of the other. By training with this Self-Prediction task, the backbone network is constrained to fully explore relational context, geometric, and shape information and to learn more discriminative features for segmentation. Moreover, a general associated framework equipped with our Self-Prediction scheme is designed to enhance instance and semantic segmentation simultaneously, where instance and semantic representations are combined to perform Self-Prediction. In this way, instance and semantic segmentation collaborate and mutually reinforce each other. Significant performance improvements over the baseline are achieved on S3DIS and ShapeNet for both instance and semantic segmentation. Our method achieves state-of-the-art instance segmentation results on S3DIS and semantic segmentation results comparable to the state of the art on S3DIS and ShapeNet, using only PointNet++ as the backbone network.
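The Self-Prediction abstract above splits a point cloud into two subsets and uses label propagation to predict one subset's labels from the other. Below is a minimal sketch of such a propagation step under a Gaussian-affinity assumption; `propagate_labels`, `sigma`, and the one-step closed-form propagation are illustrative choices, not the paper's exact algorithm.

```python
# Hypothetical sketch (not the authors' code): predict labels of one subset of
# points from the other via similarity-based label propagation, in the spirit
# of the Self-Prediction task described above.
import numpy as np

def propagate_labels(feat_a, labels_a, feat_b, num_classes, sigma=1.0):
    """Given features/labels of subset A, predict soft labels for subset B.

    feat_a:   (Na, D) features of the labeled subset
    labels_a: (Na,)   integer labels of subset A
    feat_b:   (Nb, D) features of the subset to be predicted
    returns:  (Nb, num_classes) soft label predictions
    """
    # Pairwise Gaussian affinities between B and A (the cross edges of the
    # complete graph that carry label information in this one-step variant).
    d2 = ((feat_b[:, None, :] - feat_a[None, :, :]) ** 2).sum(-1)  # (Nb, Na)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-8                        # row-normalize
    y_a = np.eye(num_classes)[labels_a]                             # one-hot (Na, C)
    return w @ y_a                                                  # (Nb, C)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    fa, fb = rng.normal(size=(64, 32)), rng.normal(size=(48, 32))
    la = rng.integers(0, 4, size=64)
    print(propagate_labels(fa, la, fb, num_classes=4).shape)        # (48, 4)
```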
Although unsupervised feature learning has demonstrated its advantages in reducing the workload of data labeling and network design in many fields, existing unsupervised 3D learning methods still cannot offer a generic network for various shape analysis tasks with performance competitive with supervised methods. In this paper, we propose an unsupervised method for learning a generic and efficient shape encoding network for different shape analysis tasks. The key idea of our method is to jointly encode and learn shape and point features from unlabeled 3D point clouds. For this purpose, we adapt HR-Net to octree-based convolutional neural networks for jointly encoding shape and point features with fused multiresolution subnetworks, and design a simple yet efficient Multiresolution Instance Discrimination (MID) loss for jointly learning the shape and point features. Our network takes a 3D point cloud as input and outputs both shape and point features. After training, the network is concatenated with simple task-specific back-end layers and fine-tuned for different shape analysis tasks. We evaluate the efficacy and generality of our method and validate our network and loss design on a set of shape analysis tasks, including shape classification, semantic shape segmentation, and shape registration. With simple back-ends, our network demonstrates the best performance among all unsupervised methods and achieves performance competitive with supervised methods, especially in tasks with a small labeled dataset. For fine-grained shape segmentation, our method even surpasses existing supervised methods by a large margin.
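The MID abstract above is built around an instance-discrimination objective over shape and point features. The sketch below shows a generic instance-discrimination loss against a memory bank of per-instance vectors, as a rough analogue only; the bank layout, the temperature `tau`, and the function names are assumptions, not the paper's multiresolution loss.

```python
# Hypothetical sketch (not the authors' code): an instance-discrimination style
# loss in the spirit of the MID objective, where each training instance is
# treated as its own class against a memory bank of class vectors.
import torch
import torch.nn.functional as F

def instance_discrimination_loss(features, instance_ids, memory_bank, tau=0.07):
    """Cross-entropy of normalized features against a bank of instance vectors.

    features:     (B, D)  encoded shape or point features
    instance_ids: (B,)    index of the instance each feature belongs to
    memory_bank:  (M, D)  one stored vector per training instance
    """
    f = F.normalize(features, dim=-1)
    bank = F.normalize(memory_bank, dim=-1)
    logits = f @ bank.t() / tau            # (B, M) cosine similarities as logits
    return F.cross_entropy(logits, instance_ids)

if __name__ == '__main__':
    feats = torch.randn(16, 128, requires_grad=True)
    ids = torch.randint(0, 1000, (16,))
    bank = torch.randn(1000, 128)
    loss = instance_discrimination_loss(feats, ids, bank)
    loss.backward()
    print(float(loss))
```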
Point clouds provide a compact and efficient representation of 3D shapes. While deep neural networks have achieved impressive results on point cloud learning tasks, they require massive amounts of manually labeled data, which can be costly and time-c onsuming to collect. In this paper, we leverage 3D self-supervision for learning downstream tasks on point clouds with fewer labels. A point cloud can be rotated in infinitely many ways, which provides a rich label-free source for self-supervision. We consider the auxiliary task of predicting rotations that in turn leads to useful features for other tasks such as shape classification and 3D keypoint prediction. Using experiments on ShapeNet and ModelNet, we demonstrate that our approach outperforms the state-of-the-art. Moreover, features learned by our model are complementary to other self-supervised methods and combining them leads to further performance improvement.
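The rotation-prediction pretext task above can be illustrated with a small data-preparation sketch: rotate a cloud by one of a fixed set of angles and keep the rotation index as the self-supervised label. The 18-way z-axis discretization and the function names are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch (not the authors' code): a rotation-prediction pretext
# task. A point cloud is rotated by one of a fixed set of angles and a network
# would be trained to classify which rotation was applied.
import math
import torch

NUM_ROTATIONS = 18  # assumed discretization: 20-degree steps about the z-axis

def rotate_z(points, angle):
    """Rotate an (N, 3) point cloud about the z-axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.t()

def make_rotation_sample(points):
    """Return a rotated copy of `points` and the index of the applied rotation."""
    label = torch.randint(0, NUM_ROTATIONS, ()).item()
    angle = 2.0 * math.pi * label / NUM_ROTATIONS
    return rotate_z(points, angle), label

if __name__ == '__main__':
    cloud = torch.randn(1024, 3)
    rotated, label = make_rotation_sample(cloud)
    print(rotated.shape, label)  # the pretext classifier would predict `label`
```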
Xiang Li, Lingjing Wang, Yi Fang (2020)
We propose a self-supervised method for partial point set registration. While recently proposed learning-based methods have achieved impressive registration performance on full shape observations, these methods mostly suffer from performance degradation when dealing with partial shapes. To bridge the performance gap between partial and full point set registration, we propose to incorporate a shape completion network to benefit the registration process. To achieve this, we design a latent code for each pair of shapes, which can be regarded as a geometric encoding of the target shape. By doing so, our model does not need an explicit feature embedding network to learn the feature encodings. More importantly, both our shape completion network and the point set registration network take the shared latent codes as input, and these codes are optimized along with the parameters of the two decoder networks during training. The point set registration process can thus benefit from the joint optimization of the latent codes, which are enforced to represent full shapes rather than partial ones. In the inference stage, we fix the network parameters and optimize the latent codes to obtain the optimal shape completion and registration results. Our proposed method is purely unsupervised and does not need any ground-truth supervision. Experiments on the ModelNet40 dataset demonstrate the effectiveness of our model for partial point set registration.
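The registration abstract above optimizes shared latent codes jointly with decoder networks and, at inference, freezes the networks and optimizes only the codes. The sketch below shows that inference-time pattern in an auto-decoder style with a placeholder decoder and a Chamfer loss; all names and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): inference-time optimization of a
# per-shape latent code with a frozen decoder, in the auto-decoder spirit
# described above. The decoder architecture and the loss are placeholders.
import torch
import torch.nn as nn

def chamfer(a, b):
    """Symmetric Chamfer distance between two (N, 3) point sets."""
    d = torch.cdist(a, b)                                  # (Na, Nb)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

decoder = nn.Sequential(                                   # placeholder decoder
    nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1024 * 3))
for p in decoder.parameters():                             # network weights stay fixed
    p.requires_grad_(False)

target = torch.randn(1024, 3)                              # observed (partial) shape
latent = torch.zeros(128, requires_grad=True)              # latent code to optimize
opt = torch.optim.Adam([latent], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    pred = decoder(latent).view(1024, 3)                   # decoded full shape
    loss = chamfer(pred, target)
    loss.backward()
    opt.step()
print(float(loss))
```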
In this work, we propose UPDesc, an unsupervised method to learn point descriptors for robust point cloud registration. Our work builds upon a recent supervised 3D CNN-based descriptor extraction framework, namely 3DSmoothNet, which leverages a voxel-based representation to parameterize the surrounding geometry of interest points. Instead of using a predefined fixed-size local support in voxelization, which potentially limits access to richer local geometry information, we propose to learn the support size in a data-driven manner. To this end, we design a differentiable voxelization module that can back-propagate gradients to the support size optimization. To optimize descriptor similarity, the prior 3D CNN work and other supervised methods require abundant correspondence labels or pose annotations of point clouds for crafting metric learning losses. In contrast, we show that unsupervised learning of descriptor similarity can be achieved by performing geometric registration within networks. Our learning objectives consider descriptor similarity both across and within point clouds without supervision. Through extensive experiments on point cloud registration benchmarks, we show that our learned descriptors yield superior performance over existing unsupervised methods.
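The UPDesc abstract above introduces a differentiable voxelization whose support size receives gradients. As a loose illustration only, the sketch below builds a soft occupancy grid whose support radius is a learnable parameter; the Gaussian weighting, the grid resolution, and the class name `SoftVoxelizer` are assumptions, not the paper's module.

```python
# Hypothetical sketch (not the authors' code): a soft voxelization in which the
# local support radius is a learnable parameter, so gradients can reach the
# support size as described above.
import torch

class SoftVoxelizer(torch.nn.Module):
    def __init__(self, resolution=16, init_radius=0.3):
        super().__init__()
        self.resolution = resolution
        # Learnable support radius, optimized jointly with the descriptor network.
        self.log_radius = torch.nn.Parameter(torch.tensor(init_radius).log())

    def forward(self, points):
        """points: (N, 3) local neighborhood centered at an interest point.
        Returns an (R, R, R) soft occupancy grid covering [-radius, radius]^3."""
        radius = self.log_radius.exp()
        r = self.resolution
        lin = torch.linspace(-1.0, 1.0, r, device=points.device) * radius
        gx, gy, gz = torch.meshgrid(lin, lin, lin, indexing='ij')
        centers = torch.stack([gx, gy, gz], dim=-1).reshape(-1, 3)   # (R^3, 3)
        # Gaussian contribution of every point to every voxel center.
        d2 = torch.cdist(centers, points) ** 2                       # (R^3, N)
        sigma = radius / r
        occ = torch.exp(-d2 / (2 * sigma ** 2)).sum(dim=1)
        return occ.reshape(r, r, r)

if __name__ == '__main__':
    vox = SoftVoxelizer()
    grid = vox(torch.randn(512, 3) * 0.2)
    grid.sum().backward()                      # gradient flows to log_radius
    print(grid.shape, vox.log_radius.grad is not None)
```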