DAPnet: A Double Self-attention Convolutional Network for Point Cloud Semantic Labeling


Abstract

Airborne Laser Scanning (ALS) point clouds have complex structures, and their 3D semantic labeling remains a challenging task. It poses three problems: (1) the difficulty of classifying point clouds near the boundaries between objects of different classes, (2) the diversity of shapes within the same class, and (3) the scale differences between classes. In this study, we propose a novel double self-attention convolutional network, the DAPnet. The double self-attention comprises the point attention module (PAM) and the group attention module (GAM). For problem (1), the PAM effectively assigns different weights based on the relevance of point clouds in adjacent areas. For problem (2), the GAM enhances the correlation between groups, i.e., grouped features within the same class. To address problem (3), we adopt multiscale radii to construct the groups and concatenate the extracted hierarchical features with the output of the corresponding upsampling process. On the ISPRS 3D Semantic Labeling Contest dataset, the DAPnet achieves an overall accuracy of 90.7%, outperforming the benchmark of 85.2%. Ablation comparisons show that the PAM improves the model more effectively than the GAM. Incorporating the double self-attention module improves per-class accuracy by an average of 7%. Moreover, the DAPnet requires a training time similar to that of models without the attention modules to converge. The DAPnet can assign different weights to features based on the relevance between point clouds and their neighbors, which effectively improves classification performance. The source code is available at: https://github.com/RayleighChen/point-attention.
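The abstract describes the PAM as reweighting each point's features by their relevance to neighboring points. The paper's exact formulation is in the linked repository; as a rough illustration only, a generic self-attention over per-point features (the common pattern such modules follow, not necessarily the authors' implementation) can be sketched in NumPy. The function name and the residual connection here are assumptions for illustration:

```python
import numpy as np

def point_attention(features):
    """Illustrative sketch of a point-attention-style reweighting.

    `features` is an (N, C) array of per-point features. Each point's
    output is its input plus an attention-weighted sum of all point
    features, where the weights come from a softmax over pairwise
    feature similarities. This is a generic pattern, not the DAPnet code.
    """
    # Pairwise similarity between point features, shape (N, N).
    sim = features @ features.T
    # Row-wise softmax (shift by the row max for numerical stability).
    sim = sim - sim.max(axis=1, keepdims=True)
    attn = np.exp(sim)
    attn = attn / attn.sum(axis=1, keepdims=True)
    # Attention-weighted aggregation with a residual connection (assumed).
    return features + attn @ features
```

In a trained network the similarities would be computed from learned query/key projections rather than raw features, but the weighting-by-relevance idea is the same.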
