Hyperspectral Image Classification with Spatial Consistence Using Fully Convolutional Spatial Propagation Network


Abstract

In recent years, deep convolutional neural networks (CNNs) have shown an impressive ability to represent hyperspectral images (HSIs) and have achieved encouraging results in HSI classification. However, existing CNN-based models operate at the patch level, where each pixel is classified separately using a patch of the image around it. This patch-level classification leads to a large amount of repeated computation, and it is difficult to determine a patch size that benefits classification accuracy. In addition, conventional CNN models perform convolutions with local receptive fields, which fail to model contextual spatial information. To overcome these limitations, we propose a novel end-to-end, pixels-to-pixels fully convolutional spatial propagation network (FCSPN) for HSI classification. The FCSPN consists of a 3D fully convolutional network (3D-FCN) and a convolutional spatial propagation network (CSPN). Specifically, the 3D-FCN is first introduced for reliable preliminary classification; within it, a novel dual separable residual (DSR) unit is proposed to capture spectral and spatial information simultaneously with fewer parameters. Moreover, a channel-wise attention mechanism is adopted in the 3D-FCN to select the most informative channels from redundant channel information. Finally, the CSPN is introduced to capture the spatial correlations in the HSI by learning a local linear spatial propagation, which maintains the spatial consistency of the HSI and further refines the classification results. Experimental results on three HSI benchmark datasets demonstrate that the proposed FCSPN achieves state-of-the-art performance in HSI classification.
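To make the described pipeline more concrete, below is a minimal, illustrative PyTorch sketch of the overall FCSPN structure. It is not the authors' implementation: the DSR unit is approximated here as factorized spectral and spatial 3D convolutions with a residual connection, the channel-wise attention as a squeeze-and-excitation-style block, and the CSPN refinement as an iterative 8-neighbour local linear propagation. Layer widths, kernel sizes, and the number of propagation iterations are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a 3D-FCN backbone with DSR-style units, channel attention,
# and CSPN-style spatial refinement (assumed details, not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DSRUnit(nn.Module):
    """Dual separable residual unit: factorized spectral + spatial 3D convs (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        # Spectral factor: convolve only along the band dimension.
        self.spectral = nn.Conv3d(channels, channels, kernel_size=(7, 1, 1), padding=(3, 0, 0))
        # Spatial factor: convolve only in the spatial plane.
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.bn = nn.BatchNorm3d(channels)

    def forward(self, x):
        out = F.relu(self.spectral(x))
        out = self.spatial(out)
        return F.relu(self.bn(out) + x)  # residual connection


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention (an assumption)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3, 4)))          # global average pooling over bands and space
        return x * w.view(x.size(0), -1, 1, 1, 1)   # reweight feature channels


class FCSPNSketch(nn.Module):
    """3D-FCN backbone followed by CSPN-style iterative refinement."""
    def __init__(self, num_bands, num_classes, width=16, cspn_iters=8):
        super().__init__()
        self.stem = nn.Conv3d(1, width, kernel_size=3, padding=1)
        self.backbone = nn.Sequential(DSRUnit(width), DSRUnit(width))
        self.attn = ChannelAttention(width)
        # Collapse the spectral dimension, then predict per-pixel class scores
        # and an 8-neighbour affinity map that drives the CSPN refinement.
        self.score = nn.Conv2d(width * num_bands, num_classes, kernel_size=1)
        self.affinity = nn.Conv2d(width * num_bands, 8, kernel_size=1)
        self.cspn_iters = cspn_iters

    def forward(self, x):                 # x: (B, 1, bands, H, W)
        f = self.attn(self.backbone(self.stem(x)))
        f = f.flatten(1, 2)               # (B, width*bands, H, W)
        logits = self.score(f)            # coarse per-pixel classification
        kappa = torch.softmax(self.affinity(f), dim=1)
        # CSPN-style refinement: each pixel's scores are repeatedly replaced by
        # a learned convex combination of its 8 neighbours (local linear spatial
        # propagation), which smooths the map and enforces spatial consistency.
        for _ in range(self.cspn_iters):
            pad = F.pad(logits, (1, 1, 1, 1), mode="replicate")
            neigh = torch.stack([
                pad[:, :, i:i + logits.size(2), j:j + logits.size(3)]
                for i in range(3) for j in range(3) if not (i == 1 and j == 1)
            ], dim=2)                      # (B, C, 8, H, W)
            logits = (kappa.unsqueeze(1) * neigh).sum(dim=2)
        return logits


if __name__ == "__main__":
    model = FCSPNSketch(num_bands=30, num_classes=9)
    dummy = torch.randn(1, 1, 30, 64, 64)     # one 64x64 scene with 30 bands
    print(model(dummy).shape)                  # torch.Size([1, 9, 64, 64])
```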
