
Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network

 Added by Liupeng Lin
 Publication date 2021
Research language: English





Data fusion technology aims to aggregate the characteristics of different data and obtain products that combine the advantages of multiple data sources. To solve the problem of reduced resolution of PolSAR images caused by system limitations, we propose a fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and single-polarization SAR (SinSAR) images to generate high-resolution PolSAR (HR-PolSAR) images. To take advantage of the polarimetric information of the low-resolution PolSAR (LR-PolSAR) image and the spatial information of the high-resolution single-polarization SAR (HR-SinSAR) image, we propose a fusion framework that jointly uses the LR-PolSAR and HR-SinSAR images and design a cross-attention mechanism to extract features from the joint input data. In addition, based on the physical imaging mechanism, we design a PolSAR polarimetric loss function to constrain network training. The experimental results confirm the superiority of the fusion network over traditional algorithms: the average PSNR is increased by more than 3.6 dB, and the average MAE is reduced to less than 0.07. Experiments on polarimetric decomposition and polarimetric signatures show that the network preserves polarimetric information well.
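As an illustration of the cross-attention fusion idea described in the abstract, the following is a minimal PyTorch sketch in which LR-PolSAR features (assumed already upsampled to the HR grid) query HR-SinSAR features; all module names, channel sizes and the overall layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal, illustrative cross-attention fusion between LR-PolSAR and HR-SinSAR
# feature maps. Names, shapes and structure are assumptions for illustration.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # queries come from the (upsampled) PolSAR branch,
        # keys/values come from the high-resolution SinSAR branch
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, polsar_feat, sinsar_feat):
        # both inputs: (b, c, h, w) with identical spatial size
        b, c, h, w = polsar_feat.shape
        q = self.to_q(polsar_feat).flatten(2).transpose(1, 2)   # (b, hw, c)
        k = self.to_k(sinsar_feat).flatten(2)                   # (b, c, hw)
        v = self.to_v(sinsar_feat).flatten(2).transpose(1, 2)   # (b, hw, c)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)          # (b, hw, hw)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.out(fused) + polsar_feat                    # residual fusion
```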



Related research


Meiyu Huang, Yao Xu, Lixin Qian (2021)
Deep learning techniques have made an increasing impact on the field of remote sensing. However, deep neural network-based fusion of multimodal data from different remote sensors with heterogeneous characteristics has not been fully explored, due to the lack of large amounts of perfectly aligned multi-sensor image data with diverse scenes at high resolution, especially for synthetic aperture radar (SAR) data and optical imagery. To promote the development of deep learning-based SAR-optical fusion approaches, we release the QXS-SAROPT dataset, which contains 20,000 pairs of SAR-optical image patches. We obtain the SAR patches from SAR satellite GaoFen-3 images and the optical patches from Google Earth images. These images cover three port cities: San Diego, Shanghai and Qingdao. Here, we present a detailed introduction to the construction of the dataset and show two representative applications, namely SAR-optical image matching and SAR ship detection boosted by cross-modal information from optical images. As a large open SAR-optical dataset with multiple high-resolution scenes, we believe QXS-SAROPT will be of value for further research in deep learning-based SAR-optical data fusion.
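For readers who want to experiment with such paired patches, a minimal PyTorch-style loader might look as follows; the directory layout and file naming here are hypothetical and are not the released QXS-SAROPT format.

```python
# Illustrative PyTorch Dataset for paired SAR-optical patches.
# Directory layout and naming are assumptions, not the dataset specification.
import os
from PIL import Image
from torch.utils.data import Dataset

class PairedSarOpticalDataset(Dataset):
    def __init__(self, root, transform=None):
        self.sar_dir = os.path.join(root, "sar")       # assumed subfolder
        self.opt_dir = os.path.join(root, "optical")   # assumed subfolder
        self.names = sorted(os.listdir(self.sar_dir))  # same names in both
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        sar = Image.open(os.path.join(self.sar_dir, name))
        opt = Image.open(os.path.join(self.opt_dir, name))
        if self.transform:
            sar, opt = self.transform(sar), self.transform(opt)
        return sar, opt
```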
Object retrieval and reconstruction from very high resolution (VHR) synthetic aperture radar (SAR) images are of great importance for urban SAR applications, yet highly challenging owing to the complexity of SAR data. This paper addresses the issue of individual building segmentation from a single VHR SAR image in large-scale urban areas. To achieve this, we introduce building footprints from GIS data as complementary information and propose a novel conditional GIS-aware network (CG-Net). The proposed model learns multi-level visual features and employs the building footprints to normalize the features for predicting building masks in the SAR image. We validate our method using a high-resolution spotlight TerraSAR-X image collected over Berlin. Experimental results show that the proposed CG-Net effectively brings improvements with various backbones. We further compare two representations of building footprints, namely complete building footprints and sensor-visible footprint segments, and conclude that the former leads to better segmentation results. Moreover, we investigate the impact of inaccurate GIS data on CG-Net and show that it is robust against positioning errors in GIS data. In addition, we propose an approach for generating building ground truth from an accurate digital elevation model (DEM), which can be used to build large-scale SAR image datasets. The segmentation results can be applied to reconstruct 3D building models at level-of-detail (LoD) 1, as demonstrated in our experiments.
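The footprint-conditioned feature normalization described above could, for illustration, resemble a spatially adaptive normalization layer driven by a rasterized footprint mask; the sketch below is an assumption-laden illustration, not the CG-Net implementation.

```python
# Illustrative footprint-conditioned normalization in the spirit of a
# "GIS-aware" conditioning step. Layer names and the exact form of the
# conditioning are assumptions, not the CG-Net code.
import torch
import torch.nn as nn

class FootprintConditionedNorm(nn.Module):
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # scale/shift maps predicted from the rasterized footprint mask
        self.shared = nn.Sequential(nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, features, footprint_mask):
        # footprint_mask: (b, 1, H, W) binary raster of GIS building footprints,
        # resized to the spatial size of `features`
        h = self.shared(footprint_mask)
        return self.norm(features) * (1 + self.gamma(h)) + self.beta(h)
```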
The effective combination of the complementary information provided by the huge amount of unlabeled multi-sensor data (e.g., synthetic aperture radar (SAR) and optical images) is a critical topic in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multi-view data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a new self-supervised approach to SAR-optical data fusion that learns disentangled pixel-wise feature representations directly by taking advantage of both a multi-view contrastive loss and the bootstrap your own latent (BYOL) method. The two key contributions of the proposed approach are a multi-view contrastive loss to encode the multimodal images and a shift operation to reconstruct the learned representations for each pixel by building local consistency between different augmented views. In the experiments, we first verify the effectiveness of the multi-view contrastive loss and BYOL for self-supervised learning on SAR-optical fusion using an image-level classification task. We then validate the proposed approach on a land-cover mapping task by training it with unlabeled SAR-optical image pairs and using labeled data pairs to evaluate the discriminative capability of the learned features in downstream tasks. Results show that the proposed approach extracts features that yield higher accuracy and reduce the dimension of the representations compared with the image-level contrastive learning method.
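As a concrete reference for the multi-view contrastive part, the following is a textbook InfoNCE-style loss between paired SAR and optical embeddings; the pixel-wise variant, the BYOL branch and the shift operation of the paper are omitted, and all names are illustrative.

```python
# Illustrative image-level InfoNCE loss between paired SAR and optical
# embeddings of the same scenes; written from the general definition,
# not from the paper's implementation.
import torch
import torch.nn.functional as F

def multiview_info_nce(sar_emb, opt_emb, temperature=0.1):
    # sar_emb, opt_emb: (batch, dim) embeddings of paired SAR/optical views
    sar_emb = F.normalize(sar_emb, dim=1)
    opt_emb = F.normalize(opt_emb, dim=1)
    logits = sar_emb @ opt_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(sar_emb.size(0), device=sar_emb.device)
    # matched SAR/optical pairs are positives, all other pairings are negatives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```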
Xiang Chen, Yufeng Huang, Lei Xu (2021)
Rain streaks cause serious blurring and visual quality degradation and often vary in size, direction and density. Current CNN-based methods achieve encouraging performance but remain limited in depicting rain characteristics and recovering image details in poor-visibility environments. To address these issues, we present a Multi-scale Hourglass Hierarchical Fusion Network (MH2F-Net), trained in an end-to-end manner, to capture rain streak features exactly through multi-scale extraction, hierarchical distillation and information aggregation. To better extract features, a novel Multi-scale Hourglass Extraction Block (MHEB) is proposed to obtain local and global features across different scales through down- and up-sampling. A Hierarchical Attentive Distillation Block (HADB) then employs dual attention feature responses to adaptively recalibrate the hierarchical features and eliminate redundant ones. Further, we introduce a Residual Projected Feature Fusion (RPFF) strategy to progressively discriminate and aggregate features instead of directly concatenating or adding them. Extensive experiments on both synthetic and real rainy datasets demonstrate the effectiveness of the designed MH2F-Net in comparison with recent state-of-the-art deraining algorithms. Our source code will be available on GitHub: https://github.com/cxtalk/MH2F-Net.
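For illustration, a multi-scale hourglass-style block that aggregates local (full-resolution) and global (downsampled) context could be sketched as follows; the structure and channel counts are assumptions, not the MH2F-Net code.

```python
# Rough sketch of a multi-scale "hourglass" feature block in the spirit of the
# MHEB described above. All design details are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HourglassBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.enc_full = nn.Conv2d(channels, channels, 3, padding=1)
        self.enc_half = nn.Conv2d(channels, channels, 3, padding=1)
        self.dec = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x):
        full = F.relu(self.enc_full(x))                  # full-resolution (local) path
        half = F.avg_pool2d(x, 2)                        # downsampled (global) path
        half = F.relu(self.enc_half(half))
        half = F.interpolate(half, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)        # back up to full resolution
        return self.dec(torch.cat([full, half], dim=1)) + x   # residual aggregation
```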
Yuanxin Ye, Chao Yang, Bai Zhu (2020)
Co-registering the Sentinel-1 SAR and Sentinel-2 optical data of the European Space Agency (ESA) is of great importance for many remote sensing applications. However, we find that there are evident misregistration shifts between Sentinel-1 SAR and Sentinel-2 optical images that are directly downloaded from the official website. To address this, this paper presents a fast and effective registration method for the two types of images. In the proposed method, a block-based scheme is first designed to extract evenly distributed interest points. Then the correspondences are detected using the similarity of structural features between the SAR and optical images, where three-dimensional (3D) phase correlation (PC) is used as the similarity measure to accelerate image matching. Finally, the obtained correspondences are employed to measure the misregistration shifts between the images. Moreover, to eliminate the misregistration, we use representative geometric transformation models such as polynomial models, projective models, and rational function models for the co-registration of the two types of images, and compare and analyze their registration accuracy under different numbers of control points and different terrains. Six pairs of Sentinel-1 SAR L1 and Sentinel-2 optical L1C images covering three different terrains are tested in our experiments. Experimental results show that the proposed method achieves precise correspondences between the images, and the 3rd-order polynomial achieves the most satisfactory registration results. Its registration accuracy is less than 1.0 pixel (at 10 m pixel spacing) for flat areas, about 1.5 pixels for hilly areas, and between 1.7 and 2.3 pixels for mountainous areas, which significantly improves the co-registration accuracy of the Sentinel-1 SAR and Sentinel-2 optical images.
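As background for the phase-correlation matching step, the following NumPy sketch shows the classical 2D phase correlation that estimates the integer translation between two same-sized patches; the paper's 3D variant applied to structural features is not reproduced here.

```python
# Textbook 2-D phase correlation between two same-sized patches; the classical
# building block behind phase-correlation matching, not the paper's 3D variant.
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    # estimate the integer (row, col) translation relating patch_b to patch_a
    fa = np.fft.fft2(patch_a)
    fb = np.fft.fft2(patch_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # wrap shifts larger than half the patch size into negative offsets
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, patch_a.shape)]
    return tuple(shift)
```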