
Change Detection in Synthetic Aperture Radar Images Using a Dual-Domain Network

Published by: Feng Gao
Publication date: 2021
Language: English





Change detection from synthetic aperture radar (SAR) imagery is a critical yet challenging task. Existing methods focus mainly on feature extraction in the spatial domain, and little attention has been paid to the frequency domain. Furthermore, patch-wise feature analysis may introduce noisy features from the marginal region of each patch. To tackle these two challenges, we propose a Dual-Domain Network. Specifically, we take features from the discrete cosine transform (DCT) domain into consideration, and the reshaped DCT coefficients are integrated into the proposed model as a frequency-domain branch. Feature representations from both the frequency and spatial domains are exploited to alleviate speckle noise. In addition, we propose a multi-region convolution module that emphasizes the central region of each patch, so that contextual information and central-region features are modeled adaptively. Experimental results on three SAR datasets demonstrate the effectiveness of the proposed model. Our code is available at https://github.com/summitgao/SAR_CD_DDNet.
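The frequency-domain branch rests on the 2D DCT coefficients of each patch. As a rough sketch of that preprocessing step only (the reshaping convention and network input layout here are assumptions, not the paper's exact design), an orthonormal DCT-II can be applied with plain numpy:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row has its own normalisation
    return C

def patch_dct_features(patch):
    """2D DCT of a square SAR patch, reshaped to a coefficient vector
    that a frequency-domain branch could consume."""
    n = patch.shape[0]
    C = dct_matrix(n)
    coeffs = C @ patch @ C.T  # separable 2D DCT
    return coeffs.reshape(-1)

patch = np.random.default_rng(0).random((8, 8))
feat = patch_dct_features(patch)
```

For a constant patch, only the DC coefficient is nonzero, which is the usual sanity check for a correct DCT basis.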


Read also

Much research has been carried out on change detection using temporal SAR images. In this paper, an algorithm for change detection using SAR videos is proposed. SAR videos pose several challenges: a high level of speckle noise, rotation of the SAR image frames around a particular axis due to the circular movement of the airborne vehicle, and non-uniform backscattering of SAR pulses. Hence, conventional change detection algorithms used for optical videos and temporal SAR images cannot be applied directly to SAR videos. We propose an algorithm that combines optical flow calculation using the Lucas-Kanade (LK) method with blob detection. The developed method follows a four-step approach: image filtering and enhancement, applying the LK method, blob analysis, and combining the LK method with blob analysis. The performance of the developed approach was tested on SAR videos available on the Sandia National Laboratories website and on SAR videos generated by a SAR simulator.
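At its core, the LK step solves a small least-squares system over a window around each tracked pixel. The following numpy-only sketch shows that core solve under simplifying assumptions (single pyramid level, fixed window; the paper's full pipeline also includes the filtering and blob-analysis stages):

```python
import numpy as np

def lucas_kanade(frame1, frame2, y, x, win=7):
    """Estimate the (dy, dx) optical flow at one pixel by the classic
    LK least-squares solve over a win x win window."""
    Iy, Ix = np.gradient(frame1)      # spatial gradients
    It = frame2 - frame1              # temporal gradient
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # brightness constancy: Iy*dy + Ix*dx = -It, solved in least squares
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dy, dx)

frame1 = np.tile(np.arange(20.0), (20, 1))  # linear ramp along x
frame2 = frame1 - 1.0                       # same scene shifted one pixel right
dy, dx = lucas_kanade(frame1, frame2, 10, 10)
```

For the synthetic ramp above, the recovered flow is one pixel in x, matching the imposed shift.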
Data and data sources have become increasingly essential in recent decades. Scientists and researchers require more data to deploy AI approaches as the field continues to improve. In recent years, rapid technological advancements have had a significant impact on human existence. One major field for collecting data is satellite technology. With the fast development of various satellite sensors, synthetic aperture radar (SAR) images have become an important source of data for a variety of research subjects, including environmental studies, urban studies, coastal extraction, water sources, etc. Both change detection and coastline detection are achieved using SAR images. However, speckle noise is a major problem in SAR imaging, and several solutions have been offered to address it. One solution is to apply spatial fuzzy clustering to SAR images. Another solution is to separate the speckle. This study utilises the spatial function to overcome speckle noise and cluster SAR images with high accuracy. The spatial function is proposed in this work because it captures the likelihood of data falling into one cluster; when the spatial function is employed to cluster data in fuzzy logic, the clustering outcomes improve. The proposed clustering technique is us
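The spatial-function idea can be sketched as follows: after each fuzzy c-means membership update, every pixel's membership is re-weighted by the summed memberships of its neighbourhood, which suppresses isolated speckle-driven assignments. This numpy sketch assumes a 3x3 neighbourhood, intensity-only features, and quantile initialisation (the paper's exact formulation and parameters are not reproduced here):

```python
import numpy as np

def spatial_fcm(img, n_clusters=2, m=2.0, iters=20):
    """Fuzzy c-means on pixel intensities with a spatial function:
    memberships are re-weighted by their 3x3 neighbourhood mass."""
    h, w = img.shape
    x = img.ravel()
    # initialise cluster centres at spread-out intensity quantiles
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))      # standard FCM membership update
        u /= u.sum(axis=0)
        # spatial function: membership mass over each 3x3 neighbourhood
        maps = u.reshape(n_clusters, h, w)
        hfun = np.zeros_like(maps)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                hfun += np.roll(np.roll(maps, dy, axis=1), dx, axis=2)
        u = (maps * hfun).reshape(n_clusters, -1)
        u /= u.sum(axis=0)
        um = u ** m                    # centre update with fuzzified weights
        centers = um @ x / um.sum(axis=1)
    return u.argmax(axis=0).reshape(h, w)
```

On a noisy two-region test image, the spatial re-weighting yields near-uniform labels within each region, which is exactly the speckle-robustness the abstract describes.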
We propose a saliency-based, multi-target detection and segmentation framework for multi-aspect, semi-coherent imagery formed from circular-scan, synthetic-aperture sonar (CSAS). Our framework relies on a multi-branch, convolutional encoder-decoder network (MB-CEDN). The encoder portion extracts features from one or more CSAS images of the targets. These features are then split off and fed into multiple decoders that perform pixel-level classification on the extracted features, roughly masking the target in an unsupervised-trained manner and detecting foreground and background pixels in a supervised-trained manner. Each of these target-detection estimates provides a different perspective on what constitutes a target. These opinions are cascaded into a deep-parsing network to model contextual and spatial constraints that help isolate targets better than either estimate alone. We evaluate our framework using real-world CSAS data with five broad target classes. Since we are the first to consider both CSAS target detection and segmentation, we adapt existing image- and video-processing network topologies from the literature for comparative purposes. We show that our framework outperforms supervised deep networks, and greatly outperforms state-of-the-art unsupervised approaches, for diverse target and seafloor types.
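The cascade of decoder "opinions" can be sketched in the abstract: two per-pixel foreground score maps (one from the unsupervised-trained decoder, one from the supervised one) are fused before contextual refinement. This toy numpy fusion is only an illustration of combining the two estimates (MB-CEDN's actual deep-parsing network is far richer and is not reproduced here):

```python
import numpy as np

def fuse_opinions(unsup_scores, sup_scores, thresh=0.5):
    """Fuse two per-pixel foreground probability maps by averaging,
    then threshold into a binary target mask."""
    fused = 0.5 * (unsup_scores + sup_scores)
    return fused > thresh

unsup = np.array([[0.9, 0.2], [0.8, 0.1]])
sup = np.array([[0.7, 0.4], [0.9, 0.2]])
mask = fuse_opinions(unsup, sup)
```

Averaging means a pixel is kept only when the two decoders jointly assign it enough foreground probability, which is the intuition behind cascading both estimates.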
Lei Ding, Haitao Guo, Sicong Liu (2021)
Semantic change detection (SCD) extends the multi-class change detection (MCD) task to provide not only the change locations but also the detailed land-cover/land-use (LCLU) categories before and after the observation intervals. This fine-grained semantic change information is very useful in many applications. Recent studies indicate that SCD can be modeled through a triple-branch Convolutional Neural Network (CNN), which contains two temporal branches and a change branch. However, in this architecture, the communications between the temporal branches and the change branch are insufficient. To overcome the limitations of existing methods, we propose a novel CNN architecture for SCD in which the semantic temporal features are merged in a deep CD unit. Furthermore, we elaborate on this architecture to reason about the bi-temporal semantic correlations. The resulting Bi-temporal Semantic Reasoning Network (Bi-SRNet) contains two types of semantic reasoning blocks to reason about both single-temporal and cross-temporal semantic correlations, as well as a novel loss function to improve the semantic consistency of change detection results. Experimental results on a benchmark dataset show that the proposed architecture obtains significant accuracy improvements over existing approaches, while the added designs in the Bi-SRNet further improve the segmentation of both semantic categories and the changed areas. The code is accessible at: github.com/ggsDing/Bi-SRNet.
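The SCD task definition above can be made concrete: given LCLU label maps before and after the observation interval, the outputs are a binary change mask plus the "from" and "to" categories at the changed pixels. A minimal numpy illustration of that output format (an illustration of the task, not of Bi-SRNet; label 0 is assumed here to denote "no change"):

```python
import numpy as np

def scd_from_semantic_maps(sem_t1, sem_t2):
    """Derive SCD outputs from two LCLU label maps: the MCD change
    mask, plus the 'from' and 'to' categories at changed pixels."""
    change = sem_t1 != sem_t2
    frm = np.where(change, sem_t1, 0)  # category before the interval
    to = np.where(change, sem_t2, 0)   # category after the interval
    return change, frm, to

t1 = np.array([[1, 2], [3, 3]])
t2 = np.array([[1, 4], [3, 2]])
change, frm, to = scd_from_semantic_maps(t1, t2)
```

This is exactly the extra information SCD provides over MCD: not just *where* change happened, but *what changed into what*.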
Although deep learning has achieved great success in image classification tasks, its performance is subject to the quantity and quality of training samples. For classification of polarimetric synthetic aperture radar (PolSAR) images, it is nearly impossible to annotate the images from visual interpretation. Therefore, it is urgent for remote sensing scientists to develop new techniques for PolSAR image classification under the condition of very few training samples. In this letter, we take advantage of active learning and propose active ensemble deep learning (AEDL) for PolSAR image classification. We first show that only 35% of the predicted labels of a deep learning model's snapshots near its convergence were exactly the same; the disagreement between snapshots is non-negligible. From the perspective of multiview learning, the snapshots together serve as a good committee for evaluating the importance of unlabeled instances. Using the snapshot committee to assess the informativeness of unlabeled data, the proposed AEDL achieved better performance on two real PolSAR images than standard active learning strategies. It achieved the same classification accuracy with only 86% and 55% of the training samples compared with breaking-ties active learning and random selection, respectively, for the Flevoland dataset.
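The snapshot committee's disagreement can be quantified in several standard ways; vote entropy is one common choice. This sketch scores each unlabeled sample by the entropy of the snapshots' label votes (an assumption for illustration — the letter's exact informativeness criterion may differ):

```python
import numpy as np

def vote_entropy(snapshot_preds):
    """Committee disagreement for active learning: given predicted
    labels from several snapshots (shape [n_snapshots, n_samples]),
    return the vote-entropy informativeness of each sample."""
    snapshot_preds = np.asarray(snapshot_preds)
    s, _ = snapshot_preds.shape
    classes = np.unique(snapshot_preds)
    # per-sample vote counts for each class, turned into vote fractions
    votes = np.stack([(snapshot_preds == c).sum(axis=0) for c in classes])
    p = votes / s
    # entropy of the vote distribution (clip avoids log(0))
    return -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=0)

preds = [[0, 0, 1], [0, 1, 1], [0, 1, 0]]
ent = vote_entropy(preds)
```

Samples on which the snapshots agree unanimously score zero, while contested samples score higher; an active learner would query labels for the highest-scoring samples first.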