
Hyperspectral unmixing with spectral variability using adaptive bundles and double sparsity

Published by: Nicolas Dobigeon
Publication date: 2018
Language: English





Spectral variability is one of the major issues when conducting hyperspectral unmixing. Within a given image composed of some elementary materials (herein referred to as endmember classes), the spectral signature characterizing these classes may vary spatially due to intrinsic component fluctuations or external factors (e.g., illumination). These redundant multiple endmember spectra within each class adversely affect the performance of unmixing methods. This paper proposes a mixing model that explicitly incorporates a hierarchical structure of the redundant multiple spectra representing each class. The proposed method is designed to promote sparsity in the selection of both the spectra and the classes within each pixel. The resulting unmixing algorithm is able to adaptively recover several bundles of endmember spectra associated with each class and to robustly estimate the abundances. In addition, its flexibility allows a variable number of classes to be present within each pixel of the hyperspectral image to be unmixed. The proposed method is compared with other state-of-the-art unmixing methods that incorporate sparsity, using both simulated and real hyperspectral data. The results show that the proposed method can successfully determine the variable number of classes present within each pixel and estimate the corresponding class abundances.
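The abstract does not spell out the optimization, but the double-sparsity idea (few active classes per pixel, and few spectra within each active class) can be illustrated with a small proximal-gradient sketch. Everything below is an assumption for illustration: the function name, the sparse-group penalty, and the parameter defaults are ours, not the authors'.

```python
import numpy as np

def unmix_bundles(y, B, groups, lam1=0.01, lam2=0.1, n_iter=500):
    """Hypothetical per-pixel abundance estimation from a bundle dictionary.

    y      : (n_bands,) observed pixel spectrum
    B      : (n_bands, n_spectra) dictionary stacking all bundle spectra
    groups : list of index arrays, one per endmember class
    lam1   : within-class (spectrum-level) sparsity weight
    lam2   : between-class (class-level) sparsity weight
    """
    a = np.zeros(B.shape[1])
    step = 1.0 / np.linalg.norm(B, 2) ** 2                  # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        grad = B.T @ (B @ a - y)                            # gradient of the data-fit term
        a = np.maximum(a - step * grad - lam1 * step, 0.0)  # l1 prox + nonnegativity
        for g in groups:                                    # group soft-thresholding can
            ng = np.linalg.norm(a[g])                       # zero out whole classes,
            if ng > 0:                                      # giving the second sparsity level
                a[g] *= max(0.0, 1.0 - lam2 * step / ng)
    return a
```

Per-class abundances then follow by summing the entries of `a` over each group; the paper's actual algorithm additionally adapts the bundles themselves, which this sketch omits.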




Read also

Lingchen Zhu, Entao Liu, 2017
Seismic data quality is vital to geophysical applications, so methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the dataset without introducing pseudo-Gibbs artifacts when compared to other directional multiscale transform methods such as curvelets.
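For intuition, the role of the masking operator can be shown in a few lines: the sparse code is fit only on the observed samples, and the synthesis step fills the gaps. This is a minimal sketch of that idea alone, not the paper's double-sparsity dictionary learning or its weighted low-rank dictionary update; all names and defaults are hypothetical.

```python
import numpy as np

def masked_sparse_recovery(y, mask, D, lam=0.05, n_iter=300):
    """Recover a patch with missing traces via l1 sparse coding (ISTA).

    y    : (n,) observed patch, zeros at missing samples
    mask : (n,) binary mask, 1 = observed, 0 = missing
    D    : (n, k) dictionary (fixed or learned elsewhere)
    """
    x = np.zeros(D.shape[1])
    M = mask.astype(float)
    step = 1.0 / np.linalg.norm(M[:, None] * D, 2) ** 2     # 1/Lipschitz of masked fit
    for _ in range(n_iter):
        r = M * (D @ x - y)                                 # residual on observed samples only
        x = x - step * (D.T @ r)                            # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return D @ x                                            # full synthesis fills the gaps
```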
When no arterial input function is available, quantification of dynamic PET images requires a previous step devoted to the extraction of a reference time-activity curve (TAC). Factor analysis is often applied for this purpose. This paper introduces a novel approach that conducts a new kind of nonlinear factor analysis relying on a compartment model, and computes the kinetic parameters of specific binding tissues jointly. To this end, it capitalizes on data-driven parametric imaging methods to provide a physical description of the underlying PET data, directly relating the specific binding with the kinetics of the non-specific binding in the corresponding tissues. This characterization is introduced into the factor analysis formulation to yield a novel nonlinear unmixing model designed for PET image analysis. This model also explicitly introduces global kinetic parameters that allow for a direct estimation of the binding potential with respect to the free fractions in each non-specific binding tissue. The performance of the method is evaluated on synthetic and real data to demonstrate its potential interest.
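As a rough illustration of the compartment-model ingredient only (not the paper's nonlinear unmixing model), a simplified-reference-tissue-style relation links a specific-binding TAC to the reference TAC through kinetic parameters R1, k2 and the binding potential BP; the function name and discretization below are our assumptions.

```python
import numpy as np

def srtm_tac(c_ref, t, r1, k2, bp):
    """Specific-binding TAC from a reference TAC via an SRTM-style relation:
    C_T(t) = R1*C_R(t) + (k2 - R1*k2a) * [C_R * exp(-k2a*t)](t), k2a = k2/(1+BP).
    Assumes a uniform time grid t."""
    dt = t[1] - t[0]
    k2a = k2 / (1.0 + bp)                                # apparent efflux rate
    kernel = np.exp(-k2a * t)
    conv = np.convolve(c_ref, kernel)[: len(t)] * dt     # causal convolution on the grid
    return r1 * c_ref + (k2 - r1 * k2a) * conv
```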
Over the past decades, enormous efforts have been made to improve the performance of linear or nonlinear mixing models for hyperspectral unmixing, yet their ability to simultaneously generalize across various spectral variabilities and extract physically meaningful endmembers remains limited, owing to their poor data-fitting and reconstruction ability and their sensitivity to spectral variability. Inspired by the powerful learning ability of deep learning, we attempt to develop a general deep learning approach for hyperspectral unmixing that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Going beyond a standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the unmixing network, by sharing network parameters and adding spectrally meaningful constraints (e.g., non-negativity and sum-to-one), towards a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixel-wise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial-spectral unmixing. Experimental results conducted on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The codes will be available from the website: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net.
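The actual EGU-Net architecture is in the linked repository; purely to illustrate the weight-sharing idea between the pure-pixel stream and the unmixing stream, here is a toy PyTorch sketch in which both streams pass through the same network (layer sizes and names are our assumptions, not the authors' design).

```python
import torch
import torch.nn as nn

class TwoStreamUnmixer(nn.Module):
    """Toy two-stream unmixing net: both streams share all weights; abundances
    are made nonnegative and sum-to-one via softmax, and a bias-free linear
    decoder holds the endmember signatures."""
    def __init__(self, n_bands, n_classes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )
        self.decoder = nn.Linear(n_classes, n_bands, bias=False)  # endmember matrix

    def forward(self, x):
        a = torch.softmax(self.encoder(x), dim=-1)   # non-negativity + sum-to-one
        return self.decoder(a), a

net = TwoStreamUnmixer(n_bands=200, n_classes=5)
pure, mixed = torch.rand(32, 200), torch.rand(64, 200)
# one stream sees (nearly) pure pixels, the other mixed pixels; because the
# weights are shared, supervision from pure pixels guides the unmixing stream
recon_pure, _ = net(pure)
recon_mixed, abund = net(mixed)
loss = (nn.functional.mse_loss(recon_pure, pure)
        + nn.functional.mse_loss(recon_mixed, mixed))
loss.backward()
```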
The direct detection of exoplanets with high-contrast instruments can be boosted with high spectral resolution. For integral field spectrographs yielding hyperspectral data, this means that the field of view consists of diffracted starlight spectra and a spatially localized planet. Analysis usually relies on cross-correlation with theoretical spectra. In a purely blind-search context, this supervised strategy can be biased by model mismatch and/or be computationally inefficient. Using an approach inspired by the remote-sensing community, we aim to propose an alternative to cross-correlation that is fully data-driven and decomposes the data into a set of individual spectra and their corresponding spatial distributions. This strategy is called spectral unmixing. We used an orthogonal subspace projection to identify the most distinct spectra in the field of view. Their spatial distribution maps were then obtained by inverting the data. These spectra were then used to break the original hyperspectral images into their corresponding spatial distribution maps via non-negative least squares. The performance of our method was evaluated and compared with cross-correlation using simulated hyperspectral data at medium resolution from the ELT/HARMONI integral field spectrograph. We show that spectral unmixing effectively leads to a planet detection solely based on spectral dissimilarities, at a significantly reduced computational cost. The extracted spectrum holds significant signatures of the planet while not being perfectly separated from residual starlight. The sensitivity of the supervised cross-correlation is three to four times higher than that of unsupervised spectral unmixing; this gap is biased toward the former because the injected and correlated spectra match perfectly. The algorithm was furthermore vetted on real data of the beta Pictoris system obtained with VLT/SINFONI.
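Both building blocks named in this abstract, orthogonal subspace projection to pick the most distinct spectra and non-negative least squares to invert the data, are standard and easy to sketch. The snippet below is a generic ATGP-style illustration under our own naming, not the authors' code.

```python
import numpy as np
from scipy.optimize import nnls

def osp_endmembers(X, k):
    """Pick the k most mutually distinct spectra via orthogonal subspace
    projection. X is (n_pixels, n_bands)."""
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]    # start from brightest spectrum
    for _ in range(k - 1):
        E = X[idx].T                                     # (n_bands, |idx|)
        P = np.eye(X.shape[1]) - E @ np.linalg.pinv(E)   # projector onto span(E)-perp
        resid = X @ P                                    # what the chosen spectra miss
        idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return X[idx]                                        # (k, n_bands) extracted spectra

def unmix(X, E):
    """Per-pixel non-negative least squares against the extracted spectra,
    yielding the spatial distribution maps."""
    return np.array([nnls(E.T, x)[0] for x in X])        # (n_pixels, k)
```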
To analyze dynamic positron emission tomography (PET) images, various generic multivariate data analysis techniques have been considered in the literature, such as principal component analysis (PCA), independent component analysis (ICA), factor analysis and nonnegative matrix factorization (NMF). Nevertheless, these conventional approaches neglect any possible nonlinear variations in the time activity curves describing the kinetic behavior of tissues with specific binding, which limits their ability to recover a reliable, understandable and interpretable description of the data. This paper proposes an alternative analysis paradigm that accounts for spatial fluctuations in the exchange rate of the tracer between a free compartment and a specifically bound ligand compartment. The method relies on the concept of linear unmixing, usually applied on the hyperspectral domain, which combines NMF with a sum-to-one constraint that ensures an exhaustive description of the mixtures. The spatial variability of the signature corresponding to the specific binding tissue is explicitly modeled through a perturbed component. The performance of the method is assessed on both synthetic and real data and is shown to compete favorably when compared to other conventional analysis methods. The proposed method improved both factor estimation and proportions extraction for specific binding. Modeling the variability of the specific binding factor has a strong potential impact for dynamic PET image analysis.
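The sum-to-one-constrained proportion estimation mentioned above is commonly implemented with the classic row-augmentation trick for constrained least squares; a minimal sketch follows (our naming, and without the paper's perturbed specific-binding component).

```python
import numpy as np
from scipy.optimize import nnls

def unmix_sum_to_one(Y, F, delta=10.0):
    """Proportions for each voxel TAC in Y against factor TACs F, with the
    sum-to-one constraint softly enforced by appending a constant row.

    Y     : (n_time, n_voxels) dynamic PET data, one TAC per column
    F     : (n_time, k) factor TACs
    delta : weight of the sum-to-one row (larger = harder constraint)
    """
    n_t, k = F.shape
    F_aug = np.vstack([F, delta * np.ones((1, k))])      # append constraint row
    out = []
    for y in Y.T:
        y_aug = np.append(y, delta)                      # target for the new row
        out.append(nnls(F_aug, y_aug)[0])                # nonnegative proportions
    return np.array(out).T                               # (k, n_voxels)
```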