Seismic data quality is vital to geophysical applications, so methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising based on double-sparsity dictionary learning, extending previous work that addressed denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary update as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries, which lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that the new method preserves more subtle features in the dataset without introducing pseudo-Gibbs artifacts, when compared with directional multiscale transform methods such as curvelets.
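The role of the masking operator can be illustrated with a minimal, self-contained sketch. A fixed Fourier transform stands in for the learned double-sparsity dictionary (the actual method learns its dictionary from data patches), and a POCS-style loop alternates sparsity-promoting thresholding with re-insertion of the observed traces through the mask. All data, thresholds, and iteration counts below are synthetic and illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "seismic" section: a smooth dipping event plus noise (stand-in data).
n = 64
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t[:, None] + 3 * t[None, :])
noisy = clean + 0.1 * rng.standard_normal((n, n))

# Masking operator M: 1 on recorded traces (columns), 0 on missing ones.
mask = np.ones((n, n))
missing = rng.choice(n, size=n // 4, replace=False)
mask[:, missing] = 0.0
observed = mask * noisy

# POCS-style loop: threshold in the transform domain (sparsity), then
# re-insert the observed samples (data consistency through M).
x = observed.copy()
n_iter = 50
for k in range(n_iter):
    coeffs = np.fft.fft2(x, norm="ortho")
    thr = 3.0 * (1.0 - k / n_iter)            # decreasing threshold schedule
    coeffs[np.abs(coeffs) < thr] = 0.0        # hard thresholding
    x = np.fft.ifft2(coeffs, norm="ortho").real
    x = mask * observed + (1.0 - mask) * x    # apply the masking operator

# Reconstruction error on the missing traces only.
err_before = np.linalg.norm((clean - observed)[:, missing])
err_after = np.linalg.norm((clean - x)[:, missing])
```

The key line is the final one inside the loop: the masking operator keeps the observed traces fixed while the thresholded transform fills in the gaps.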
We report an empirical determination of the probability density functions P(r) of the number r of earthquakes in finite space-time windows for the California catalog, over fixed spatial boxes of 5 x 5 km^2 and time intervals dt = 1, 10, 100 and 1000 days. We find a stable power law tail P(r) ~ 1/r^{1+mu} with exponent mu approx 1.6 for all time intervals. These observations are explained by a simple stochastic branching process previously studied by many authors, the ETAS (epidemic-type aftershock sequence) model, which assumes that each earthquake can trigger other earthquakes (``aftershocks''). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. We develop the full theory in terms of generating functions for describing the space-time organization of earthquake sequences and develop several approximations to solve the equations. The calibration of the theory to the empirical observations shows that it is essential to augment the ETAS model by taking into account the pre-existing frozen heterogeneity of spontaneous earthquake sources. This seems natural in view of the complex multi-scale nature of fault networks, on which earthquakes nucleate. Our extended theory accounts satisfactorily for the empirical observations. In particular, the adjustable parameters are determined by fitting the largest time window, dt = 1000 days, and are then kept frozen in the formulas for the other time scales, with very good agreement with the empirical data.
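The power-law tail P(r) ~ 1/r^{1+mu} can be illustrated numerically. The sketch below draws synthetic counts from a pure Pareto law with the abstract's exponent mu = 1.6 and recovers the exponent with the standard Hill estimator; it uses no actual catalog data and is not the authors' calibration procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 1.6                      # tail exponent reported for California
n = 200_000

# Inverse-CDF sampling of a Pareto law with P(R > x) = x^{-mu},
# i.e. density ~ 1/x^{1+mu}, as a stand-in for the event counts r.
u = 1.0 - rng.random(n)       # uniform on (0, 1]
r = u ** (-1.0 / mu)

# Hill estimator of the tail exponent from the k largest observations.
k = 2000
srt = np.sort(r)
threshold = srt[-k - 1]       # order statistic just below the tail sample
mu_hat = 1.0 / np.mean(np.log(srt[-k:] / threshold))
```

With a stable tail, mu_hat stays close to 1.6 regardless of the choice of k, which mirrors the paper's observation that the exponent is the same for all time windows.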
We report on a novel stochastic analysis of seismic time series for the Earth's vertical velocity, using methods originally developed for complex hierarchical systems, in particular for turbulent flows. Analysis of the fluctuations of the detrended increments of the series reveals a pronounced change in the shape of the probability density function (PDF) of the increments. Before and close to an earthquake, both the shape of the PDF and the long-range correlation in the increments manifest significant changes. For a moderate or large earthquake, the typical time at which the PDF undergoes the transition from a Gaussian to a non-Gaussian form is about 5-10 hours. Thus, the transition represents a new precursor for detecting such earthquakes.
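A simple diagnostic of a Gaussian-to-non-Gaussian transition in increment PDFs is the excess kurtosis, which vanishes for a Gaussian and is positive for fat-tailed distributions. The sketch below contrasts Gaussian noise (a stand-in for the quiet period) with a heavy-tailed Student-t series (a stand-in for pre-event intermittency); it is purely illustrative and not the paper's detrending procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3; zero for a Gaussian PDF."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# Stand-ins: Gaussian noise for a quiet period, and a heavy-tailed
# Student-t series (df = 5, excess kurtosis 6) mimicking pre-event data.
quiet = rng.standard_normal(100_000)
pre_event = rng.standard_t(df=5, size=100_000)

kq = excess_kurtosis(np.diff(quiet))      # near 0: Gaussian-shaped PDF
kp = excess_kurtosis(np.diff(pre_event))  # clearly positive: fat tails
```

Tracking such a statistic in sliding windows is one way to time-stamp when the increment PDF departs from Gaussian shape.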
Spectral variability is one of the major issues when conducting hyperspectral unmixing. Within a given image composed of some elementary materials (herein referred to as endmember classes), the spectral signature characterizing these classes may vary spatially due to intrinsic component fluctuations or external factors (illumination). These redundant multiple endmember spectra within each class adversely affect the performance of unmixing methods. This paper proposes a mixing model that explicitly incorporates a hierarchical structure of redundant multiple spectra representing each class. The proposed method is designed to promote sparsity in the selection of both spectra and classes within each pixel. The resulting unmixing algorithm is able to adaptively recover several bundles of endmember spectra associated with each class and to robustly estimate abundances. In addition, its flexibility allows a variable number of classes to be present within each pixel of the hyperspectral image to be unmixed. The proposed method is compared with other state-of-the-art unmixing methods that incorporate sparsity, using both simulated and real hyperspectral data. The results show that the proposed method can successfully determine the variable number of classes present within each pixel and estimate the corresponding class abundances.
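The bundle idea can be sketched in a few lines: each class contributes several spectral variants (columns of a dictionary), a pixel is unmixed against all variants with a nonnegative, sparsity-promoting solver, and per-class abundances are obtained by summing over each class's variants. The spectra, the ISTA solver, and all parameters below are illustrative assumptions, not the paper's hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Bundle dictionary: 2 classes x 3 spectral variants (columns) over
# 50 bands; each variant is a small perturbation of its class spectrum.
bands, variants = 50, 3
base = rng.random((bands, 2))
E = np.hstack([base[:, [c]] * (1 + 0.05 * rng.standard_normal((bands, variants)))
               for c in range(2)])            # shape (50, 6)

# A pixel mixing one variant of each class (abundances 0.6 and 0.4).
a_true = np.zeros(6)
a_true[1], a_true[4] = 0.6, 0.4
y = E @ a_true + 0.001 * rng.standard_normal(bands)

# Nonnegative ISTA: sparse selection of bundle spectra within the pixel.
lam = 1e-3
step = 1.0 / np.linalg.norm(E, 2) ** 2        # 1 / Lipschitz constant
a = np.zeros(6)
for _ in range(2000):
    a = a - step * E.T @ (E @ a - y)          # gradient step on the data fit
    a = np.maximum(a - step * lam, 0.0)       # soft threshold + nonnegativity

# Collapse the per-variant abundances to per-class abundances.
class_abund = a.reshape(2, 3).sum(axis=1)
```

Even when the solver spreads weight among near-collinear variants of a class, the summed class abundances remain well determined, which is the quantity of interest for unmixing.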
Electric signals have recently been recorded at the Earth's surface with amplitudes appreciably larger than those hitherto reported. Their entropy in natural time is smaller than that, $S_u$, of a ``uniform'' distribution. The same holds for their entropy upon time reversal. This behavior, as supported by numerical simulations in fBm time series and in an on-off intermittency model, stems from infinitely ranged temporal correlations, and hence these signals are probably Seismic Electric Signals (critical dynamics). The entropy fluctuations are found to increase upon approaching bursting, which is reminiscent of the behavior that identifies individuals at risk of sudden cardiac death when analysing their electrocardiograms.
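The entropy in natural time referred to here is S = <chi ln chi> - <chi> ln<chi>, where the k-th of N events is placed at natural time chi_k = k/N and weighted by its normalized energy p_k. For a uniform distribution this tends to S_u = ln(2)/2 - 1/4 ≈ 0.0966. A minimal implementation (the synthetic energy sequences are illustrative, not recorded signals):

```python
import numpy as np

def natural_time_entropy(Q):
    """S = <chi ln chi> - <chi> ln <chi>, with chi_k = k/N and
    p_k = Q_k / sum(Q) the normalized event energies."""
    Q = np.asarray(Q, dtype=float)
    N = len(Q)
    p = Q / Q.sum()
    chi = np.arange(1, N + 1) / N
    m1 = np.sum(p * chi)
    return float(np.sum(p * chi * np.log(chi)) - m1 * np.log(m1))

# Entropy of a uniform energy sequence: approaches S_u = ln(2)/2 - 1/4.
N = 10_000
S_u = natural_time_entropy(np.ones(N))

# Entropy under time reversal is computed on the reversed sequence.
Q = np.exp(-np.arange(N) / (N / 5.0))     # toy decaying energy sequence
S = natural_time_entropy(Q)
S_rev = natural_time_entropy(Q[::-1])
```

The abstract's criterion compares both S and S_rev of a recorded signal against S_u; the toy decaying sequence above merely shows that the two generally differ.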
Earthquakes can be detected by matching spatial patterns or phase properties in 1-D seismic waves. Current earthquake detection methods, such as waveform correlation and template matching, have difficulty detecting anomalous earthquakes that are not similar to other earthquakes. In recent years, machine-learning techniques for earthquake detection have emerged as a new active research direction. In this paper, we develop a novel earthquake detection method based on dictionary learning. Our detection method first generates rich features via signal processing and statistical methods and then employs feature selection techniques to choose the features that carry the most significant information. Based on these selected features, we build a dictionary for classifying earthquake events from non-earthquake events. To evaluate the performance of our dictionary-based detection method, we test it on a labquake dataset from Penn State University, which contains 3,357,566 time series data points sampled at 400 MHz. A total of 1,000 earthquake events are manually labeled, and the length of these events varies from 74 to 7151 data points. Through comparison with other detection methods, we show that our earthquake detection method, which incorporates feature selection and dictionary learning, achieves an 80.1% prediction accuracy and outperforms baseline methods in earthquake detection, including Template Matching (TM) and Support Vector Machine (SVM).
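The feature-then-classify pipeline can be sketched as follows. Windows of waveform data are reduced to a few statistical features, and classification is done against per-class "atoms"; here the atoms are simply class centroids, a deliberately simplified stand-in for the learned dictionary, and the burst-in-noise data are synthetic, not the Penn State labquake records:

```python
import numpy as np

rng = np.random.default_rng(4)

def features(window):
    """Simple statistical features for one waveform window:
    standard deviation, peak amplitude, and kurtosis."""
    w = np.asarray(window, dtype=float)
    z = (w - w.mean()) / (w.std() + 1e-12)
    return np.array([w.std(), np.abs(w).max(), np.mean(z ** 4)])

def make_window(event):
    """Synthetic window: noise, optionally with a decaying burst."""
    w = rng.standard_normal(256)
    if event:
        i = int(rng.integers(50, 200))
        burst = 6.0 * np.exp(-0.2 * np.arange(30))
        w[i:i + 30] += burst * rng.standard_normal(30)
    return w

# Labeled windows: 200 noise-only, 200 containing an event.
labels = np.array([0] * 200 + [1] * 200)
X = np.array([features(make_window(ev)) for ev in labels])

# Per-class atoms (centroids); classify by nearest atom. A learned
# dictionary with sparse coding would replace this step in practice.
atoms = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None, :] - atoms[None], axis=2), axis=1)
accuracy = float((pred == labels).mean())
```

Note this sketch evaluates on its own training windows for brevity; a real evaluation would hold out a test set, as the paper's comparison against TM and SVM does.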