
Nonlinear regularization techniques for seismic tomography

Published by Ignace Loris
Publication date: 2010
Research field: Physics
Paper language: English





The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, $\ell_2$ penalties are compared to so-called sparsity-promoting $\ell_1$ and $\ell_0$ penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an $\ell_2$ norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer $\ell_1$ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple $\ell_2$ minimization (Tikhonov regularization), which should be avoided. In some of our examples, the $\ell_0$ method produced notable artifacts. In addition, we show how nonlinear $\ell_1$ methods for finding sparse models can be competitive in speed with the widely used $\ell_2$ methods, certainly under noisy conditions, so that there is no need to shun $\ell_1$ penalizations.
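To make the sparsity-promoting approach concrete, here is a minimal sketch of iterative soft thresholding (ISTA) for an $\ell_1$-penalized linear inverse problem of the kind discussed above. The dense operator A, the data d, and the step size are illustrative assumptions, not the paper's implementation (which applies the threshold to Daubechies-4 wavelet coefficients of the model rather than to the model parameters directly).

```python
import numpy as np

def ista_l1(A, d, lam, n_iter=200):
    """Iterative soft thresholding for min_x 0.5*||A x - d||^2 + lam*||x||_1.

    Illustrative sketch only: in the paper the threshold acts on
    Daubechies-4 wavelet coefficients of the model, not on x itself.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = squared largest singular value of A
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - d)        # gradient step on the data misfit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# hypothetical usage with a random matrix standing in for the tomography operator
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.0, -2.0, 1.5]        # a sparse "model"
d = A @ x_true + 0.01 * rng.standard_normal(80)
x_rec = ista_l1(A, d, lam=0.1)
```

Soft thresholding is what distinguishes this scheme from plain $\ell_2$ damping: small coefficients are set exactly to zero, which is why the reconstruction comes out free of small-amplitude noise.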




Read also

This paper introduces novel deep recurrent neural network architectures for Velocity Model Building (VMB), going beyond the Machine Learning-based seismic tomography pioneered by Araya-Polo et al. (2018), which was built with a convolutional, non-recurrent neural network. Our investigation includes the utilization of basic recurrent neural network (RNN) cells, as well as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells. Performance evaluation reveals that salt bodies are consistently predicted more accurately by GRU- and LSTM-based architectures than by non-recurrent architectures. The results take us a step closer to the final goal of a reliable, fully Machine Learning-based tomography from pre-stack data, which, when achieved, will reduce the VMB turnaround from weeks to days.
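As a rough illustration of the recurrent approach, the following is a hedged PyTorch sketch of a GRU network mapping a shot gather to a 1-D velocity profile. The sequence layout (time samples as sequence steps, receivers as features), the layer sizes, and all names (GRUVelocityNet, n_receivers, n_depths) are assumptions for illustration, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class GRUVelocityNet(nn.Module):
    """Sketch: map a shot gather (batch, time, receivers) to a velocity profile."""

    def __init__(self, n_receivers=64, hidden=128, n_depths=100):
        super().__init__()
        # each time sample of the gather is one sequence step,
        # with the receiver dimension serving as the feature vector
        self.gru = nn.GRU(input_size=n_receivers, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_depths)

    def forward(self, gather):            # gather: (batch, time, receivers)
        _, h = self.gru(gather)           # h: (2, batch, hidden), one per direction
        h = torch.cat([h[0], h[1]], dim=-1)
        return self.head(h)               # predicted velocity at n_depths depths
```

The recurrence lets the network accumulate evidence along the time axis of the gather, which is a plausible reason LSTM/GRU variants localize salt bodies better than purely convolutional ones.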
A qualitative comparison of total-variation-like penalties (total variation, the Huber variant of total variation, total generalized variation, ...) is made in the context of global seismic tomography. Both penalized and constrained formulations of seismic recovery problems are treated. A number of simple iterative recovery algorithms applicable to these problems are described, and their convergence speeds are compared numerically in this setting. For the constrained formulation, a new algorithm is proposed and its convergence is proven.
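For reference, one of the penalties being compared can be written down in a few lines. This is a hedged NumPy sketch of the (anisotropic) Huber variant of total variation on a 2-D model; the one-sided finite-difference stencil and the threshold delta are illustrative choices, not the paper's discretization.

```python
import numpy as np

def huber_tv(m, delta):
    """Anisotropic Huber total variation of a 2-D model m.

    Quadratic in gradients smaller than delta (smooths out staircasing),
    linear in larger ones (preserves edges, like plain TV).
    """
    gx = np.diff(m, axis=0)                     # vertical finite differences
    gy = np.diff(m, axis=1)                     # horizontal finite differences
    g = np.concatenate([gx.ravel(), gy.ravel()])
    small = np.abs(g) <= delta
    return (np.sum(0.5 * g[small] ** 2 / delta)
            + np.sum(np.abs(g[~small]) - 0.5 * delta))
```

Because the Huber penalty is differentiable everywhere, a penalized formulation built on it can be minimized with plain gradient-based iterations, whereas exact TV requires the proximal-type algorithms the paper compares.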
G. Pagliara, G. Vignoli (2006)
We present an algorithm for focusing inversion of electrical resistivity tomography (ERT) data. ERT is a typical example of an ill-posed problem. Regularization is the most common way to tackle this kind of problem; it basically consists in using a priori information about targets to reduce the ambiguity and the instability of the solution. By using the minimum gradient support (MGS) stabilizing functional, we introduce the following geometrical prior information into the reconstruction process: anomalies have sharp boundaries. The presented work is embedded in a project (L.A.R.A.) which aims at the estimation of hydrogeological properties from geophysical investigations. L.A.R.A. facilities include a simulation tank (4 m x 8 m x 1.35 m); 160 electrodes are located all around the tank and used for 3-D ERT. Because of the large number of electrodes and their dimensions, it is important to model their effect in order to correctly evaluate the electrical system response. The forward modelling in the presented algorithm is based on the so-called complete electrode model, which takes into account the presence of the electrodes and their contact impedances. In this paper, we compare the results obtained with different regularizing functionals applied to a synthetic model.
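For context, the minimum gradient support stabilizer is usually written in the following form (the standard expression from the focusing-inversion literature; the paper may use a discretized variant):

$$ S_{\mathrm{MGS}}(m) \;=\; \int_V \frac{\nabla m \cdot \nabla m}{\nabla m \cdot \nabla m + \beta^2}\, dV, $$

where $\beta > 0$ is a small focusing parameter. As $\beta \to 0$, the integrand tends to 1 wherever $\nabla m \neq 0$ and to 0 elsewhere, so the functional approximately measures the volume of the support of the gradient; minimizing it therefore drives the reconstruction towards models with sharp boundaries, which is exactly the geometrical prior stated above.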
Incorporating prior knowledge on the model unknowns of interest is essential when dealing with ill-posed inverse problems, due to the nonuniqueness of the solution and data noise. Unfortunately, it is not trivial to fully describe our priors in a convenient and analytical way. Parameterizing the unknowns with a convolutional neural network (CNN), and assuming an uninformative Gaussian prior on its weights, leads to a variational prior on the output space that favors natural images and excludes noisy artifacts, as long as overfitting is prevented. This is the so-called deep-prior approach. In seismic imaging, however, evaluating the forward operator is computationally expensive, and training a randomly initialized CNN becomes infeasible. We propose, instead, a weak version of deep priors, which consists of relaxing the requirement that reflectivity models must lie in the network range, and letting the unknowns deviate from the network output according to a Gaussian distribution. Finally, we jointly solve for the reflectivity model and the CNN weights. The chief advantage of this approach is that the updates for the CNN weights do not involve the modeling operator, and so become relatively cheap. Our synthetic numerical experiments demonstrate that the weak deep prior is more robust with respect to noise than conventional least-squares imaging approaches, at roughly twice the computational cost of reverse-time migration, which is an affordable computational budget in large-scale imaging problems.
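Schematically, the joint problem described above takes the following form (the symbols $F$, $g_w$, $z$, and $\lambda$ are our notation for illustration, not necessarily the paper's):

$$ \min_{x,\,w}\; \tfrac{1}{2}\,\|F x - d\|_2^2 \;+\; \tfrac{\lambda}{2}\,\|x - g_w(z)\|_2^2, $$

where $F$ is the expensive linearized modeling operator, $x$ the reflectivity, $d$ the data, and $g_w$ a CNN with weights $w$ and fixed input $z$. The Gaussian deviation of $x$ from the network output yields the second, coupling term; the weights $w$ enter the objective only through that term, so their updates never evaluate $F$, which is the cost saving the abstract highlights.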
A. Saichev, D. Sornette (2017)
Using the standard ETAS model of triggered seismicity, we present a rigorous theoretical analysis of the main statistical properties of temporal clusters, defined as the group of events triggered by a given main shock of fixed magnitude m that occurred at the origin of time, at times larger than some present time t. Using the technology of the generating probability function (GPF), we derive explicit expressions for the GPF of the number of future offspring in a given temporal seismic cluster, defining, in particular, the statistics of the cluster duration and of the maximal magnitudes of the cluster offspring. We find the remarkable result that the magnitude difference between the largest and second largest event in the future temporal cluster is distributed according to the regular Gutenberg-Richter law that controls the unconditional distribution of earthquake magnitudes. For earthquakes obeying the Omori-Utsu law for the distribution of waiting times between triggering and triggered events, we show that the distribution of the durations of temporal clusters of events of magnitudes above some detection threshold u has a power law tail that is fatter in the non-critical regime $n<1$ than in the critical case $n=1$. This paradoxical behavior can be rationalised by the fact that generations of all orders cascade very fast in the critical regime and accelerate the temporal decay of the cluster dynamics.
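For reference, the Gutenberg-Richter law invoked here states that earthquake magnitudes above a minimum magnitude $m_0$ are exponentially distributed,

$$ P(M \ge m) \;=\; 10^{-b\,(m - m_0)}, \qquad m \ge m_0, $$

with $b \approx 1$ empirically. The quoted result says that the gap between the largest and second largest triggered magnitudes in a future cluster follows this same exponential distribution.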