
NuSPAN: A Proximal Average Network for Nonuniform Sparse Model -- Application to Seismic Reflectivity Inversion

Submitted by Swapnil Mache
Publication date: 2021
Research language: English





We solve the problem of sparse signal deconvolution in the context of seismic reflectivity inversion, which pertains to high-resolution recovery of the subsurface reflection coefficients. Our formulation employs a nonuniform, non-convex synthesis sparse model comprising a combination of convex and non-convex regularizers, which results in accurate approximations of the l0 pseudo-norm. The resulting iterative algorithm requires the proximal average strategy. When unfolded, the iterations give rise to a learnable proximal average network architecture that can be optimized in a data-driven fashion. We demonstrate the efficacy of the proposed approach through numerical experiments on synthetic 1-D seismic traces and 2-D wedge models in comparison with the benchmark techniques. We also present validations considering the simulated Marmousi2 model as well as real 3-D seismic volume data acquired from the Penobscot 3D survey off the coast of Nova Scotia, Canada.
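The nonuniform sparse model and the proximal average step lend themselves to a compact illustration. The sketch below is a minimal, hypothetical rendition of one unfolded iteration, assuming an l1 (soft-threshold) and a minimax concave penalty (firm-threshold) pair as the convex/non-convex regularizers; the actual NuSPAN layers learn the thresholds and mixture weights from data, and the measurement matrix H merely stands in for the seismic wavelet convolution.

```python
import numpy as np

def prox_l1(z, lam):
    """Soft-thresholding: proximal operator of the convex l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_mcp(z, lam, gamma=2.0):
    """Firm-thresholding: proximal operator of the non-convex minimax concave penalty."""
    return np.where(np.abs(z) <= gamma * lam,
                    (gamma / (gamma - 1.0)) * prox_l1(z, lam),
                    z)

def proximal_average_step(x, y, H, step, lam, weights=(0.5, 0.5)):
    """One unfolded iteration: a gradient step on 0.5*||Hx - y||^2 followed by a
    proximal average of the convex and non-convex proximal operators.
    In a learned network the step size, thresholds, and weights are trainable."""
    z = x - step * (H.T @ (H @ x - y))
    return weights[0] * prox_l1(z, step * lam) + weights[1] * prox_mcp(z, step * lam)

# Toy usage: recover a sparse reflectivity-like vector from noisy measurements.
rng = np.random.default_rng(0)
n = 100
H = rng.standard_normal((n, n)) / np.sqrt(n)      # stand-in for wavelet convolution
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = H @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(H, 2) ** 2            # conservative step size
for _ in range(200):
    x = proximal_average_step(x, y, H, step, lam=0.05)
```

Averaging the individual proximal operators, rather than computing the proximal operator of the combined penalty, is what keeps each unfolded layer cheap and differentiable, which is the property the network architecture exploits.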


Read also

Seismic wave propagation forms the basis for most aspects of seismological research, yet solving the wave equation is a major computational burden that inhibits the progress of research. This is exacerbated by the fact that new simulations must be performed whenever the velocity structure or source location is perturbed. Here, we explore a prototype framework for learning general solutions using a recently developed machine learning paradigm called the Neural Operator. A trained Neural Operator can compute a solution in negligible time for any velocity structure or source location. We develop a scheme to train Neural Operators on an ensemble of simulations performed with random velocity models and source locations. As Neural Operators are grid-free, it is possible to evaluate solutions on higher-resolution velocity models than those used in training, providing additional computational efficiency. We illustrate the method with the 2D acoustic wave equation and demonstrate the method's applicability to seismic tomography, using reverse-mode automatic differentiation to compute gradients of the wavefield with respect to the velocity structure. The developed procedure is nearly an order of magnitude faster than using conventional numerical methods for full waveform inversion.
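As a rough illustration of the automatic-differentiation idea in the summary above, the sketch below differentiates a toy 1-D acoustic finite-difference propagator with respect to the velocity model using PyTorch's reverse-mode autodiff. It is only a stand-in: the paper differentiates through a trained Neural Operator (and in 2-D), and every name, grid size, and parameter here is illustrative.

```python
import torch

def propagate(vel, src, nt=300, dt=1e-3, dx=10.0):
    """Toy 1-D acoustic finite-difference propagator (2nd order in time and space),
    written with torch tensors so the whole simulation is differentiable."""
    n = vel.shape[0]
    c2 = (vel * dt / dx) ** 2
    src_onehot = torch.zeros(n)
    src_onehot[n // 2] = 1.0                      # inject the source at the middle
    u_prev, u_curr = torch.zeros(n), torch.zeros(n)
    trace = []
    for it in range(nt):
        interior = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        lap = torch.cat([torch.zeros(1), interior, torch.zeros(1)])
        u_next = 2.0 * u_curr - u_prev + c2 * lap + src[it] * src_onehot
        trace.append(u_next[-2])                  # record near the far end
        u_prev, u_curr = u_curr, u_next
    return torch.stack(trace)

# Velocity model for which we want gradients of a data misfit.
vel = torch.full((101,), 2000.0, requires_grad=True)
t = torch.arange(300) * 1e-3
src = torch.exp(-((t - 0.05) / 0.01) ** 2)        # simple Gaussian source wavelet

d_obs = torch.zeros(300)                          # placeholder "observed" trace
misfit = 0.5 * torch.sum((propagate(vel, src) - d_obs) ** 2)
misfit.backward()                                 # reverse-mode automatic differentiation
grad_vel = vel.grad                               # d(misfit) / d(velocity)
```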
Most of the seismic inversion techniques currently proposed focus on robustness with respect to the background model choice or to inaccurate physical modeling assumptions, but are not suited to large-scale 3D applications. On the other hand, methods that are computationally feasible for industrial problems, such as full waveform inversion, are notoriously bogged down by local minima and require adequate starting models. We propose a novel solution that is both scalable and less sensitive to the starting model or to inaccurate physics when compared to full waveform inversion. The method is based on a dual (Lagrangian) reformulation of the classical wavefield reconstruction inversion, whose robustness with respect to local minima is well documented in the literature. However, the classical formulation is not suited to 3D, as it leverages expensive frequency-domain solvers for the wave equation. The proposed reformulation allows the deployment of state-of-the-art time-domain finite-difference methods and is computationally mature for industrial-scale problems.
Achieving desirable receiver sampling in ocean-bottom acquisition is often not possible because of cost considerations. Assuming adequate source sampling is available, which is achievable by virtue of reciprocity and the use of modern randomized (simultaneous-source) marine acquisition technology, we are in a position to train convolutional neural networks (CNNs) to bring the receiver sampling to the same spatial grid as the dense source sampling. To accomplish this task, we form training pairs consisting of densely sampled data and artificially subsampled data using a reciprocity argument and the assumption that the source-side sampling is dense. While this approach has successfully been used to recover monochromatic frequency slices, its application in practice calls for wavefield reconstruction of time-domain data. Despite having the option to parallelize, the overall costs of this approach can become prohibitive if we decide to carry out the training and recovery independently for each frequency. Because different frequency slices share information, we propose to use transfer training to make our approach computationally more efficient by warm-starting the training with CNN weights obtained from a neighboring frequency slice. If the two neighboring frequency slices share information, we expect the training to improve and converge faster. Our aim is to prove this principle by carrying out a series of carefully selected experiments on a relatively large-scale five-dimensional synthetic data volume associated with wide-azimuth 3D ocean-bottom node acquisition. From these experiments, we observe that transfer training yields a significant speedup in training, especially at relatively higher frequencies where consecutive frequency slices are more correlated.
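The warm-starting idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration: the tiny CNN, the random stand-in training pairs, the two-channel (real/imaginary) slice layout, and the epoch counts are all placeholders, not the paper's actual network, data, or hyperparameters.

```python
import copy
import torch
import torch.nn as nn

def make_cnn():
    """Small stand-in interpolation CNN; two channels hold a slice's real/imaginary parts."""
    return nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 2, 3, padding=1),
    )

def train(model, pairs, epochs):
    """Fit the CNN to map subsampled receiver grids to densely sampled ones."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for subsampled, dense in pairs:
            opt.zero_grad()
            loss_fn(model(subsampled), dense).backward()
            opt.step()
    return model

# Hypothetical stand-in data: a few frequency slices, each with random training pairs.
pairs_per_frequency = {
    f: [(torch.randn(4, 2, 64, 64), torch.randn(4, 2, 64, 64)) for _ in range(8)]
    for f in (3.0, 3.5, 4.0)
}

models, prev_state = {}, None
for freq in sorted(pairs_per_frequency):
    model = make_cnn()
    if prev_state is not None:
        model.load_state_dict(prev_state)          # warm start from the neighboring slice
    epochs = 5 if prev_state is not None else 50   # warm-started slices need fewer epochs
    models[freq] = train(model, pairs_per_frequency[freq], epochs)
    prev_state = copy.deepcopy(model.state_dict())
```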
We present three imaging modalities that lie at the crossroads of seismic and medical imaging. Through the lens of extended-source imaging, we can draw deep connections between the fields of wave-equation-based seismic and medical imaging, despite first appearances. From the seismic perspective, we underline the importance of working with the correct physics and spatially varying velocity fields. Medical imaging, on the other hand, opens the possibility for new imaging modalities in which outside stimuli, such as laser or radar pulses, can not only be used to identify endogenous optical or thermal contrasts, but can also be used to insonify the medium so that images of the whole specimen can, in principle, be created.
Full waveform inversion (FWI) delivers high-resolution images of the subsurface by iteratively minimizing the misfit between the recorded and calculated seismic data. It has been attacked successfully with the Gauss-Newton method and sparsity-promoting regularization based on fixed multiscale transforms that permit significant subsampling of the seismic data when the model perturbation at each FWI data-fitting iteration can be represented with sparse coefficients. Rather than using analytical transforms with predefined dictionaries to achieve sparse representation, we introduce an adaptive transform called the Sparse Orthonormal Transform (SOT), whose dictionary is learned from many small training patches taken from the model perturbations in previous iterations. The patch-based dictionary is constrained to be orthonormal and trained with an online approach to provide the best sparse representation of the complex features and variations of the entire model perturbation. The complexity of the training method is proportional to the cube of the number of samples in one small patch. By incorporating both compressive subsampling and the adaptive SOT-based representation into the Gauss-Newton least-squares problem for each FWI iteration, the model perturbation can be recovered after an l1-norm sparsity constraint is applied to the SOT coefficients. Numerical experiments on synthetic models demonstrate that the SOT-based sparsity-promoting regularization can provide robust FWI results with reduced computation.
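Because the SOT dictionary is constrained to be orthonormal, applying the l1-norm sparsity constraint to the transform coefficients reduces to a closed-form soft-thresholding step. The sketch below illustrates that step on vectorized patches; the random orthonormal matrix is only a stand-in for the dictionary that the paper learns online from previous model perturbations, and the patch and threshold values are illustrative.

```python
import numpy as np

def sot_sparse_approx(patches, D, lam):
    """Sparse approximation of vectorized patches in an orthonormal dictionary D
    (columns are atoms). Orthonormality gives the l1-penalized fit in closed form:
    soft-threshold the transform coefficients, then synthesize."""
    coeffs = D.T @ patches                                             # analysis
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)   # soft threshold
    return D @ coeffs                                                  # synthesis

# Toy usage with a random orthonormal matrix standing in for the learned SOT.
rng = np.random.default_rng(1)
patch_size = 8 * 8
D, _ = np.linalg.qr(rng.standard_normal((patch_size, patch_size)))
patches = rng.standard_normal((patch_size, 500))   # 500 vectorized 8x8 model-perturbation patches
patches_sparse = sot_sparse_approx(patches, D, lam=0.3)
```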
