
Combining checkpointing and data compression for large scale seismic inversion

Published by: Navjot Kukreja
Publication date: 2018
Research field: Informatics Engineering
Language: English





Seismic inversion and imaging are adjoint-based optimization problems that process up to terabytes of data, regularly exceeding the memory capacity of available computers. Data compression is an effective strategy to reduce this memory requirement by a certain factor, particularly if some loss in accuracy is acceptable. A popular alternative is checkpointing, where data is stored at selected points in time, and values at other times are recomputed as needed from the last stored state. This allows arbitrarily large adjoint computations with limited memory, at the cost of additional recomputations. In this paper we combine compression and checkpointing for the first time to compute a realistic seismic inversion. The combination of checkpointing and compression allows larger adjoint computations compared to using only compression, and reduces the recomputation overhead significantly compared to using only checkpointing.
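As a rough illustration of how checkpointing and compression interact, the sketch below stores a compressed snapshot every few time steps during the forward pass and, during the adjoint pass, recomputes each intermediate state from the nearest checkpoint. The forward and adjoint operators, the checkpoint interval, and the use of zlib (a lossless stand-in for a lossy floating-point compressor such as ZFP) are assumptions for illustration only, not the implementation used in the paper.

```python
# Minimal sketch: checkpointing combined with compression of the stored states.
import zlib
import numpy as np

def compress(state):
    return zlib.compress(state.tobytes())

def decompress(blob, shape, dtype):
    return np.frombuffer(zlib.decompress(blob), dtype=dtype).reshape(shape)

def step(u):
    # Placeholder forward time step (e.g. one explicit wave-equation update).
    return 0.99 * u + 0.01 * np.roll(u, 1)

def adjoint_step(u, lam):
    # Placeholder adjoint update; a real code would apply the transpose
    # of the linearised forward operator at the forward state u.
    return 0.99 * lam + u

def forward_with_checkpoints(u0, nt, interval):
    checkpoints = {}
    u = u0.copy()
    for t in range(nt):
        if t % interval == 0:
            checkpoints[t] = compress(u)     # store only every interval-th state, compressed
        u = step(u)
    return u, checkpoints

def adjoint_with_recompute(u0, nt, interval, checkpoints):
    lam = np.zeros_like(u0)
    for t in reversed(range(nt)):
        # Recompute the forward state at time t from the nearest stored checkpoint.
        t0 = (t // interval) * interval
        u = decompress(checkpoints[t0], u0.shape, u0.dtype)
        for _ in range(t - t0):
            u = step(u)
        lam = adjoint_step(u, lam)
    return lam

u0 = np.random.rand(1024)
uT, cps = forward_with_checkpoints(u0, nt=200, interval=20)
grad = adjoint_with_recompute(u0, nt=200, interval=20, checkpoints=cps)
```

Storing fewer, compressed checkpoints is what allows a longer adjoint computation to fit in a fixed memory budget, while the inner recomputation loop is the overhead that compression helps to shrink.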




Read also

Most of the seismic inversion techniques currently proposed focus on robustness with respect to the background model choice or inaccurate physical modeling assumptions, but are not applicable to large-scale 3D problems. On the other hand, methods that are computationally feasible for industrial problems, such as full waveform inversion, are notoriously bogged down by local minima and require adequate starting models. We propose a novel solution that is both scalable and less sensitive to the starting model or inaccurate physics when compared to full waveform inversion. The method is based on a dual (Lagrangian) reformulation of the classical wavefield reconstruction inversion, whose robustness with respect to local minima is well documented in the literature. However, the classical formulation is not suited to 3D, as it leverages expensive frequency-domain solvers for the wave equation. The proposed reformulation allows the deployment of state-of-the-art time-domain finite-difference methods, and is computationally mature for industrial-scale problems.
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data, assuming double precision. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 5000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
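As a quick sanity check on the quoted figures, the snippet below recomputes the raw data volume and a Tucker compression ratio for hypothetical core ranks; the ranks are assumptions for illustration, not values from the paper.

```python
# Back-of-the-envelope check of the quoted data volume, plus a Tucker
# compression ratio for assumed multilinear ranks.
grid, nvars, nsteps, bytes_per_double = 512, 64, 128, 8

full = grid**3 * nvars * nsteps * bytes_per_double
print(full / 2**40)              # ~8 TiB, the "8 TB" quoted above

# Tucker storage: dense core tensor plus one factor matrix per mode.
dims  = (grid, grid, grid, nvars, nsteps)
ranks = (64, 64, 64, 32, 32)     # hypothetical core ranks
core = 1
for r in ranks:
    core *= r
factors = sum(d * r for d, r in zip(dims, ranks))
compressed = (core + factors) * bytes_per_double
print(full / compressed)         # compression ratio in the thousands for these ranks
```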
Shucai Li, Bin Liu, Yuxiao Ren (2019)
We propose a new method to tackle the mapping challenge from time-series data to spatial image in the field of seismic exploration, i.e., reconstructing the velocity model directly from seismic data by deep neural networks (DNNs). The conventional way of addressing this ill-posed inversion problem is through iterative algorithms, which suffer from poor nonlinear mapping and strong nonuniqueness. Other attempts may either introduce human-intervention errors or underuse the seismic data. The challenge for DNNs mainly lies in the weak spatial correspondence, the uncertain reflection-reception relationship between seismic data and velocity model, as well as the time-varying property of seismic data. To tackle these challenges, we propose end-to-end seismic inversion networks (SeisInvNets) with novel components to make the best use of all seismic data. Specifically, we start with every seismic trace and enhance it with its neighborhood information, its observation setup, and the global context of its corresponding seismic profile. From the enhanced seismic traces, the spatially aligned feature maps can be learned and further concatenated to reconstruct a velocity model. In general, we let every seismic trace contribute to the reconstruction of the whole velocity model by finding spatial correspondence. The proposed SeisInvNet consistently produces improvements over the baselines and achieves promising performance on our synthesized and proposed SeisInv data set according to various evaluation metrics. The inversion results are more consistent with the target from the aspects of velocity values, subsurface structures, and geological interfaces. Moreover, the mechanism and the generalization of the proposed method are discussed and verified. Nevertheless, the generalization of deep-learning-based inversion methods on real data is still challenging, and incorporating physics may be one potential solution.
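The per-trace encode-then-decode pipeline described above can be pictured with a small sketch. This is not the SeisInvNet architecture itself; the layer sizes, the 16x16 feature maps, and the toy trace and context dimensions are all illustrative assumptions. It only shows the general idea: each trace, together with some context, is encoded into a spatially aligned feature map, and the concatenated maps are decoded into a velocity model.

```python
# Hypothetical per-trace encoder + shared decoder, loosely following the
# described pipeline (not the published architecture).
import torch
import torch.nn as nn

class TraceEncoder(nn.Module):
    """Encode one seismic trace plus its context into a 2-D feature map."""
    def __init__(self, trace_len, context_dim, feat_hw=(16, 16)):
        super().__init__()
        self.feat_hw = feat_hw
        self.net = nn.Sequential(
            nn.Linear(trace_len + context_dim, 512),
            nn.ReLU(),
            nn.Linear(512, feat_hw[0] * feat_hw[1]),
        )

    def forward(self, trace, context):
        x = torch.cat([trace, context], dim=-1)
        return self.net(x).view(-1, 1, *self.feat_hw)   # (batch, 1, H, W)

class VelocityDecoder(nn.Module):
    """Concatenate per-trace feature maps along channels, decode to a model."""
    def __init__(self, n_traces, out_hw=(64, 64)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_traces, 64, 3, padding=1),
            nn.ReLU(),
            nn.Upsample(size=out_hw, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, feature_maps):
        return self.net(torch.cat(feature_maps, dim=1))  # (batch, 1, H, W)

# Toy shapes: 32 traces of 600 samples, 8-dim observation-setup context each.
n_traces, trace_len, ctx = 32, 600, 8
enc, dec = TraceEncoder(trace_len, ctx), VelocityDecoder(n_traces)
traces = torch.randn(4, n_traces, trace_len)
contexts = torch.randn(4, n_traces, ctx)
maps = [enc(traces[:, i], contexts[:, i]) for i in range(n_traces)]
velocity = dec(maps)    # (4, 1, 64, 64) predicted velocity model
```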
Anh Tran, Tim Wildey (2020)
Determining process-structure-property linkages is one of the key objectives in material science, and uncertainty quantification plays a critical role in understanding both process-structure and structure-property linkages. In this work, we seek to learn a distribution of microstructure parameters that are consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model (CPFEM) matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification (UQ) framework based on push-forward probability measures, which combines techniques from measure theory and Bayes' rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden in solving both stochastic forward and stochastic inverse problems, we combine this approach with a machine learning (ML) Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure-property linkages.
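A minimal sketch of the push-forward, data-consistent update on a scalar toy problem is given below; the toy forward map (standing in for the CPFEM or its Gaussian-process surrogate), the chosen initial and observed distributions, and the rejection sampler are all illustrative assumptions.

```python
# Data-consistent stochastic inversion on a toy scalar problem.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

def Q(lam):
    # Toy forward map standing in for the (surrogate of the) CPFEM model.
    return lam**2 + 0.1 * lam

# Initial guess on the microstructure parameter and the target (observed)
# distribution on the material property.
initial = rng.normal(0.0, 1.0, size=20000)
observed = norm(loc=1.0, scale=0.2)

# Stochastic forward problem: push the initial samples through the model
# and estimate the push-forward density.
q_init = Q(initial)
pushforward = gaussian_kde(q_init)

# Update via rejection sampling: accept lambda with probability
# proportional to observed(Q(lambda)) / pushforward(Q(lambda)).
ratio = observed.pdf(q_init) / pushforward(q_init)
accept = rng.uniform(0.0, ratio.max(), size=ratio.size) < ratio
consistent = initial[accept]

# The push-forward of the accepted samples should now match the target.
print(np.mean(Q(consistent)), np.std(Q(consistent)))   # ~1.0, ~0.2
```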
Seismic full-waveform inversion (FWI), which uses iterative methods to estimate high-resolution subsurface models from seismograms, is a powerful imaging technique in exploration geophysics. In recent years, the computational cost of FWI has grown exponentially due to the increasing size and resolution of seismic data. Moreover, it is a non-convex problem and can encounter local minima due to the limited accuracy of the initial velocity models or the absence of low frequencies in the measurements. To overcome these computational issues, we develop a multiscale data-driven FWI method based on fully convolutional networks (FCN). In preparing the training data, we first develop a real-time style transform method to create a large set of synthetic subsurface velocity models from natural images. We then develop two convolutional neural networks with encoder-decoder structure to reconstruct the low- and high-frequency components of the subsurface velocity models, separately. To validate the performance of our data-driven inversion method and the effectiveness of the synthesized training set, we compare it with conventional physics-based waveform inversion approaches using both synthetic and field data. These numerical results demonstrate that, once our model is fully trained, it can significantly reduce the computation time, and yield more accurate subsurface velocity models in comparison with conventional FWI.
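The low/high-frequency split that the two networks target can be pictured with a tiny sketch; the Gaussian smoothing filter and its width below are hypothetical stand-ins for the paper's actual frequency decomposition.

```python
# Hypothetical low-pass / residual split of a velocity model.
import numpy as np
from scipy.ndimage import gaussian_filter

velocity = 1500.0 + 1000.0 * np.random.rand(128, 128)  # toy subsurface model (m/s)
low_freq = gaussian_filter(velocity, sigma=8.0)        # target of the first network
high_freq = velocity - low_freq                        # target of the second network
print(low_freq.mean(), np.abs(high_freq).max())
```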