Towards fast machine-learning-assisted Bayesian posterior inference of realistic microseismic events

 Added by Davide Piras
Publication date: 2021
Language: English

Bayesian inference applied to microseismic activity monitoring allows for principled estimation of the coordinates of microseismic events from recorded seismograms, and their associated uncertainties. However, forward modelling of these microseismic events, necessary to perform Bayesian source inversion, can be prohibitively expensive in terms of computational resources. A viable solution is to train a surrogate model based on machine learning techniques, to emulate the forward model and thus accelerate Bayesian inference. In this paper, we improve on previous work, which considered only sources with isotropic moment tensor. We train a machine learning algorithm on the power spectrum of the recorded pressure wave and show that the trained emulator allows for the complete and fast retrieval of the event coordinates for $\textit{any}$ source mechanism. Moreover, we show that our approach is computationally inexpensive, as it can be run in less than 1 hour on a commercial laptop, while yielding accurate results using less than $10^4$ training seismograms. We additionally demonstrate how the trained emulators can be used to identify the source mechanism through the estimation of the Bayesian evidence. This work lays the foundations for the efficient localisation and characterisation of any recorded seismogram, thus helping to quantify human impact on seismic activity and mitigate seismic hazard.
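As a rough illustration of the emulation-plus-inference workflow described above (not the authors' code or data), the sketch below trains a small neural-network emulator on a toy forward model mapping source coordinates to a power spectrum, then uses the emulator inside a Gaussian likelihood sampled with emcee; the toy simulator, network architecture, prior box, and sampler settings are all illustrative assumptions.

```python
# Minimal sketch of surrogate-accelerated Bayesian source inversion.
# NOT the authors' pipeline: the forward model, network, and priors below
# are toy assumptions chosen only to make the example self-contained.
import numpy as np
import emcee
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_freq = 64
freqs = np.linspace(1.0, 50.0, n_freq)

def toy_forward_model(coords):
    """Stand-in for the expensive simulator: spectrum as a function of (x, y, z)."""
    x, y, z = coords
    dist = np.sqrt(x**2 + y**2 + z**2) + 1.0
    return np.exp(-freqs / (10.0 * dist)) / dist

# 1) Generate a training set of (coordinates, power spectrum) pairs.
n_train = 5000
coords_train = rng.uniform(-2.0, 2.0, size=(n_train, 3))
spectra_train = np.array([toy_forward_model(c) for c in coords_train])

# 2) Train the emulator (a small multi-output MLP here).
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
emulator.fit(coords_train, spectra_train)

# 3) Plug the emulator into a Gaussian likelihood and sample the posterior.
true_coords = np.array([0.5, -0.3, 1.2])
noise_sigma = 0.01
observed = toy_forward_model(true_coords) + rng.normal(0.0, noise_sigma, n_freq)

def log_posterior(theta):
    if np.any(np.abs(theta) > 2.0):   # flat prior on the training box
        return -np.inf
    model = emulator.predict(theta[None, :])[0]
    return -0.5 * np.sum((observed - model) ** 2) / noise_sigma**2

n_walkers, n_dim = 16, 3
p0 = true_coords + 1e-2 * rng.normal(size=(n_walkers, n_dim))
sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior mean coordinates:", samples.mean(axis=0))
```

For the model-comparison step mentioned in the abstract, a nested sampler (e.g. dynesty or MultiNest) would replace the MCMC step above, since it returns an estimate of the Bayesian evidence alongside the posterior samples.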

Related research

Passive microseismic data are commonly buried in noise, which presents a significant challenge for signal detection and recovery. For recordings from a surface sensor array where each trace contains a time-delayed arrival from the event, we propose an autocorrelation-based stacking method that designs a denoising filter from all the traces, as well as a multi-channel detection scheme. This approach circumvents the issue of time-aligning the traces prior to stacking because every trace's autocorrelation is centered at zero in the lag domain. The effect of white noise is concentrated near zero lag, so the filter design requires a predictable adjustment of the zero-lag value. Truncation of the autocorrelation is employed to smooth the impulse response of the denoising filter. In order to extend the applicability of the algorithm, we also propose a noise prewhitening scheme that addresses cases with colored noise. The simplicity and robustness of this method are validated with synthetic and real seismic traces.
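As a rough sketch of the idea (an interpretation under simplifying assumptions, not the authors' implementation), the snippet below stacks per-trace autocorrelations, damps the zero-lag sample that carries the white-noise energy, truncates the result, and applies it as a convolutional denoising filter; the damping factor, filter length, and toy data are assumptions.

```python
# Minimal sketch of an autocorrelation-stacking denoising filter.
# Not the published method: zero-lag handling and truncation are illustrative.
import numpy as np

def autocorr_stack_filter(traces, zero_lag_damping=0.5, half_length=50):
    """Design a denoising filter from the stacked autocorrelations of all traces."""
    n_samples = traces.shape[1]
    stacked = np.zeros(2 * n_samples - 1)
    for tr in traces:
        # Every autocorrelation is centred at zero lag, so no alignment is needed.
        stacked += np.correlate(tr, tr, mode="full")
    stacked /= len(traces)
    centre = n_samples - 1
    stacked[centre] *= zero_lag_damping          # damp the white-noise zero-lag term
    filt = stacked[centre - half_length: centre + half_length + 1]  # truncate
    return filt / np.max(np.abs(filt))

def denoise(traces, filt):
    """Convolve each trace with the designed filter."""
    return np.array([np.convolve(tr, filt, mode="same") for tr in traces])

# Toy usage: time-delayed wavelet arrivals buried in white noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
wavelet = lambda t0: np.exp(-((t - t0) / 0.01) ** 2) * np.sin(2 * np.pi * 30 * (t - t0))
traces = np.array([wavelet(0.4 + 0.02 * i) + 0.5 * rng.normal(size=t.size)
                   for i in range(12)])
filt = autocorr_stack_filter(traces)
clean = denoise(traces, filt)
print(clean.shape)
```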
Continuous microseismic monitoring of hydraulic fracturing is commonly used in many engineering, environmental, mining, and petroleum applications. Microseismic signals recorded at the surface suffer from excessive noise that complicates first-break picking and subsequent data processing and analysis. This study presents a new first-break picking algorithm that employs concepts from seismic interferometry and time-frequency (TF) analysis. The algorithm first uses a TF plot to manually pick a reference first-break and then iterates the steps of cross-correlation, alignment, and stacking to enhance the signal-to-noise ratio of the relative first breaks. The reference first-break is subsequently used to calculate final first breaks from the relative ones. Testing on synthetic and real data sets at high levels of additive noise shows that the algorithm enhances the first-break picking considerably. Furthermore, results show that only two iterations are needed to converge to the true first breaks. Indeed, iterating more can have detrimental effects on the algorithm due to increasing correlation of random noise.
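A minimal sketch of the cross-correlate / align / stack loop (an interpretation of the description above, not the published algorithm; the manual reference pick and the TF analysis are omitted, and all names are illustrative):

```python
# Minimal sketch of iterative refinement of relative first-break lags.
import numpy as np

def refine_first_breaks(traces, n_iter=2):
    """Estimate relative first-break lags (in samples) by iterative stacking."""
    lags = np.zeros(len(traces), dtype=int)
    for _ in range(n_iter):
        # Align traces with the current lag estimates and stack them into a pilot.
        aligned = np.array([np.roll(tr, -lag) for tr, lag in zip(traces, lags)])
        pilot = aligned.mean(axis=0)
        # Re-estimate each trace's lag by cross-correlation with the pilot.
        for i, tr in enumerate(traces):
            xc = np.correlate(tr, pilot, mode="full")
            lags[i] = int(np.argmax(xc)) - (len(pilot) - 1)
    return lags

# Toy usage: noisy copies of a wavelet with different onsets.
rng = np.random.default_rng(2)
n = 400
base = np.zeros(n)
base[100:120] = np.hanning(20)
true_shifts = rng.integers(-15, 15, size=8)
traces = np.array([np.roll(base, s) + 0.3 * rng.normal(size=n) for s in true_shifts])
relative = refine_first_breaks(traces)
# The estimates are relative picks; they may differ from the true shifts by a common offset.
print("estimated relative shifts:", relative)
print("true shifts:              ", true_shifts)
```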
T. Djärv, A. Ekström, C. Forssén (2021)
We make ab initio predictions for the A = 6 nuclear level scheme based on two- and three-nucleon interactions up to next-to-next-to-leading order in chiral effective field theory ($\chi$EFT). We utilize eigenvector continuation and Bayesian methods to quantify uncertainties stemming from the many-body method, the $\chi$EFT truncation, and the low-energy constants of the nuclear interaction. The construction and validation of emulators is made possible via the development of JupiterNCSM -- a new M-scheme no-core shell model code that uses on-the-fly Hamiltonian matrix construction for efficient, single-node computations up to $N_\mathrm{max} = 10$ for ${}^{6}\mathrm{Li}$. We find a slight underbinding of ${}^{6}\mathrm{He}$ and ${}^{6}\mathrm{Li}$, although consistent with experimental data given our theoretical error bars. As a result of incorporating correlated $\chi$EFT-truncation errors, we find more precise predictions (smaller error bars) for the separation energies $S_d({}^{6}\mathrm{Li}) = 0.89 \pm 0.44$ MeV and $S_{2n}({}^{6}\mathrm{He}) = 0.20 \pm 0.60$ MeV, and for the beta-decay Q-value $Q_{\beta^-}({}^{6}\mathrm{He}) = 3.71 \pm 0.65$ MeV. We conclude that our error bars can potentially be reduced further by extending the model space used by JupiterNCSM.
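For readers unfamiliar with eigenvector continuation, the toy sketch below (not JupiterNCSM; a random two-term Hamiltonian stands in for the nuclear interaction) shows the core idea: exact ground states computed at a few training values of a parameter span a small subspace in which the eigenvalue problem can be solved cheaply at new parameter values.

```python
# Minimal sketch of eigenvector continuation on a toy Hamiltonian H(c) = H0 + c*H1.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
dim = 200
H0 = rng.normal(size=(dim, dim))
H0 = (H0 + H0.T) / 2.0
H1 = rng.normal(size=(dim, dim))
H1 = (H1 + H1.T) / 2.0

def ground_state(c):
    """Exact ground-state energy and eigenvector of H0 + c*H1."""
    vals, vecs = eigh(H0 + c * H1)
    return vals[0], vecs[:, 0]

# Training: exact ground states at a handful of parameter values.
train_c = [-1.0, 0.0, 1.0]
basis = np.array([ground_state(c)[1] for c in train_c]).T   # shape (dim, n_train)

def ec_ground_energy(c):
    """Emulated ground-state energy at parameter c via subspace projection."""
    H = H0 + c * H1
    H_small = basis.T @ H @ basis
    N_small = basis.T @ basis          # overlap matrix (basis is not orthogonal)
    vals = eigh(H_small, N_small, eigvals_only=True)
    return vals[0]

c_test = 0.5
print("exact   :", ground_state(c_test)[0])
print("emulated:", ec_ground_energy(c_test))
```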
Basal motion is the primary mechanism for ice flux outside Antarctica, yet a widely applicable model for predicting it in the absence of retrospective observations remains elusive. This is due to the difficulty in both observing small-scale bed properties and predicting a time-varying water pressure on which basal motion putatively depends. We take a Bayesian approach to these problems by coupling models of ice dynamics and subglacial hydrology and conditioning on observations of surface velocity in southwestern Greenland to infer the posterior probability distributions for eight spatially and temporally constant parameters governing the behavior of both the sliding law and hydrologic model. Because the model is computationally expensive, classical MCMC sampling is intractable. We skirt this issue by training a neural network as a surrogate that approximates the model at a fraction of the computational cost. We find that surface velocity observations establish strong constraints on model parameters relative to a prior distribution and also elucidate correlations, while the model explains 60% of observed variance. However, we also find that several distinct configurations of the hydrologic system and stress regime are consistent with observations, underscoring the need for continued data collection and model development.
The coloured dissolved organic matter (CDOM) concentration is the standard measure of humic substance in natural waters. CDOM retrieval from remote sensing data is based on the absorption coefficient (a) at a given wavelength (e.g. 440 nm). This paper presents a comparison of four machine learning methods for the retrieval of CDOM from remote sensing signals: regularized linear regression (RLR), random forest (RF), kernel ridge regression (KRR) and Gaussian process regression (GPR). Results are compared with the established polynomial regression algorithms. RLR is revealed as the simplest and most efficient method, followed closely by its nonlinear counterpart KRR.
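A minimal sketch of such a four-way comparison (with synthetic stand-in data and untuned, illustrative hyperparameters rather than the paper's dataset or settings):

```python
# Minimal sketch comparing the four regressors named above by cross-validated RMSE.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(300, 6))       # stand-in for remote-sensing band reflectances
y = np.exp(-3 * X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.normal(size=300)  # stand-in for a(440 nm)

models = {
    "RLR (ridge)": Ridge(alpha=1.0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "KRR": KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0),
    "GPR": GaussianProcessRegressor(normalize_y=True),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name:12s} RMSE = {-scores.mean():.4f}")
```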
