A machine learning framework for computationally expensive transient models

Added by Prashant Kumar
Publication date: 2019
Language: English




The promise of machine learning has been explored in a variety of scientific disciplines in the last few years; however, its application to computationally expensive, first-principles based tools is still at a nascent stage. Even with advances in computational resources and power, transient simulations of large-scale dynamic systems using first-principles based computational tools remain limited. In this work, we propose an ensemble approach that combines one such computationally expensive tool, the discrete element method (DEM), with a time-series forecasting method, the auto-regressive integrated moving average (ARIMA) model, and machine-learning methods to significantly reduce the computational burden while retaining model accuracy and performance. The developed machine-learning model shows good predictability and agreement with the literature, demonstrating its tremendous potential in scientific computing.
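The paper does not include code, but the core idea — running the expensive simulator only over an initial window and letting a cheap time-series model extrapolate the rest of the transient — can be sketched as follows. The stand-in signal, the ARIMA order, and the train/forecast split are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: fit an ARIMA model to the early portion of an expensive
# transient simulation and forecast the remainder, so the simulator only
# needs to run for a fraction of the full time horizon.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Stand-in for expensive DEM output: a decaying transient plus noise.
# In practice this series would come from the first-principles simulator.
t = np.linspace(0.0, 10.0, 500)
dem_signal = 1.0 - np.exp(-0.5 * t) + 0.01 * rng.standard_normal(t.size)

n_train = 200                                   # run DEM only this far
train, truth = dem_signal[:n_train], dem_signal[n_train:]

# The order (p, d, q) would normally be chosen by AIC/BIC or cross-validation.
fitted = ARIMA(train, order=(2, 1, 2)).fit()

forecast = fitted.forecast(steps=truth.size)
rmse = np.sqrt(np.mean((forecast - truth) ** 2))
print(f"forecast RMSE over held-out transient: {rmse:.4f}")
```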




Related research

An emulator is a fast-to-evaluate statistical approximation of a detailed mathematical model (simulator). When used in lieu of simulators, emulators can expedite tasks that require many repeated evaluations, such as sensitivity analyses, policy optimization, model calibration, and value-of-information analyses. Emulators are developed using the output of simulators at specific input values (design points). Developing an emulator that closely approximates the simulator can require many design points, which becomes computationally expensive. We describe a self-terminating active learning algorithm to efficiently develop emulators tailored to a specific emulation task, and compare it with algorithms that optimize geometric criteria (random Latin hypercube sampling and maximum projection designs) and other active learning algorithms (treed Gaussian processes that optimize typical active learning criteria). We compared the algorithms' root mean square error (RMSE) and maximum absolute deviation from the simulator (MAX) for seven benchmark functions and in a prostate cancer screening model. In the empirical analyses, in simulators with greatly varying smoothness over the input domain, the active learning algorithms resulted in emulators with smaller RMSE and MAX for the same number of design points. In all other cases, all algorithms performed comparably. The proposed algorithm attained satisfactory performance in all analyses, had smaller variability than the treed Gaussian processes (it is deterministic), and, on average, had similar or better performance than the treed Gaussian processes in 6 out of 7 benchmark functions and in the prostate cancer model.
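As a rough illustration of a variance-driven active-learning loop of the kind described above, a minimal sketch using a Gaussian process emulator might look like this. The toy simulator, kernel, tolerance, and stopping rule are generic stand-ins, not the paper's self-termination criterion.

```python
# Minimal sketch of variance-driven active learning for a GP emulator:
# repeatedly add the candidate point where predictive uncertainty is
# largest, and stop once that uncertainty falls below a tolerance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                       # stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
candidates = np.linspace(0, 3, 200).reshape(-1, 1)

X = candidates[rng.choice(len(candidates), 3, replace=False)]  # initial design
y = simulator(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
for it in range(50):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    if std.max() < 1e-3:                # self-terminate on low uncertainty
        print(f"converged after {len(X)} design points")
        break
    x_new = candidates[[std.argmax()]]  # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new).ravel())
```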
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons well beyond those expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models, and their inherent uncertainty, with incoming noisy observations. The key idea in our work is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We combine the physics-agnostic, data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data-assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting, as well as to learn effective model closure in multi-scale systems.
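A minimal sketch of the two building blocks — a random-feature-map forecast model fitted by ridge regression, and a stochastic ensemble Kalman filter analysis step — is given below. The paper trains the model sequentially from incoming observations; here the training data, dimensions, and noise levels are placeholders.

```python
# Sketch of (1) a random-feature-map surrogate x_{n+1} ~ W_out tanh(W_in x_n + b)
# trained by ridge regression, and (2) a stochastic EnKF analysis step that
# blends the surrogate forecast with a noisy observation.
import numpy as np

rng = np.random.default_rng(2)
d, D = 3, 300                                   # state dim, feature dim

W_in = 0.1 * rng.standard_normal((D, d))        # fixed random weights
b = 0.4 * rng.standard_normal(D)
phi = lambda x: np.tanh(W_in @ x + b)

# Training pairs (x_n, x_{n+1}) would come from past noisy observations;
# a random trajectory serves as a placeholder here.
X = rng.standard_normal((d, 500))
Y = np.roll(X, -1, axis=1)
Phi = np.stack([phi(x) for x in X.T], axis=1)   # D x N feature matrix
beta = 1e-4                                     # ridge regularization
W_out = Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + beta * np.eye(D))

forecast = lambda x: W_out @ phi(x)

def enkf_step(ensemble, obs, obs_std):
    """One stochastic EnKF analysis with identity observation operator."""
    E = np.stack([forecast(m) for m in ensemble.T], axis=1)  # forecast ensemble
    A = E - E.mean(axis=1, keepdims=True)
    P = A @ A.T / (E.shape[1] - 1)                           # sample covariance
    K = P @ np.linalg.inv(P + obs_std**2 * np.eye(d))        # Kalman gain
    perturbed = obs[:, None] + obs_std * rng.standard_normal(E.shape)
    return E + K @ (perturbed - E)

ens = rng.standard_normal((d, 20))
ens = enkf_step(ens, obs=np.zeros(d), obs_std=0.1)
```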
As a nonlocal extension of continuum mechanics, peridynamics has been widely and effectively applied in different fields where discontinuities in the field variables arise from an initially continuous body. An important component of the constitutive model in peridynamics is the influence function, which weights the contribution of all the interactions over a nonlocal region surrounding a point of interest. Recent work has shown that in solid mechanics the influence function has a strong relationship with the heterogeneity of a material's micro-structure. However, determining an accurate influence function analytically from a given micro-structure typically requires lengthy derivations and complex mathematical models. To avoid these complexities, the goal of this paper is to develop a data-driven regression algorithm to find the optimal bond-based peridynamic model to describe the macro-scale deformation of a linear elastic medium with periodic heterogeneity. We generate macro-scale deformation training data by averaging over periodic micro-structure unit cells and add a physical energy constraint, representing the homogenized elastic modulus of the micro-structure, to the regression algorithm. We demonstrate this scheme for examples of one- and two-dimensional linear elastodynamics and show that the energy constraint improves the accuracy of the resulting peridynamic model.
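The constrained-regression idea can be illustrated schematically: fit bond weights by least squares against macro-scale deformation data while enforcing a linear energy constraint that pins the homogenized modulus. The matrices below are synthetic stand-ins for the averaged unit-cell data used in the paper.

```python
# Sketch: fit non-local bond weights w so that per-bond contributions
# (columns of A) reproduce observed responses d, subject to the energy
# constraint c @ w = E0 fixing the homogenized elastic modulus.
# A, d, c, and E0 are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_bonds = 200, 8

A = rng.standard_normal((n_samples, n_bonds))   # per-bond contributions
w_true = np.exp(-np.arange(n_bonds) / 2.0)      # decaying influence function
d = A @ w_true + 0.01 * rng.standard_normal(n_samples)

c = np.ones(n_bonds)                            # energy-constraint weights
E0 = c @ w_true                                 # target homogenized modulus

# Solve the KKT system of  min ||A w - d||^2  s.t.  c @ w = E0:
#   [2 A^T A  c] [w]   [2 A^T d]
#   [c^T      0] [l] = [E0     ]
KKT = np.block([[2 * A.T @ A, c[:, None]],
                [c[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([2 * A.T @ d, [E0]])
w = np.linalg.solve(KKT, rhs)[:n_bonds]
print("recovered bond weights:", np.round(w, 3))
```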
In statistical data assimilation (SDA) and supervised machine learning (ML), we wish to transfer information from observations to a model of the processes underlying those observations. For SDA, the model consists of a set of differential equations that describe the dynamics of a physical system. For ML, the model is usually constructed using other strategies. In this paper, we develop a systematic formulation based on Monte Carlo sampling to achieve such information transfer. Following the derivation of an appropriate target distribution, we present the formulation based on the standard Metropolis-Hastings (MH) procedure and the Hamiltonian Monte Carlo (HMC) method for performing the high-dimensional integrals that appear. To the extensive literature on MH and HMC, we add (1) an annealing method using a hyperparameter that governs the precision of the model, to identify and explore the highest-probability regions of phase space that dominate those integrals, and (2) a strategy for initializing the state-space search. The efficacy of the proposed formulation is demonstrated using a nonlinear dynamical model with chaotic solutions widely used in geophysics.
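A minimal sketch of leapfrog HMC with an annealed precision hyperparameter, loosely mirroring point (1) of the abstract, is shown below. The toy target, step size, and annealing schedule are illustrative assumptions, not the geophysical model or settings from the paper.

```python
# Leapfrog HMC on a toy "model error" target whose sharpness is set by a
# precision hyperparameter Rf; Rf is raised in stages so sampling starts
# from a loose target and gradually homes in on high-probability regions.
import numpy as np

rng = np.random.default_rng(4)

def neg_log_post(x, Rf):
    return 0.5 * Rf * np.sum((x**2 - 1.0) ** 2)   # toy multi-modal target

def grad(x, Rf):
    return 2.0 * Rf * x * (x**2 - 1.0)

def hmc_step(x, Rf, eps=0.02, n_leap=25):
    p0 = rng.standard_normal(x.shape)
    x_new = x.copy()
    p = p0 - 0.5 * eps * grad(x, Rf)              # half momentum step
    for i in range(n_leap):
        x_new = x_new + eps * p                   # full position step
        g = grad(x_new, Rf)
        p = p - eps * g if i < n_leap - 1 else p - 0.5 * eps * g
    dH = (neg_log_post(x_new, Rf) - neg_log_post(x, Rf)
          + 0.5 * (p @ p - p0 @ p0))
    return x_new if np.log(rng.random()) < -dH else x   # Metropolis accept

x = rng.standard_normal(5)                        # initial state-space guess
for Rf in [0.1, 1.0, 10.0, 100.0]:                # annealing schedule
    for _ in range(500):
        x = hmc_step(x, Rf)
    print(f"Rf={Rf:6.1f}  neg-log-post={neg_log_post(x, Rf):.3f}")
```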
Alexander Glazov 2017
A method for correcting for detector smearing effects using machine learning techniques is presented. Compared to standard approaches, the method can use more than one reconstructed variable to infer the value of the unsmeared quantity on an event-by-event basis. The method is implemented using a sequential neural network with categorical cross-entropy as the loss function. It is tested on a toy example and is shown to satisfy basic closure tests. Possible applications of the method to the analysis of data from high-energy physics experiments are discussed.
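Since the abstract specifies a sequential network trained with categorical cross-entropy on multiple reconstructed variables, a toy sketch of that setup could look like the following. The smearing model, binning, and architecture are assumptions, not taken from the paper.

```python
# Toy sketch of the unfolding idea: bin the true (unsmeared) quantity into
# categories, then train a small sequential network with categorical
# cross-entropy to predict the truth bin from several smeared variables.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_events, n_bins = 20000, 10

truth = torch.rand(n_events)                            # true quantity in [0, 1)
labels = (truth * n_bins).long().clamp(max=n_bins - 1)  # truth bin index

# Two reconstructed variables, each a differently smeared view of truth.
reco = torch.stack([truth + 0.05 * torch.randn(n_events),
                    truth + 0.10 * torch.randn(n_events)], dim=1)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, n_bins))              # logits over truth bins
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()                         # categorical cross-entropy

for epoch in range(20):                                 # full-batch training
    opt.zero_grad()
    loss = loss_fn(net(reco), labels)
    loss.backward()
    opt.step()

# Per-event posterior over unsmeared bins; basic closure check: predicted
# bin populations should roughly reproduce the true (uniform) spectrum.
probs = torch.softmax(net(reco), dim=1)
print("unfolded bin fractions:", probs.mean(dim=0).detach().numpy().round(3))
```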
