
Physics-Informed Machine Learning Models for Predicting the Progress of Reactive-Mixing

Added by Maruti Mudunuru
Publication date: 2019
Language: English





This paper presents a physics-informed machine learning (ML) framework to construct reduced-order models (ROMs) for reactive-transport quantities of interest (QoIs) based on high-fidelity numerical simulations. QoIs include species decay, product yield, and degree of mixing. The ROMs for QoIs are applied to quantify and understand how the chemical species evolve over time. First, high-resolution datasets for constructing ROMs are generated by solving anisotropic reaction-diffusion equations using a non-negative finite element formulation for different input parameters. The non-negative formulation ensures that species concentrations remain non-negative (which is needed for computing QoIs) on coarse computational grids, even under high anisotropy. The reactive-mixing model input parameters are a time-scale associated with flipping of velocity, a spatial-scale controlling small/large vortex structures of velocity, a perturbation parameter of the vortex-based velocity, anisotropic dispersion strength/contrast, and molecular diffusion. Second, random forests, the F-test, and the mutual information criterion are used to evaluate the importance of model inputs/features with respect to QoIs. Third, Support Vector Machines (SVM) and Support Vector Regression (SVR) are used to construct ROMs based on the model inputs. The SVR-ROMs are then used to predict the scaling of QoIs. Qualitatively, the SVR-ROMs are able to describe the trends observed in the scaling law associated with QoIs. Fourth, the dependence of the scaling-law exponents on model inputs/features is evaluated using $k$-means clustering. Finally, in terms of computational cost, the proposed SVM-ROMs and SVR-ROMs are $\mathcal{O}(10^7)$ times faster than running a high-fidelity numerical simulation for evaluating QoIs.
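The $k$-means step in the fourth stage can be sketched in a few lines of plain Python; the exponent values, cluster count, and function name below are illustrative assumptions, not data from the paper.

```python
import random

def kmeans_1d(values, k, iters=100, seed=0):
    """Minimal 1-D k-means: cluster scalar scaling-law exponents."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each exponent to its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        # Recompute each center as its group's mean (keep old center if empty).
        new = [sum(g) / len(g) if g else centers[j] for j, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return sorted(centers)

# Hypothetical exponents from two distinct mixing regimes (made-up values).
exponents = [0.48, 0.51, 0.50, 1.02, 0.98, 1.01]
print(kmeans_1d(exponents, k=2))
```

Because the two regimes are well separated, any initialization converges to one center near each group mean.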





Accurate predictions of reactive mixing are critical for many Earth and environmental science problems. To investigate mixing dynamics over time under different scenarios, a high-fidelity, finite-element-based numerical model is built to solve the fast, irreversible bimolecular reaction-diffusion equations and simulate a range of reactive-mixing scenarios. A total of 2,315 simulations are performed using different sets of model input parameters comprising various spatial scales of vortex structures in the velocity field, time-scales associated with velocity oscillations, the perturbation parameter for the vortex-based velocity, anisotropic dispersion contrast, and molecular diffusion. Outputs comprise concentration profiles of the reactants and products. The inputs and outputs of these simulations are concatenated into feature and label matrices, respectively, to train 20 different machine learning (ML) emulators to approximate system behavior. The 20 emulators, based on linear methods, Bayesian methods, ensemble learning methods, and a multilayer perceptron (MLP), are then compared. The ML emulators are specifically trained to classify the state of mixing and predict three quantities of interest (QoIs) characterizing species production, decay, and degree of mixing. Linear classifiers and regressors fail to reproduce the QoIs; however, ensemble methods (classifiers and regressors) and the MLP accurately classify the state of reactive mixing and predict the QoIs. Among the ensemble methods, random forest and decision-tree-based AdaBoost faithfully predict the QoIs. At run time, trained ML emulators are $\approx 10^5$ times faster than the high-fidelity numerical simulations. The speed and accuracy of the ensemble and MLP models facilitate uncertainty quantification, which typically requires thousands of model runs to estimate the uncertainty bounds on the QoIs.
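The finding that linear regressors fail on these QoIs while tree-based methods succeed can be illustrated on a toy nonlinear target; the decay curve, the median-split tree, and all names below are assumptions for demonstration, not the paper's emulators.

```python
import math

def linear_fit(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def tree_fit(pairs, depth):
    """Piecewise-constant fit via median splits: a bare-bones stand-in for a
    regression tree (real trees choose splits by error reduction)."""
    ys = [y for _, y in pairs]
    mean = sum(ys) / len(ys)
    xs = sorted(x for x, _ in pairs)
    split = xs[len(xs) // 2]
    left = [(x, y) for x, y in pairs if x < split]
    right = [(x, y) for x, y in pairs if x >= split]
    if depth == 0 or not left or not right:
        return lambda x: mean
    fl, fr = tree_fit(left, depth - 1), tree_fit(right, depth - 1)
    return lambda x: fl(x) if x < split else fr(x)

def mse(f, pairs):
    return sum((f(x) - y) ** 2 for x, y in pairs) / len(pairs)

# Illustrative nonlinear QoI: a decay-like curve no straight line can track.
data = [(i / 63, math.exp(-4 * i / 63)) for i in range(64)]
lin = linear_fit([x for x, _ in data], [y for _, y in data])
tre = tree_fit(data, depth=5)
print("linear MSE:", round(mse(lin, data), 4), "tree MSE:", round(mse(tre, data), 5))
```

The piecewise model's error is orders of magnitude below the straight line's, mirroring the qualitative gap the abstract reports between linear and tree-based emulators.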
In this work we present a new physics-informed machine learning model that can be used to analyze kinematic data from an instrumented mouthguard and detect impacts to the head. Monitoring player impacts is vitally important to understanding and protecting against injuries such as concussion. Typically, a combination of video analysis and sensor data is used to ascertain that recorded events are true impacts and not false positives. In fact, due to the nature of using wearable devices in sports, false positives vastly outnumber true positives, and manual video analysis is time-consuming. This imbalance leads traditional machine learning approaches to exhibit poor performance in both detecting true positives and preventing false negatives. Here, we show that by simulating head impacts numerically using a standard finite element head-neck model, a large dataset of synthetic impacts can be created to augment the gathered, verified impact data from mouthguards. This combined physics-informed machine learning impact detector reported improved performance on test datasets compared to traditional impact detectors, with negative and positive predictive values of 88% and 87%, respectively. Consequently, this model reported the best results to date for an impact detection algorithm for American Football, achieving an F1 score of 0.95. In addition, this physics-informed machine learning impact detector was able to accurately detect true and false impacts from a test dataset at rates of 90% and 100%, respectively, relative to a purely manual video analysis workflow. Saving over 12 hours of manual video analysis for a modest dataset, at an overall accuracy of 92%, these results indicate that this model could be used in place of, or alongside, traditional video analysis to allow for larger-scale and more efficient impact detection in sports such as American Football.
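The reported detector scores reduce to standard confusion-matrix ratios; the counts below are made up to roughly mirror the quoted 87%/88% figures and are not the paper's data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Confusion-matrix summary for a binary impact detector."""
    ppv = tp / (tp + fp)                 # positive predictive value (precision)
    npv = tn / (tn + fn)                 # negative predictive value
    recall = tp / (tp + fn)              # sensitivity
    f1 = 2 * ppv * recall / (ppv + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"ppv": ppv, "npv": npv, "f1": f1, "accuracy": accuracy}

# Illustrative counts for a labeled test set (made up for demonstration).
m = detection_metrics(tp=87, fp=13, tn=88, fn=12)
print({k: round(v, 3) for k, v in m.items()})
```

With these counts, PPV is 87/100 = 0.87 and NPV is 88/100 = 0.88, showing how the two quoted percentages arise from the same confusion matrix.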
Analysis of reaction-diffusion simulations requires a large number of independent model runs. For each high-fidelity simulation, inputs are varied and the predicted mixing behavior is represented by changes in species concentration. The analyst must then discern how the model inputs impact the mixing process. This task is challenging and typically involves interpretation of large model outputs. However, the task can be automated and substantially simplified by applying Machine Learning (ML) methods. In this paper, we present an application of an unsupervised ML method (called NTFk) using Non-negative Tensor Factorization (NTF) coupled with a custom clustering procedure based on k-means to reveal hidden features in product concentration. An attractive aspect of the proposed ML method is that it ensures the extracted features are non-negative, which is important for obtaining a meaningful deconstruction of the mixing processes. The ML method is applied to a large set of high-resolution FEM simulations representing reaction-diffusion processes in perturbed vortex-based velocity fields. The applied FEM ensures that species concentrations are always non-negative. The simulated reaction is a fast, irreversible bimolecular reaction. The reaction-diffusion model input parameters that control mixing include properties of the velocity field, anisotropic dispersion, and molecular diffusion. We demonstrate the applicability of the ML method to produce a meaningful deconstruction of model outputs and to discriminate between different physical processes impacting the reactants, their mixing, and the spatial distribution of the product. The presented ML analysis allowed us to identify additive features that characterize mixing behavior.
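NTFk factorizes tensors; as a minimal matrix analogue, Lee-Seung multiplicative-update NMF shows how non-negativity of the extracted features is preserved by construction. The matrix sizes, rank, and function names below are illustrative, not the NTFk implementation.

```python
import random

def matmul(A, B):
    """Dense matrix product on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(X, rank, iters=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates: X ~ W H with W, H >= 0 throughout,
    because each update multiplies by a ratio of non-negative quantities."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, X), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(X, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# Synthetic non-negative "concentration" matrix built from 2 hidden features.
rng = random.Random(1)
A = [[rng.random() for _ in range(2)] for _ in range(6)]
B = [[rng.random() for _ in range(8)] for _ in range(2)]
X = matmul(A, B)
W, H = nmf(X, rank=2)
R = matmul(W, H)
err = (sum((X[i][j] - R[i][j]) ** 2 for i in range(6) for j in range(8))
       / sum(X[i][j] ** 2 for i in range(6) for j in range(8))) ** 0.5
print("relative reconstruction error:", round(err, 4))
```

The factors W and H stay element-wise non-negative at every iteration, which is the property the abstract highlights as essential for physically interpretable features.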
Sen Liu (2020)
Machine learning (ML) is shown to predict new alloys and their performance in a high-dimensional, multiple-target-property design space that considers chemistry, multi-step processing routes, and characterization methodology variations. A physics-informed feature engineering approach is shown to enable otherwise poorly performing ML models to perform well with the same data. Specifically, previously engineered elemental features based on alloy chemistries are combined with newly engineered heat-treatment process features. The new features result from transforming the heat-treatment parameter data, as originally recorded, using nonlinear mathematical relationships known to describe the thermodynamics and kinetics of phase transformations in alloys. The ability of the ML model to be used for predictive design is validated using blind predictions. Composition-process-property relationships for the thermal hysteresis of shape memory alloys (SMAs) with complex microstructures, created via multiple melting-homogenization-solutionization-precipitation processing stage variations, are captured, in addition to the mean transformation temperatures of the SMAs. The quantitative models of hysteresis exhibited by such highly processed alloys demonstrate the ability of ML models to design for physical complexities that have challenged physics-based modeling approaches for decades.
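A generic example of such a physics-informed transform is Arrhenius-weighted time; the activation energy, function name, and temperatures below are assumptions for illustration, and the paper's actual engineered features are not reproduced here.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def diffusion_feature(time_s, temp_K, Q=150e3):
    """Arrhenius-weighted time: a generic physics-informed transform of a
    heat-treatment (time, temperature) pair. Q is an assumed activation
    energy in J/mol; it is not a value taken from the paper."""
    return time_s * math.exp(-Q / (R_GAS * temp_K))

# Raw (time, temperature) inputs vs. the engineered feature: treatments with
# the same duration but different temperatures become distinguishable on a
# thermodynamically meaningful scale, which a linear model can exploit.
for T in (900.0, 1000.0, 1100.0):
    print(T, diffusion_feature(3600.0, T))
```

The point of the transform is that the feature varies exponentially with temperature, encoding the kinetics of phase transformation directly into the model input rather than asking the ML model to learn the nonlinearity from sparse data.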
In this paper, five different approaches for reduced-order modeling of brittle fracture in geomaterials, specifically concrete, are presented and compared. Four of the five methods rely on machine learning (ML) algorithms to approximate important aspects of the brittle fracture problem. In addition to the ML algorithms, each method incorporates different physics-based assumptions in order to reduce the computational complexity while maintaining the physics as much as possible. This work specifically focuses on using the ML approaches to model a 2D concrete sample under low-strain-rate pure tensile loading conditions with 20 preexisting cracks present. A high-fidelity finite element-discrete element model is used to produce both a training dataset of 150 simulations and an additional 35 simulations for validation. Results from the ML approaches are directly compared against the results from the high-fidelity model. Strengths and weaknesses of each approach are discussed, and the most important conclusion is that a combination of physics-informed and data-driven features is necessary for emulating the physics of crack propagation, interaction, and coalescence. All of the models presented here have runtimes that are orders of magnitude faster than the original high-fidelity model and pave the way for developing accurate reduced-order models that could be used to inform larger length-scale models with important sub-scale physics that often cannot be accounted for due to computational cost.
