
Predictive Capability Maturity Quantification using Bayesian Network

Published by: Linyu Lin
Publication date: 2020
Language: English





In nuclear engineering, modeling and simulation (M&S) is widely applied to support risk-informed safety analysis. Because nuclear safety analysis carries important implications, a convincing validation process is needed to assess simulation adequacy, i.e., the degree to which M&S tools can adequately represent the system quantities of interest. However, due to data gaps, validation becomes a decision-making process under uncertainty: expert knowledge and judgments are required to collect, choose, characterize, and integrate evidence toward the final adequacy decision. Yet in validation frameworks such as CSAU (Code Scaling, Applicability, and Uncertainty; NUREG/CR-5249) and EMDAP (Evaluation Model Development and Assessment Process; RG 1.203), this decision-making process is largely implicit and obscure. When scenarios are complex, knowledge biases and unreliable judgments can be overlooked, which increases uncertainty in the simulation-adequacy result and the corresponding risks. A framework is therefore required to formalize the decision-making process for simulation adequacy in a practical, transparent, and consistent manner. This paper proposes Predictive Capability Maturity Quantification using Bayesian Network (PCMQBN), a quantified framework for assessing simulation adequacy based on information collected from validation activities. A case study evaluates the adequacy of a Smoothed Particle Hydrodynamics simulation in predicting the hydrodynamic forces on static structures during an external-flooding scenario. Compared with a qualitative and implicit adequacy assessment, PCMQBN improves confidence in the simulation-adequacy result and reduces the expected loss in the risk-informed safety analysis.
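The abstract gives no implementation, but the core idea — propagating evidence from validation activities through a Bayesian network to a posterior belief in simulation adequacy — can be sketched with a tiny hand-rolled discrete network. The attribute node names, priors, and conditional probability table below are hypothetical placeholders for illustration, not values from PCMQBN:

```python
import itertools

# Hypothetical attribute nodes (names are illustrative, not from the paper):
# each is binary, 1 = "satisfactory", 0 = "unsatisfactory".
priors = {
    "data_quality": 0.8,        # P(node = 1) before observing validation evidence
    "code_verification": 0.9,
    "scenario_coverage": 0.6,
}

def p_adequate(dq, cv, sc):
    """Illustrative CPT: P(adequacy = 1 | attributes).
    The real PCMQBN tables would come from expert elicitation."""
    return 0.05 + 0.35 * dq + 0.30 * cv + 0.30 * sc

def posterior_adequacy(evidence):
    """P(adequacy = 1 | evidence), marginalizing unobserved attributes
    by full enumeration (fine for a network this small)."""
    total = 0.0
    for dq, cv, sc in itertools.product([0, 1], repeat=3):
        vals = {"data_quality": dq, "code_verification": cv, "scenario_coverage": sc}
        # Skip assignments that contradict the observed evidence.
        if any(evidence.get(k, v) != v for k, v in vals.items()):
            continue
        w = 1.0
        for k, v in vals.items():
            if k not in evidence:
                w *= priors[k] if v == 1 else 1 - priors[k]
        total += w * p_adequate(dq, cv, sc)
    return total

# Validation confirmed code verification and coverage; data quality unknown.
print(posterior_adequacy({"code_verification": 1, "scenario_coverage": 1}))  # 0.93
```

The value of the network form is that each adequacy attribute, and the expert judgment behind its table, is explicit and auditable rather than folded implicitly into a final verdict.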




Read also

This paper develops a Bayesian network-based method for the calibration of multi-physics models, integrating various sources of uncertainty with information from computational models and experimental data. We adopt the Kennedy and O'Hagan (KOH) framework for model calibration under uncertainty, and develop extensions to multi-physics models and various scenarios of available data. Both aleatoric uncertainty (due to natural variability) and epistemic uncertainty (due to lack of information, including data uncertainty and model uncertainty) are accounted for in the calibration process. Challenging aspects of Bayesian calibration for multi-physics models are investigated, including: (1) calibration with different forms of experimental data (e.g., interval data and time series data), (2) determination of the identifiability of model parameters when the analytical expression of the model is known or unknown, (3) calibration of multiple physics models sharing common parameters, which enables efficient use of data especially when the experimental resources are limited. A first-order Taylor series expansion-based method is proposed to determine which model parameters are identifiable. Following the KOH framework, a probabilistic discrepancy function is estimated and added to the prediction of the calibrated model, attempting to account for model uncertainty. This discrepancy function is modeled as a Gaussian process when sufficient data are available for multiple model input combinations, and is modeled as a random variable when the available data are limited. The overall approach is illustrated using two application examples related to microelectromechanical system (MEMS) devices: (1) calibration of a dielectric charging model with time-series data, and (2) calibration of two physics models (pull-in voltage and creep) using measurements of different physical quantities in different devices.
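As the abstract notes, when available data are limited the KOH discrepancy reduces to a random variable. The sketch below illustrates that special case with a scalar toy model, Gaussian noise, and a random-walk Metropolis sampler; the model form, priors, and all numbers are illustrative assumptions, not the paper's MEMS application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "code": y = theta * sin(x); the synthetic experiment adds a constant
# model bias (0.3) and measurement noise (sd 0.1).
def model(x, theta):
    return theta * np.sin(x)

x_obs = np.linspace(0.1, 3.0, 15)
y_obs = model(x_obs, 2.0) + 0.3 + rng.normal(0.0, 0.1, x_obs.size)

def log_post(theta, delta):
    """Gaussian likelihood; uniform(0, 10) prior on theta via the bounds,
    N(0, 1) prior on the random-variable discrepancy delta."""
    if not (0.0 < theta < 10.0):
        return -np.inf
    resid = y_obs - model(x_obs, theta) - delta
    return -0.5 * np.sum((resid / 0.1) ** 2) - 0.5 * (delta / 1.0) ** 2

# Random-walk Metropolis over (theta, delta).
samples, state = [], np.array([1.0, 0.0])
lp = log_post(*state)
for _ in range(20000):
    prop = state + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())

theta_s, delta_s = np.array(samples[5000:]).T  # discard burn-in
print(f"theta ~ {theta_s.mean():.2f} +/- {theta_s.std():.2f}")  # near 2.0
print(f"delta ~ {delta_s.mean():.2f} +/- {delta_s.std():.2f}")  # near 0.3
```

Calibrating theta and delta jointly is what keeps the parameter estimate from absorbing the model bias, which is the point of the KOH decomposition.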
Using the latest numerical simulations of rotating stellar core collapse, we present a Bayesian framework to extract the physical information encoded in noisy gravitational wave signals. We fit Bayesian principal component regression models with known and unknown signal arrival times to reconstruct gravitational wave signals, and subsequently fit known astrophysical parameters on the posterior means of the principal component coefficients using a linear model. We predict the ratio of rotational kinetic energy to gravitational energy of the inner core at bounce by sampling from the posterior predictive distribution, and find that these predictions are generally very close to the true parameter values, with $90\%$ credible intervals $\sim 0.04$ and $\sim 0.06$ wide for the known and unknown arrival time models respectively. Two supervised machine learning methods are implemented to classify precollapse differential rotation, and we find that these methods discriminate rapidly rotating progenitors particularly well. We also introduce a constrained optimization approach to model selection to find an optimal number of principal components in the signal reconstruction step. Using this approach, we select 14 principal components as the most parsimonious model.
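The pipeline — principal components of a waveform catalog, projection of an observed signal, then a linear model from PC coefficients to a physical parameter — can be sketched as follows. Plain least squares stands in for the Bayesian principal component regression, and the catalog, the parameter `beta`, and the noise level are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalog: 100 waveforms of 256 samples, each tagged with a
# rotation parameter beta (stand-ins for the paper's simulation catalog).
t = np.linspace(0.0, 1.0, 256)
beta = rng.uniform(0.0, 0.2, 100)
catalog = np.outer(beta, np.sin(40 * t)) + np.outer(np.sqrt(beta), np.exp(-5 * t))

# Step 1: principal components of the mean-centered catalog via SVD.
mean = catalog.mean(axis=0)
U, S, Vt = np.linalg.svd(catalog - mean, full_matrices=False)
k = 14                      # PC count as selected in the paper (toy data has lower rank)
basis = Vt[:k]              # (k, 256)

# Step 2: project a noisy observed signal onto the basis; these least-squares
# coefficients stand in for the posterior means of the Bayesian PCR fit.
obs = catalog[0] + rng.normal(0.0, 0.05, 256)
coef_obs = basis @ (obs - mean)

# Step 3: linear model from PC coefficients to the physical parameter.
coeffs = (catalog - mean) @ basis.T               # (100, k) training coefficients
X = np.column_stack([np.ones(100), coeffs])
w, *_ = np.linalg.lstsq(X, beta, rcond=None)
print("predicted beta:", np.concatenate([[1.0], coef_obs]) @ w, "true:", beta[0])
```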
Ziyu Xie, Farah Alsafadi, Xu Wu (2021)
The Best Estimate plus Uncertainty (BEPU) approach for nuclear systems modeling and simulation requires that the prediction uncertainty be quantified in order to prove that the investigated design stays within acceptance criteria. A rigorous Uncertainty Quantification (UQ) process should simultaneously consider multiple sources of quantifiable uncertainties: (1) parameter uncertainty due to randomness or lack of knowledge; (2) experimental uncertainty due to measurement noise; (3) model uncertainty caused by missing/incomplete physics and numerical approximation errors; and (4) code uncertainty when surrogate models are used. In this paper, we propose a comprehensive framework to integrate results from inverse UQ and quantitative validation to provide robust predictions so that all these sources of uncertainties can be taken into consideration. Inverse UQ quantifies the parameter uncertainties based on experimental data while taking into account uncertainties from model, code and measurement. In the validation step, we use a quantitative validation metric based on Bayesian hypothesis testing. The resulting metric, called the Bayes factor, is then used to form weighting factors to combine the prior and posterior knowledge of the parameter uncertainties in a Bayesian model averaging process. In this way, model predictions will be able to integrate the results from inverse UQ and validation to account for all available sources of uncertainties. This framework is a step towards addressing the ANS Nuclear Grand Challenge on Simulation/Experimentation by bridging the gap between models and data.
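A minimal sketch of the Bayes-factor weighting step: marginal likelihoods of validation data are estimated by Monte Carlo under prior-based and posterior-based parameter distributions, and the resulting weight mixes the two in a model-averaging draw. The linear response model and all distributions are illustrative stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative prior and inverse-UQ posterior for one model parameter.
prior_mu, prior_sd = 1.0, 0.5
post_mu, post_sd = 1.2, 0.2

# Hypothetical validation data; a linear toy code stands in for the
# simulation: response = 3 * theta + noise (sd 0.1).
y_val = 3.0 * 1.15 + rng.normal(0.0, 0.1, 20)

def marginal_likelihood(mu, sd, n=20000):
    """Monte Carlo estimate of p(data | parameter distribution)."""
    theta = rng.normal(mu, sd, n)
    pred = 3.0 * theta[:, None]                              # (n, 1) -> broadcast
    loglik = -0.5 * ((y_val - pred) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
    return np.exp(loglik.sum(axis=1)).mean()

# Bayes factor of posterior-based vs prior-based prediction, and the
# corresponding Bayesian-model-averaging weight.
B = marginal_likelihood(post_mu, post_sd) / marginal_likelihood(prior_mu, prior_sd)
w = B / (1.0 + B)

# Model-averaged parameter sample mixing posterior and prior knowledge.
n = 10000
pick = rng.uniform(size=n) < w
theta_bma = np.where(pick, rng.normal(post_mu, post_sd, n),
                     rng.normal(prior_mu, prior_sd, n))
print(f"Bayes factor {B:.2f}, weight {w:.2f}, averaged theta {theta_bma.mean():.2f}")
```

When the validation data strongly favor the posterior-based prediction, B grows and the averaged distribution leans on the inverse-UQ result; weak evidence pulls it back toward the prior.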
Deep learning is a rapidly evolving technology with the potential to significantly improve the physics reach of collider experiments. In this study we developed a novel vertex-finding algorithm for future lepton colliders such as the International Linear Collider. We deploy two networks: a simple fully connected network that looks for vertex seeds from track pairs, and a customized Recurrent Neural Network with an attention mechanism and an encoder-decoder structure that associates tracks to the vertex seeds. The performance of the vertex finder is compared with the standard ILC reconstruction algorithm.
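A structural sketch of the two-network design in PyTorch; the layer sizes, input features, and the use of `nn.MultiheadAttention` in place of the paper's customized attention mechanism are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class SeedFinder(nn.Module):
    """Fully connected classifier: does a track pair form a vertex seed?"""
    def __init__(self, n_pair_features=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pair_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, pair_features):            # (batch, n_pair_features)
        return self.net(pair_features)           # (batch, 1) seed probability

class TrackAssociator(nn.Module):
    """Encoder-decoder with attention: a GRU encodes the track list, a
    seed-derived query attends over it, and the attention weights serve
    as soft track-to-seed association scores."""
    def __init__(self, n_track_features=6, d_model=32):
        super().__init__()
        self.encoder = nn.GRU(n_track_features, d_model, batch_first=True)
        self.seed_proj = nn.Linear(2 * n_track_features, d_model)
        self.attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, tracks, seed_pair):        # (batch, n_tracks, f), (batch, 2*f)
        encoded, _ = self.encoder(tracks)        # (batch, n_tracks, d_model)
        query = self.seed_proj(seed_pair).unsqueeze(1)    # (batch, 1, d_model)
        _, weights = self.attention(query, encoded, encoded)
        return weights.squeeze(1)                # (batch, n_tracks) association scores

pairs = torch.randn(16, 8)
tracks = torch.randn(16, 20, 6)
seed = torch.randn(16, 12)
print(SeedFinder()(pairs).shape, TrackAssociator()(tracks, seed).shape)
```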
Kenan Šehic (2020)
In offshore engineering design, nonlinear wave models are often used to propagate stochastic waves from an input boundary to the location of an offshore structure. Each wave realization is typically characterized by a high-dimensional input time series, and a reliable determination of the extreme events is associated with substantial computational effort. As the sea depth decreases, extreme events become more difficult to evaluate. Here we construct a low-dimensional characterization of the candidate input time series to circumvent the search for extreme wave events in a high-dimensional input probability space. Each wave input is represented by a unique low-dimensional set of parameters for which standard surrogate approximations, such as Gaussian processes, can estimate the short-term exceedance probability efficiently and accurately. We demonstrate the advantages of the new approach with a simple shallow-water wave model based on the Korteweg-de Vries equation, for which we can provide an accurate reference solution based on the simple Monte Carlo method. We furthermore apply the method to a fully nonlinear wave model for wave propagation over a sloping seabed. The results demonstrate that the Gaussian process can accurately learn the tail of the heavy-tailed distribution of the maximum wave crest elevation based on only $1.7\%$ of the required Monte Carlo evaluations.
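A minimal sketch of the surrogate step, assuming a hypothetical two-parameter characterization of each wave input and a toy crest-elevation response: a scikit-learn Gaussian process is fit to a few model runs, and the short-term exceedance probability is estimated by averaging the GP's predictive tail probability over Monte Carlo samples of the inputs:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Hypothetical low-dimensional characterization: two summary parameters per
# input wave series, with a toy maximum-crest-elevation response standing in
# for the expensive nonlinear wave model.
X = rng.uniform(0.0, 1.0, (60, 2))
crest = 1.5 * X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=60)

# Gaussian-process surrogate for crest elevation over the reduced inputs.
gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(X, crest)

# Short-term exceedance probability P(crest > threshold): average the GP's
# Gaussian predictive tail probability over sampled input parameters,
# replacing the solver with the cheap surrogate.
X_mc = rng.uniform(0.0, 1.0, (100_000, 2))
mu, sd = gp.predict(X_mc, return_std=True)
threshold = 1.6
p_exceed = norm.sf(threshold, loc=mu, scale=sd).mean()
print(f"estimated exceedance probability: {p_exceed:.4f}")
```

Only the 60 training runs require the full wave model; the 100,000-sample tail estimate costs essentially nothing, which is where the reported savings come from.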
