
Quantitative model validation techniques: new insights

Submitted by You Ling
Publication date: 2012
Language: English





This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing, the reliability-based method, and the area metric-based method can account for the existence of directional bias, where the mean predictions of a numerical model may be consistently below or above the corresponding experimental observations. It is also found that under some specific conditions, the Bayes factor metric in Bayesian equality hypothesis testing and the reliability-based metric can both be mathematically related to the p-value metric in classical hypothesis testing. Numerical studies are conducted to apply the above validation methods to gas damping prediction for radio frequency (RF) microelectromechanical system (MEMS) switches. The model of interest is a general polynomial chaos (gPC) surrogate model constructed based on expensive runs of a physics-based simulation model, and validation data are collected from fully characterized experiments.
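The reliability-based and area metric-based methods are only named above, so a small illustration may help. The sketch below is my own minimal reconstruction, not the paper's implementation: it applies both metrics to synthetic Gaussian samples, and all numbers (means, spreads, sample sizes, tolerance) are illustrative assumptions.

```python
# A minimal sketch, not the paper's implementation: two of the validation
# metrics above, applied to synthetic Gaussian samples. All numbers here
# (means, spreads, sample sizes, tolerance) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

model_samples = rng.normal(loc=1.00, scale=0.10, size=10_000)  # stochastic model output
experiments = rng.normal(loc=1.05, scale=0.12, size=30)        # validation measurements

def area_metric(model, data, grid_size=2_000):
    """Area between the model CDF and the empirical data CDF."""
    x = np.linspace(min(model.min(), data.min()),
                    max(model.max(), data.max()), grid_size)
    F_model = np.searchsorted(np.sort(model), x, side="right") / model.size
    F_data = np.searchsorted(np.sort(data), x, side="right") / data.size
    # Riemann sum of |F_model - F_data| over the pooled support.
    return float(np.sum(np.abs(F_model - F_data)[:-1] * np.diff(x)))

def reliability_metric(model, data, tol=0.05, n_pairs=100_000):
    """Estimate P(|prediction - observation| < tol) by pairing random
    draws from the model output with random draws from the data."""
    diffs = rng.choice(model, n_pairs) - rng.choice(data, n_pairs)
    return float(np.mean(np.abs(diffs) < tol))

print(f"area metric:        {area_metric(model_samples, experiments):.4f}")
print(f"reliability metric: {reliability_metric(model_samples, experiments):.3f}")
```

Because the area metric integrates the absolute gap between the two CDFs, the assumed 0.05 offset between the model and data means contributes to it directly, which illustrates how this family of metrics registers the directional bias discussed above.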




Read also

We present a new high-resolution global renewable energy atlas (REatlas) that can be used to calculate customised hourly time series of wind and solar PV power generation. In this paper, the atlas is applied to produce 32-year-long hourly model wind power time series for Denmark for each historical and future year between 1980 and 2035. These are calibrated and validated against real production data from the period 2000 to 2010. The high number of years allows us to discuss how the characteristics of Danish wind power generation vary between individual weather years. As an example, the annual energy production is found to vary by $\pm 10\%$ from the average. Furthermore, we show how the production pattern changes as small onshore turbines are gradually replaced by large onshore and offshore turbines. Finally, we compare our wind power time series for 2020 to corresponding data from a handful of Danish energy system models. The aim is to illustrate how current differences in modelled wind power may result in significant differences in technical and economic model predictions. These include up to $15\%$ differences in installed capacity and $40\%$ differences in system reserve requirements.
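As a concrete illustration of the interannual-variability statement above, the following sketch computes annual energy production from hourly capacity-factor series and reports its spread around the multi-year mean. The data here are synthetic stand-ins generated on the spot (REatlas would supply the real hourly series), and the installed capacity figure is an arbitrary assumption.

```python
# A minimal sketch of the interannual-variability check described above,
# with synthetic hourly capacity factors in place of REatlas output.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2012)          # 32 historical weather years
hours_per_year = 8760

# Hypothetical hourly capacity factors per year (assumed distribution).
cf = rng.beta(a=2.0, b=5.0, size=(years.size, hours_per_year))

installed_mw = 3_000                   # assumed installed capacity
annual_gwh = cf.sum(axis=1) * installed_mw / 1_000

mean = annual_gwh.mean()
deviation = (annual_gwh - mean) / mean
print(f"mean annual production: {mean:,.0f} GWh")
print(f"year-to-year spread:    {deviation.min():+.1%} .. {deviation.max():+.1%}")
```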
Fa Wang, Li Li, Xuexiang Jin (2008)
Different from previous models based on scatter theory and random matrix theory, a new interpretation of the observed log-normal time-headway distribution of vehicles is presented in this paper. Inspired by the well-known Galton board, this model views a driver's velocity-adjustment process as analogous to the dynamics of a particle falling down the board and being deviated at decision points. A new car-following model based on this idea is proposed to reproduce the observed traffic flow phenomena. The agreement between the empirical observations and the simulation results suggests the soundness of this new approach.
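The Galton-board analogy has a simple quantitative core: if each decision point perturbs a driver's headway by a small independent multiplicative factor, the logarithm of the headway becomes a sum of many small terms, and the central limit theorem yields an approximately log-normal distribution. The sketch below demonstrates this mechanism with illustrative numbers of my own choosing, not the paper's calibrated model.

```python
# Minimal sketch of the Galton-board mechanism behind log-normal headways.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

n_vehicles, n_decisions = 50_000, 100
base_headway = 2.0  # seconds, assumed reference headway

# Small independent multiplicative perturbation at each decision point.
factors = 1.0 + rng.uniform(-0.05, 0.05, size=(n_vehicles, n_decisions))
headways = base_headway * factors.prod(axis=1)

# log(headway) is a sum of many small i.i.d. terms, hence ~normal;
# the headway itself is therefore approximately log-normal.
print(f"skewness of headways    : {skew(headways):+.3f}")           # positive: right-skewed
print(f"skewness of log-headways: {skew(np.log(headways)):+.3f}")   # ~0: symmetric
```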
This paper develops a Bayesian network-based method for the calibration of multi-physics models, integrating various sources of uncertainty with information from computational models and experimental data. We adopt the Kennedy and O'Hagan (KOH) framework for model calibration under uncertainty, and develop extensions to multi-physics models and various scenarios of available data. Both aleatoric uncertainty (due to natural variability) and epistemic uncertainty (due to lack of information, including data uncertainty and model uncertainty) are accounted for in the calibration process. Challenging aspects of Bayesian calibration for multi-physics models are investigated, including: (1) calibration with different forms of experimental data (e.g., interval data and time series data), (2) determination of the identifiability of model parameters when the analytical expression of the model is known or unknown, (3) calibration of multiple physics models sharing common parameters, which enables efficient use of data especially when experimental resources are limited. A first-order Taylor series expansion-based method is proposed to determine which model parameters are identifiable. Following the KOH framework, a probabilistic discrepancy function is estimated and added to the prediction of the calibrated model, attempting to account for model uncertainty. This discrepancy function is modeled as a Gaussian process when sufficient data are available for multiple model input combinations, and is modeled as a random variable when the available data are limited. The overall approach is illustrated using two application examples related to microelectromechanical system (MEMS) devices: (1) calibration of a dielectric charging model with time-series data, and (2) calibration of two physics models (pull-in voltage and creep) using measurements of different physical quantities in different devices.
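For the identifiability step, one plausible reading of a first-order Taylor approach (my reconstruction, not necessarily the paper's exact procedure) is to linearise the model around nominal parameter values and inspect the singular values of the resulting sensitivity matrix: near-zero singular values flag parameter combinations the data cannot constrain. The toy model below has a built-in degeneracy to make this visible.

```python
# A minimal sketch of a sensitivity-based identifiability check; the toy
# model and nominal values are assumptions for illustration.
import numpy as np

def model(theta, t):
    """Hypothetical two-parameter model with a built-in degeneracy: only
    the product theta[0]*theta[1] affects the output, so the parameters
    are not separately identifiable."""
    a, b = theta
    return (a * b) * np.exp(-0.5 * t)

def jacobian(f, theta, t, eps=1e-6):
    """Central finite-difference Jacobian of f with respect to theta."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (f(theta + step, t) - f(theta - step, t)) / (2 * eps)
    return J

t = np.linspace(0.0, 5.0, 50)
J = jacobian(model, [2.0, 3.0], t)
singular_values = np.linalg.svd(J, compute_uv=False)
print("singular values:", singular_values)  # one ~0 value => rank deficiency
```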
We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.
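The paper defines penchants and leanings precisely; as a rough orientation only, a penchant in the spirit of probabilistic causality can be estimated as P(effect | cause) minus P(effect | not cause), with the leaning contrasting the two causal directions. The sketch below applies this simplified estimator with my own lag convention and synthetic boolean series, and should not be read as the paper's exact definitions.

```python
# A rough sketch of penchant-style bookkeeping under my own simplifying
# assumptions; the series, lag convention, and estimator are illustrative.
import numpy as np

def penchant(effect, cause):
    """P(effect | cause) - P(effect | not cause) for boolean arrays."""
    cause = cause.astype(bool)
    return effect[cause].mean() - effect[~cause].mean()

rng = np.random.default_rng(3)
x = rng.random(10_000) < 0.5                 # hypothetical driver series
noise = rng.random(10_000) < 0.1
y = np.roll(x, 1) ^ noise                    # y mostly follows x with a 1-step lag

# Lag-adjusted penchants in both directions; their difference is the leaning.
p_xy = penchant(effect=y[1:], cause=x[:-1])  # does x lead y?
p_yx = penchant(effect=x[1:], cause=y[:-1])  # does y lead x?
print(f"penchant x->y: {p_xy:+.3f}")
print(f"penchant y->x: {p_yx:+.3f}")
print(f"leaning      : {p_xy - p_yx:+.3f}")
```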
A reliable and user-friendly characterisation of nano-objects in a target material is presented here in the form of a software data analysis package for interpreting small-angle X-ray scattering (SAXS) patterns. When provided with data on absolute scale with reasonable uncertainty estimates, the software outputs (size) distributions in absolute volume fractions complete with uncertainty estimates and minimum evidence limits, and outputs all distribution modes of a user-definable range of one or more model parameters. A multitude of models is included, among them prolate and oblate nanoparticles, core-shell objects, polymer models (Gaussian chain and Kholodenko worm) and a model for densely packed spheres (using the LMA-PY approximations). The McSAS software can furthermore be integrated as part of an automated reduction and analysis procedure in laboratory instruments or at synchrotron beamlines.
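For orientation, the forward model underlying this kind of size-distribution analysis can be as simple as weighting the analytical sphere form factor over a candidate size distribution. The sketch below (not McSAS itself; the units, distribution, and grids are assumptions) computes the dilute-limit scattered intensity for log-normally distributed spheres.

```python
# A minimal sketch (not McSAS) of a polydisperse-sphere SAXS forward model.
# Units, size distribution, and q-grid are illustrative assumptions.
import numpy as np

def sphere_form_factor(q, r):
    """Normalised scattering amplitude of a homogeneous sphere."""
    qr = np.outer(q, r)
    return 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3

q = np.logspace(-2, 0, 200)            # scattering vector, assumed 1/nm
radii = np.linspace(1.0, 50.0, 500)    # nm

# Hypothetical log-normal number distribution of radii.
mu, sigma = np.log(10.0), 0.3
n_r = np.exp(-(np.log(radii) - mu) ** 2 / (2 * sigma**2)) / radii
n_r /= np.sum(n_r)

volumes = (4.0 / 3.0) * np.pi * radii**3
# I(q) ~ sum over sizes of n(r) * V(r)^2 * |F(q,r)|^2 in the dilute limit.
intensity = (sphere_form_factor(q, radii) ** 2 * (n_r * volumes**2)).sum(axis=1)
print(intensity[:5])
```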