
VECMAtk: A Scalable Verification, Validation and Uncertainty Quantification Toolkit for Scientific Simulations

Posted by: Peter Coveney
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We present the VECMA toolkit (VECMAtk), a flexible software environment for single and multiscale simulations that introduces directly applicable and reusable procedures for verification and validation (V&V), sensitivity analysis (SA) and uncertainty quantification (UQ). It enables users to verify key aspects of their applications, systematically compare and validate simulation outputs against observational or benchmark data, and run simulations conveniently on any platform from the desktop to current multi-petascale computers. In this sequel to the paper on VECMAtk that we presented last year, we focus on a range of functional and performance improvements, cover newly introduced components, and present application examples from seven different domains, including conflict modelling and the environmental sciences. We also present several implemented patterns for UQ/SA and V&V, and guide the reader in detail through one example concerning COVID-19 modelling.
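For readers unfamiliar with the non-intrusive pattern that toolkits like VECMAtk automate, the sketch below shows the generic sample-run-aggregate loop with a toy SIR-style epidemic model. It is a minimal, library-free illustration: the model, parameter names and ranges are hypothetical, and the code does not use the VECMAtk/EasyVVUQ API.

```python
# Generic, library-free sketch of the non-intrusive forward-UQ pattern that
# toolkits such as VECMAtk automate (this is NOT the VECMAtk/EasyVVUQ API).
# The toy epidemic model, parameter names and ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def simulate(beta, gamma, days=60, i0=0.001):
    """Toy SIR-style model: returns the peak infected fraction."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

# 1) Sample the uncertain inputs (here: uniform distributions).
n_samples = 500
betas  = rng.uniform(0.2, 0.6,  n_samples)
gammas = rng.uniform(0.05, 0.2, n_samples)

# 2) Run the black-box simulation once per sample (trivially parallelisable).
peaks = np.array([simulate(b, g) for b, g in zip(betas, gammas)])

# 3) Aggregate statistics of the quantity of interest.
lo, hi = np.percentile(peaks, [2.5, 97.5])
print(f"peak infected fraction: mean={peaks.mean():.3f}, "
      f"std={peaks.std():.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
```

In a production setting, step 2 is the part a toolkit dispatches to HPC resources, while steps 1 and 3 are replaced by reusable sampling and analysis components.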


Read also

In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models. The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI, such as fairness and transparency, through the dissemination of the latest research and educational materials. Beyond the Python package (https://github.com/IBM/UQ360), we have developed an interactive experience (http://uq360.mybluemix.net) and guidance materials as educational tools to aid researchers and developers in producing and communicating high-quality uncertainties in an effective manner.
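As a generic illustration of the "quantify, then evaluate" practice described in the abstract above, the sketch below computes a nominal 95% prediction interval for a toy regression problem and checks its empirical coverage. It uses NumPy only; it is not the UQ360 API, and the data and variable names are hypothetical.

```python
# Minimal sketch of "quantify, then evaluate" for predictive uncertainty
# (NOT the UQ360 API; toy data and a deliberately naive uncertainty model).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression data with input-dependent (heteroscedastic) noise.
x = rng.uniform(0, 10, 1000)
y = np.sin(x) + rng.normal(0.0, 0.1 + 0.05 * x)

# Quantify: per-bin Gaussian predictive model (mean and std per input bin).
bins = np.digitize(x, np.linspace(0, 10, 11))
mu  = np.array([y[bins == b].mean() for b in bins])
sig = np.array([y[bins == b].std()  for b in bins])
lower, upper = mu - 1.96 * sig, mu + 1.96 * sig   # nominal 95% interval

# Evaluate: empirical coverage and average interval width.
coverage = np.mean((y >= lower) & (y <= upper))
width = np.mean(upper - lower)
print(f"nominal 95% interval -> empirical coverage {coverage:.2%}, "
      f"mean width {width:.2f}")
```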
Quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key element of this is the forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
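The core idea of embedded ensemble propagation can be conveyed with a NumPy analogue: carry an extra "ensemble" axis through the solver state so that each kernel call updates a block of samples at once, instead of re-running the solver per sample. This is only a conceptual sketch on a toy 1D diffusion problem; the paper's actual implementation uses C++ templates within Trilinos, and the problem sizes below are arbitrary.

```python
# Conceptual NumPy analogue of embedded ensemble propagation: the stencil
# kernel streams the shared mesh data once per step for a whole block of
# samples (trailing ensemble axis), rather than once per step per sample.
import time
import numpy as np

n_cells, n_steps, ensemble = 50_000, 50, 32
rng = np.random.default_rng(1)
kappa = rng.uniform(0.1, 1.0, ensemble)   # uncertain diffusion coefficients
u0 = rng.random(n_cells)
dt = 1e-3

def diffuse(u, k):
    """One explicit 1D diffusion step; k may be a scalar or an ensemble array."""
    lap = np.roll(u, 1, axis=0) - 2.0 * u + np.roll(u, -1, axis=0)
    return u + dt * k * lap

# Baseline: propagate the samples one at a time.
t0 = time.perf_counter()
for k in kappa:
    u = u0.copy()
    for _ in range(n_steps):
        u = diffuse(u, k)
t_loop = time.perf_counter() - t0

# Embedded ensemble: one state array of shape (n_cells, ensemble).
t0 = time.perf_counter()
u = np.repeat(u0[:, None], ensemble, axis=1)
for _ in range(n_steps):
    u = diffuse(u, kappa)                 # broadcasts over the ensemble axis
t_ens = time.perf_counter() - t0

print(f"per-sample loop: {t_loop:.2f}s, embedded ensemble: {t_ens:.2f}s")
```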
We present Korali, an open-source framework for large-scale Bayesian uncertainty quantification and stochastic optimization. The framework relies on non-intrusive sampling of complex multiphysics models and enables their exploitation for optimization and decision-making. In addition, its distributed sampling engine makes efficient use of massively parallel architectures while introducing novel fault tolerance and load balancing mechanisms. We demonstrate these features by interfacing Korali with existing high-performance software such as Aphros, LAMMPS (CPU-based), and Mirheo (GPU-based) and show efficient scaling for up to 512 nodes of the CSCS Piz Daint supercomputer. Finally, we present benchmarks demonstrating that Korali outperforms related state-of-the-art software frameworks.
Literate computing has emerged as an important tool for computational studies and open science, with growing folklore of best practices. In this work, we report two case studies - one in computational magnetism and another in computational mathematics - where domain-specific software was exposed to the Jupyter environment. This enables high-level control of simulations and computation, interactive exploration of computational results, batch processing on HPC resources, and reproducible workflow documentation in Jupyter notebooks. In the first study, Ubermag drives existing computational micromagnetics software through a domain-specific language embedded in Python. In the second study, a dedicated Jupyter kernel interfaces with the GAP system for computational discrete algebra and its dedicated programming language. In light of these case studies, we discuss the benefits of this approach, including progress toward more reproducible and reusable research results and outputs, notably through the use of infrastructure such as JupyterHub and Binder.
Ziyu Xie, Farah Alsafadi, Xu Wu (2021)
The Best Estimate plus Uncertainty (BEPU) approach for nuclear systems modeling and simulation requires that prediction uncertainty be quantified in order to prove that the investigated design stays within acceptance criteria. A rigorous Uncertainty Quantification (UQ) process should simultaneously consider multiple sources of quantifiable uncertainties: (1) parameter uncertainty due to randomness or lack of knowledge; (2) experimental uncertainty due to measurement noise; (3) model uncertainty caused by missing/incomplete physics and numerical approximation errors; and (4) code uncertainty when surrogate models are used. In this paper, we propose a comprehensive framework to integrate results from inverse UQ and quantitative validation to provide robust predictions so that all these sources of uncertainties can be taken into consideration. Inverse UQ quantifies the parameter uncertainties based on experimental data while taking into account uncertainties from model, code and measurement. In the validation step, we use a quantitative validation metric based on Bayesian hypothesis testing. The resulting metric, called the Bayes factor, is then used to form weighting factors to combine the prior and posterior knowledge of the parameter uncertainties in a Bayesian model averaging process. In this way, model predictions will be able to integrate the results from inverse UQ and validation to account for all available sources of uncertainties. This framework is a step towards addressing the ANS Nuclear Grand Challenge on Simulation/Experimentation by bridging the gap between models and data.
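For concreteness, the weighting step admits a compact generic form. Assuming the two hypotheses being averaged are the prior and the inverse-UQ posterior characterisations of the parameters, with equal prior model probabilities (a sketch only; the paper's specific weighting scheme may differ), Bayesian model averaging with Bayes factor B gives:

$$
p(\theta \mid D) \;=\; w\,p_{\mathrm{post}}(\theta \mid D) + (1 - w)\,p_{\mathrm{prior}}(\theta),
\qquad
w = \frac{B}{1 + B},
\qquad
B = \frac{p(D \mid M_{\mathrm{post}})}{p(D \mid M_{\mathrm{prior}})}.
$$

Under this weighting, strong evidence for the updated characterisation (B much greater than 1) drives w toward 1, while weak evidence keeps more weight on the prior knowledge.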