The linear micro-instabilities driving turbulent transport in magnetized fusion plasmas (as well as the respective nonlinear saturation mechanisms) are known to be sensitive to various physical parameters characterizing the background plasma and the magnetic equilibrium. Therefore, uncertainty quantification is essential for achieving predictive numerical simulations of plasma turbulence. However, the high computational costs of the required gyrokinetic simulations and the large number of parameters render standard Monte Carlo techniques intractable. To address this problem, we propose a multi-fidelity Monte Carlo approach in which we employ data-driven low-fidelity models that exploit the structure of the underlying problem, such as low intrinsic dimension and anisotropic coupling of the stochastic inputs. The low-fidelity models are efficiently constructed via sensitivity-driven dimension-adaptive sparse grid interpolation using both the full set of uncertain inputs and subsets comprising only selected, important parameters. We illustrate the power of this method by applying it to two plasma turbulence problems with up to $14$ stochastic parameters, demonstrating that it is up to four orders of magnitude more efficient than standard Monte Carlo methods as measured by single-core performance, which translates into a runtime reduction from around eight days to one hour on 240 cores on parallel machines.
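To make the estimator concrete, the following is a minimal Python sketch of a two-fidelity control-variate Monte Carlo estimator of the kind described above. The model functions, sample sizes, and budget split are hypothetical placeholders; in the actual study, the high-fidelity model is a gyrokinetic simulation and the low-fidelity model is a sensitivity-driven, dimension-adaptive sparse-grid surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 14  # number of uncertain inputs (as in the larger test case)

# Hypothetical stand-ins for the two fidelity levels: f_hi plays the role of
# the expensive gyrokinetic simulation, f_lo that of a cheap, correlated
# data-driven surrogate.
def f_hi(x):
    return np.sum(np.sin(x), axis=-1) + 0.1 * np.prod(np.cos(x), axis=-1)

def f_lo(x):
    return np.sum(np.sin(x), axis=-1)

n_hi, n_lo = 50, 50_000                    # few HF runs, many LF runs
x_lo = rng.uniform(-1.0, 1.0, size=(n_lo, d))  # LF samples of the uncertain inputs
x_hi = x_lo[:n_hi]                         # HF runs reuse the first LF inputs

y_hi = f_hi(x_hi)
y_lo_all = f_lo(x_lo)
y_lo_hi = y_lo_all[:n_hi]

# Control-variate weight estimated from the sample covariance between models
alpha = np.cov(y_hi, y_lo_hi)[0, 1] / np.var(y_lo_all, ddof=1)

# Multi-fidelity Monte Carlo estimate of the mean output:
# HF sample mean, corrected by the (cheap) LF estimate of its own bias.
mf_estimate = y_hi.mean() + alpha * (y_lo_all.mean() - y_lo_hi.mean())
print(mf_estimate)
```

The key design point is the budget split: the low-fidelity surrogate absorbs most of the samples at negligible cost, while only a handful of high-fidelity runs are needed to keep the estimator unbiased.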