
Goal-based sensitivity maps using time windows and ensemble perturbations

Posted by Claire Heaney
Publication date: 2018
Research language: English





We present an approach for forming sensitivity maps (or sensitivities) using ensembles. The method is an alternative to using an adjoint, which can be very challenging to formulate and also computationally expensive to solve. The main novelties of the presented approach are: 1) the use of goals, weighting the perturbations to help resolve the most important sensitivities, 2) the use of time windows, which enable the perturbations to be optimised independently for each window, and 3) re-orthogonalisation of the solution through time, which helps optimise each perturbation when calculating sensitivity maps. These novel methods greatly reduce the number of ensembles required to form the sensitivity maps, as demonstrated in this paper. As the presented method relies solely on ensembles obtained from the forward model, it can be applied directly to forward models of arbitrary complexity arising from, for example, multi-physics coupling, legacy codes or model chains. It can also be applied to compute sensitivities for optimisation of sensor placement, optimisation for design or control, goal-based mesh adaptivity, assessment of goals (e.g. hazard assessment and mitigation in the natural environment), determining the worth of current data, and data assimilation. We analyse and demonstrate the efficiency of the approach by applying the method to advection problems and also to a non-linear heterogeneous multi-phase porous media problem, showing, in all cases, that the number of ensembles required to obtain accurate sensitivity maps is relatively low, on the order of tens.
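The core idea of recovering a sensitivity map from forward-model ensembles alone can be illustrated with a minimal sketch. This is not the authors' code: the forward model here is a toy linear map, the goal is a fixed weighted sum of the model output, and all names (`forward`, `goal`, the ensemble size `m`) are illustrative assumptions. The sensitivity of the goal to the initial state is recovered from a small ensemble of perturbed runs by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                                               # state dimension
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # toy linear forward model
w = rng.random(n)                                    # goal weights

def forward(x):
    """Stand-in for an arbitrary (possibly black-box) forward model."""
    return A @ x

def goal(x):
    """Scalar goal functional evaluated on the model output."""
    return w @ forward(x)

x0 = rng.standard_normal(n)        # baseline initial state
g0 = goal(x0)

# Ensemble of small random perturbations around the baseline
# ("order of tens" of forward runs, as in the abstract).
m = 30
eps = 1e-4
P = rng.standard_normal((m, n))    # perturbation directions, one per row
dg = np.array([goal(x0 + eps * p) - g0 for p in P]) / eps

# Least-squares fit of dg ~ P @ s recovers the sensitivity map s = dg/dx0.
s, *_ = np.linalg.lstsq(P, dg, rcond=None)

# For this linear toy model the exact sensitivity is A^T w.
exact = A.T @ w
assert np.allclose(s, exact, atol=1e-3)
```

The paper's contributions (goal-weighted perturbations, per-window optimisation, and re-orthogonalisation through time) are ways of choosing the rows of `P` so that far fewer ensemble members are needed than this naive random choice.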




Read also

We give a generalization of Thurston's Bounded Image Theorem for skinning maps, which applies to pared 3-manifolds with incompressible boundary that are not necessarily acylindrical. Along the way we study properties of divergent sequences in the deformation space of such a manifold, establishing the existence of compact cores satisfying a certain notion of uniform geometry.
Anh Tran, Jing Sun, Dehao Liu (2020)
Uncertainty involved in computational materials modeling needs to be quantified to enhance the credibility of predictions. Tracking the propagation of model-form and parameter uncertainty for each simulation step, however, is computationally expensive. In this paper, a multiscale stochastic reduced-order model (ROM) is proposed to propagate the uncertainty as a stochastic process with Gaussian noise. The quantity of interest (QoI) is modeled by a non-linear Langevin equation, where its associated probability density function is propagated using the Fokker-Planck equation. The drift and diffusion coefficients of the Fokker-Planck equation are trained and tested on time-series datasets obtained from direct numerical simulations. Considering microstructure descriptors in the microstructure evolution as QoIs, we demonstrate our proposed methodology in three integrated computational materials engineering (ICME) models: kinetic Monte Carlo, phase field, and molecular dynamics simulations. It is demonstrated that once calibrated correctly using the available time-series datasets from these ICME models, the proposed ROM is capable of propagating the microstructure descriptors dynamically, and the results agree well with the ICME models.
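Propagating a QoI through a Langevin equation can be sketched with an Euler-Maruyama simulation. This is only an illustration under simple assumptions: the drift and diffusion are hand-picked Ornstein-Uhlenbeck coefficients rather than trained from ICME time-series data as in the paper, and all parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed Ornstein-Uhlenbeck parameters (in the paper these would be
# drift/diffusion coefficients learned from simulation data).
theta, mu, sigma = 1.0, 2.0, 0.3
dt, n_steps, n_paths = 1e-3, 5000, 2000

x = np.zeros(n_paths)              # ensemble of QoI sample paths, started at 0
for _ in range(n_steps):
    drift = theta * (mu - x)       # a trained drift model would replace this
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# The stationary mean of this OU process is mu, so after 5 time units
# the ensemble mean should have relaxed close to it.
assert abs(x.mean() - mu) < 0.1
```

Evolving the ensemble of paths is the Monte Carlo counterpart of propagating the density with the Fokker-Planck equation: the histogram of `x` approximates the QoI's probability density at the final time.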
We consider two generalizations of the classical weighted paging problem that incorporate the notion of delayed service of page requests. The first is the (weighted) Paging with Time Windows (PageTW) problem, which is like the classical weighted paging problem except that each page request only needs to be served before a given deadline. This problem arises in many practical applications of online caching, such as the deadline I/O scheduler in the Linux kernel and video-on-demand streaming. The second, and more general, problem is the (weighted) Paging with Delay (PageD) problem, where the delay in serving a page request results in a penalty being assessed to the objective. This problem generalizes the caching problem to allow delayed service, a line of work that has recently gained traction in online algorithms (e.g., Emek et al. STOC'16, Azar et al. STOC'17, Azar and Touitou FOCS'19). We give $O(\log k \log n)$-competitive algorithms for both the PageTW and PageD problems on $n$ pages with a cache of size $k$. This significantly improves on the previous best bounds of $O(k)$ for both problems (Azar et al. STOC'17). We also consider the offline PageTW and PageD problems, for which we give $O(1)$-approximation algorithms and prove APX-hardness. These are the first results for the offline problems; even NP-hardness was not known before our work. At the heart of our algorithms is a novel hitting-set LP relaxation of the PageTW problem that overcomes the $\Omega(k)$ integrality gap of the natural LP for the problem. To the best of our knowledge, this is the first example of an LP-based algorithm for an online algorithm with delays/deadlines.
The local density of states (LDOS) is a distribution that characterizes the effect of perturbations on quantum systems. Recently, a semiclassical theory was proposed for the LDOS of chaotic billiards and maps. This theory predicts that the LDOS is a Breit-Wigner distribution independent of the perturbation strength, and also gives a semiclassical expression for the LDOS width. Here, we test the validity of such an approximation in quantum maps, varying the degree of chaoticity, the region in phase space where the perturbation is applied, and the intensity of the perturbation. We show that for highly chaotic maps or strong perturbations the semiclassical theory of the LDOS accurately describes the quantum distribution. Moreover, the width of the LDOS is also well represented by its semiclassical expression in the case of mixed classical dynamics.
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small molecule solvation free energies. There is a wide range of methods available for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the fourth Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble, and the evaluation process repeats until aggregate predictive performance can no longer be improved. We show that this process results in a final aggregate estimate that outperforms all individual methods by reducing estimate errors by as much as 91%, to 1.2 kcal/mol accuracy. We also compare our iterative refinement approach to other statistical ensemble approaches and demonstrate that this iterative process reduces estimate errors by as much as 61%. This work provides a new approach for accurate solvation free energy prediction and lays the foundation for future work on aggregate models that can balance computational cost with prediction accuracy.
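The iterative prune-and-reweight loop described above can be sketched on synthetic data. This is a toy in the spirit of the paper, not its method: the five "methods" are just noisy copies of a known truth, weights are simple inverse-MSE rather than Bayesian posterior model weights, and pruning removes the lowest-weight member while the aggregate error keeps improving.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.standard_normal(50)    # synthetic "true" solvation energies

# Five hypothetical prediction methods of varying quality (last one is poor).
noise_levels = [0.1, 0.2, 0.3, 0.5, 2.0]
preds = {f"m{i}": truth + s * rng.standard_normal(50)
         for i, s in enumerate(noise_levels)}

def mse(name):
    return np.mean((preds[name] - truth) ** 2)

def aggregate(members):
    """Inverse-MSE weighted average of the ensemble members."""
    w = np.array([1.0 / mse(m) for m in members])
    w /= w.sum()
    return sum(wi * preds[m] for wi, m in zip(w, members))

members = list(preds)
best_err = np.mean((aggregate(members) - truth) ** 2)
while len(members) > 1:
    # Candidate for pruning: the member carrying the least information
    # (largest error, hence smallest inverse-MSE weight).
    candidate = max(members, key=mse)
    trial = [m for m in members if m != candidate]
    err = np.mean((aggregate(trial) - truth) ** 2)
    if err >= best_err:
        break                       # pruning no longer helps; stop
    members, best_err = trial, err
```

A real BMA workflow would estimate the weights from held-out data rather than the truth, but the stopping rule (prune until the aggregate stops improving) is the same shape as the iterative process the abstract describes.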