
Diagnostic-Driven Nonstationary Emulators Using Kernel Mixtures

Published by: Victoria Volodina
Publication date: 2018
Research field: Mathematical statistics
Paper language: English





Weakly stationary Gaussian processes (GPs) are the principal tool in statistical approaches to the design and analysis of computer experiments (or Uncertainty Quantification). Such processes are fitted to computer model output using a set of training runs to learn the parameters of the process covariance kernel. The stationarity assumption is often adequate, yet it can lead to poor predictive performance when the model response exhibits nonstationarity, for example, when its smoothness varies across the input space. In this paper, we introduce a diagnostic-led approach to fitting nonstationary GP emulators by specifying finite mixtures of region-specific covariance kernels. Our method first fits a stationary GP; if traditional diagnostics indicate nonstationarity, those diagnostics are used to fit appropriate mixing functions for a covariance kernel mixture designed to capture the nonstationarity, ensuring an emulator that is continuous in parameter space and readily interpretable. We compare our approach to the principal nonstationary GP models in the literature and illustrate its performance on a number of idealised test cases and in an application to modelling the cloud parameterization of the French climate model.
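The kernel-mixture construction can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: two stationary squared-exponential kernels with different lengthscales are blended by a smooth logistic mixing function (the mixing form, lengthscales, and boundary location are all illustrative assumptions):

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance=1.0):
    # Stationary squared-exponential kernel on 1-D inputs
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def mixing(x, centre=0.5, steepness=10.0):
    # Smooth logistic weight: ~0 in the left region, ~1 in the right
    return 1.0 / (1.0 + np.exp(-steepness * (x - centre)))

def mixture_kernel(x1, x2):
    # Nonstationary covariance: each region-specific stationary kernel
    # is weighted by the mixing function evaluated at both inputs, so
    # the result varies smoothly with x instead of switching abruptly
    w1, w2 = mixing(x1), mixing(x2)
    k_smooth = rbf(x1, x2, lengthscale=0.5)    # slowly varying region
    k_rough = rbf(x1, x2, lengthscale=0.05)    # rapidly varying region
    return (np.outer(1 - w1, 1 - w2) * k_smooth
            + np.outer(w1, w2) * k_rough)

x = np.linspace(0.0, 1.0, 50)
K = mixture_kernel(x, x)
```

Because each term is a Schur (elementwise) product of positive semi-definite matrices, the blended matrix remains a valid covariance, and the logistic weight keeps the emulator continuous across the region boundary.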


Read also

The use of a finite mixture of normal distributions in model-based clustering allows non-Gaussian data clusters to be captured. However, identifying the clusters from the normal components is challenging and is in general achieved either by imposing constraints on the model or by using post-processing procedures. Within the Bayesian framework we propose a different approach based on sparse finite mixtures to achieve identifiability. We specify a hierarchical prior whose hyperparameters are carefully selected to reflect the cluster structure aimed at. In addition, this prior allows the model to be estimated using standard MCMC sampling methods. In combination with a post-processing approach that resolves the label-switching issue and results in an identified model, our approach makes it possible to simultaneously (1) determine the number of clusters, (2) flexibly approximate the cluster distributions in a semi-parametric way using finite mixtures of normals, and (3) identify cluster-specific parameters and classify observations. The proposed approach is illustrated in two simulation studies and on benchmark data sets.
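As a rough sketch of how a sparse prior on the mixture weights empties surplus components, the following MAP-EM variant (a simplified stand-in for the hierarchical-prior MCMC approach described above; all constants are illustrative) fits a deliberately overfitted univariate mixture of normals:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two true clusters, deliberately overfitted with K = 6 components;
# a Dirichlet(alpha) prior with alpha < 1 lets surplus components empty
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
K, alpha = 6, 0.01
mu = rng.choice(x, K)
sigma = np.ones(K)
w = np.full(K, 1.0 / K)

for _ in range(300):
    # E-step: responsibilities under the current parameters
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    r = dens / dens.sum(axis=1, keepdims=True)
    n_k = r.sum(axis=0)
    # M-step with a MAP update for the weights: components whose
    # effective count falls below 1 - alpha are switched off entirely
    w = np.maximum(n_k + alpha - 1.0, 0.0)
    w /= w.sum()
    keep = n_k > 1e-9
    mu[keep] = (r * x[:, None]).sum(axis=0)[keep] / n_k[keep]
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0)
    sigma[keep] = np.sqrt(var[keep] / n_k[keep]) + 1e-6
```

With alpha < 1, the MAP weight update subtracts almost a whole observation's worth of mass from each component, so components whose effective count stays below 1 - alpha are removed rather than retained with a tiny weight.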
Bilateral filtering (BF) is one of the most classical denoising filters; however, its manually initialized filtering kernel hampers its adaptivity across images with various characteristics. To deal with image variation (i.e., non-stationary noise), in this paper we propose the multi-kernel filter (MKF), which automatically adapts filtering kernels to specific image characteristics. The design of MKF takes inspiration from adaptive mechanisms of human vision that make full use of information in a visual context. To simulate the visual context and its adaptive function, we first design a hierarchical clustering algorithm that generates a hierarchy of large to small coherent image patches, organized as a cluster tree, to obtain a multi-scale image representation, i.e., the image context. Next, each leaf cluster is used to generate one of the multiple range kernels, which are capable of catering to image variation, and the two corresponding predecessor clusters are used to fine-tune the adopted kernel. Ultimately, the single spatially-invariant kernel in BF becomes multiple spatially-varying ones. We evaluate MKF on two public datasets, BSD300 and BrainWeb, to which integrally-varying noise and spatially-varying noise are added, respectively. Extensive experiments show that MKF outperforms state-of-the-art filters w.r.t. both mean absolute error and structural similarity.
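For reference, the classical bilateral filter that MKF generalizes can be written compactly; this 1-D sketch (parameter values are illustrative) shows the single hand-set range kernel that MKF replaces with multiple cluster-adapted, spatially-varying ones:

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.1):
    # Classical bilateral filter: each output sample is a weighted
    # average whose weights combine spatial closeness (sigma_s) and
    # intensity similarity (sigma_r); the fixed range kernel here is
    # exactly the hand-initialized component MKF adapts automatically
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-0.5 * (offsets / sigma_s) ** 2)
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        range_w = np.exp(-0.5 * ((signal[idx] - signal[i]) / sigma_r) ** 2)
        w = spatial * range_w
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

s = np.concatenate([np.zeros(20), np.ones(20)])  # noise-free step edge
y = bilateral_filter_1d(s)
```

The range kernel suppresses averaging across the intensity jump, which is why the filter smooths noise while preserving edges.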
An emulator is a fast-to-evaluate statistical approximation of a detailed mathematical model (simulator). When used in lieu of simulators, emulators can expedite tasks that require many repeated evaluations, such as sensitivity analyses, policy optimization, model calibration, and value-of-information analyses. Emulators are developed using the output of simulators at specific input values (design points). Developing an emulator that closely approximates the simulator can require many design points, which becomes computationally expensive. We describe a self-terminating active learning algorithm to efficiently develop emulators tailored to a specific emulation task, and compare it with algorithms that optimize geometric criteria (random Latin hypercube sampling and maximum projection designs) and other active learning algorithms (treed Gaussian processes that optimize typical active learning criteria). We compared the algorithms' root mean square error (RMSE) and maximum absolute deviation from the simulator (MAX) for seven benchmark functions and in a prostate cancer screening model. In the empirical analyses, for simulators with greatly varying smoothness over the input domain, active learning algorithms resulted in emulators with smaller RMSE and MAX for the same number of design points. In all other cases, all algorithms performed comparably. The proposed algorithm attained satisfactory performance in all analyses, had smaller variability than the treed Gaussian processes (it is deterministic), and, on average, had similar or better performance than the treed Gaussian processes in 6 out of 7 benchmark functions and in the prostate cancer model.
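A generic uncertainty-sampling loop conveys the flavor of active-learning design, though it is not the self-terminating, task-tailored algorithm described above (the toy simulator, kernel, and candidate grid are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential correlation on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def simulator(x):
    # Cheap stand-in for an expensive simulator (illustrative only)
    return np.sin(6 * x) + 0.3 * np.sin(20 * x) * (x > 0.5)

cand = np.linspace(0.0, 1.0, 201)        # candidate design points
X = np.array([0.0, 0.5, 1.0])            # initial design
y = simulator(X)
nugget = 1e-8

for _ in range(10):                      # add 10 design points sequentially
    K = rbf(X, X) + nugget * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    k_star = rbf(cand, X)
    # GP posterior variance at each candidate (prior variance 1)
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, Kinv, k_star)
    x_new = cand[np.argmax(var)]         # query the most uncertain point
    X = np.append(X, x_new)
    y = np.append(y, simulator(x_new))
```

Each iteration queries the simulator where the current emulator is least certain; the paper's algorithm additionally tailors the criterion to the downstream emulation task and stops itself when further runs no longer help.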
We determine the expected error by smoothing the data locally. Then we optimize the shape of the kernel smoother to minimize the error. Because the optimal estimator depends on the unknown function, our scheme automatically adjusts to the unknown function. By self-consistently adjusting the kernel smoother, the total estimator adapts to the data. Goodness-of-fit estimators select a kernel halfwidth by minimizing a function of the halfwidth based on the average square residual fit error, $ASR(h)$. A penalty term is included to adjust for using the same data both to estimate the function and to evaluate the mean square error. Goodness-of-fit estimators are relatively simple to implement, but the minimum (of the goodness-of-fit functional) tends to be sensitive to small perturbations. To remedy this sensitivity problem, we fit the goodness-of-fit functional to a two-parameter model prior to determining the optimal halfwidth. Plug-in derivative estimators estimate the second derivative of the unknown function in an initial step, and then substitute this estimate into the asymptotic formula.
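A common concrete instance of halfwidth selection uses leave-one-out cross-validation in place of the penalized $ASR(h)$ criterion; this sketch (illustrative data and grid) omits the two-parameter smoothing of the criterion that the abstract proposes for stabilizing the minimum:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)

def loo_score(h):
    # Leave-one-out CV error of a Nadaraya-Watson smoother with a
    # Gaussian kernel of halfwidth h; zeroing the diagonal removes
    # each point from its own fit, playing the role of the penalty
    # term that corrects ASR(h) for reusing the same data twice
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(W, 0.0)
    fit = (W @ y) / W.sum(axis=1)
    return np.mean((y - fit) ** 2)

halfwidths = np.linspace(0.01, 0.3, 30)
scores = [loo_score(h) for h in halfwidths]
h_opt = halfwidths[int(np.argmin(scores))]
```

As the abstract warns, the location of this minimum can be sensitive to small perturbations of the data, which motivates smoothing the criterion before minimizing it.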
A framework is presented to model instances and degrees of local item dependence within the context of diagnostic classification models (DCMs). The study considers an undirected graphical model to describe the dependence structure of test items and draws inference based on the pseudo-likelihood. The new modeling framework explicitly addresses item interactions beyond those explained by latent classes and thus is more flexible and robust against violations of local independence. It also facilitates concise interpretation of item relations by regulating the complexity of the network underlying the test items. The viability and effectiveness of the framework are demonstrated via simulation and a real-data example. Results from the simulation study suggest that the proposed methods adequately recover the model parameters in the presence of locally dependent items and lead to a substantial improvement in estimation accuracy compared to the standard DCM approach. The analysis of real data demonstrates that the graphical DCM provides a useful summary of item interactions with regard to the existence and extent of local dependence.
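The pseudo-likelihood idea can be made concrete for a binary item network (an Ising-type model; this generic setup is illustrative rather than the paper's DCM specification): each item's conditional distribution given the remaining items is logistic, so the product of conditionals avoids the joint model's intractable normalizing constant.

```python
import numpy as np

def pseudo_log_likelihood(X, W, b):
    # X: n x p binary responses; W: symmetric p x p interaction matrix
    # with a zero diagonal; b: item main effects. Each item's
    # conditional given the other items is logistic, so the
    # pseudo-likelihood is a product of p logistic regressions and
    # never touches the joint model's normalizing constant
    eta = X @ W + b                       # conditional logits per item
    p1 = 1.0 / (1.0 + np.exp(-eta))
    return np.sum(X * np.log(p1) + (1 - X) * np.log(1 - p1))

rng = np.random.default_rng(0)
n, p = 100, 5
X = (rng.random((n, p)) < 0.5).astype(float)
# Independence model: no interactions, no main effects
pl0 = pseudo_log_likelihood(X, np.zeros((p, p)), np.zeros(p))
```

Under the independence model every conditional probability is 0.5, so the pseudo-log-likelihood reduces to $np \log(1/2)$, a handy sanity check when implementing the estimator.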