
On the Inference of Applying Gaussian Process Modeling to a Deterministic Function

Added by Wenjia Wang
Publication date: 2020
Language: English
Authors: Wenjia Wang





Gaussian process modeling is a standard tool for building emulators for computer experiments, which are usually used to study deterministic functions, for example, a solution to a given system of partial differential equations. This work investigates applying Gaussian process modeling to a deterministic function from the prediction and uncertainty quantification perspectives, where the Gaussian process model is misspecified. Specifically, we consider the case where the underlying function is fixed and from a reproducing kernel Hilbert space generated by some kernel function, and the same kernel function is used in the Gaussian process modeling as the correlation function for prediction and uncertainty quantification. While upper bounds and the optimal convergence rate of prediction in Gaussian process modeling have been extensively studied in the literature, a thorough exploration of convergence rates and a theoretical study of uncertainty quantification are lacking. We prove that, if one uses maximum likelihood estimation to estimate the variance in Gaussian process modeling, then under different choices of the nugget parameter value, the predictor is not optimal and/or the confidence interval is not reliable. In particular, lower bounds on the prediction error under different choices of the nugget parameter value are obtained. The results indicate that, if one directly applies Gaussian process modeling to a fixed function, the reliability of the confidence interval and the optimality of the predictor cannot be achieved at the same time.
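The setting above can be sketched numerically: a zero-mean GP emulator of a fixed deterministic function, a nugget added to the correlation matrix, the maximum likelihood estimate of the process variance, and the resulting plug-in confidence half-widths. The squared-exponential kernel, length-scale, and test function below are illustrative choices, not the paper's specification.

```python
import numpy as np

def rbf_kernel(x, y, ls=0.2):
    # Squared-exponential correlation; the length-scale ls = 0.2 is illustrative.
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ls ** 2))

# A fixed deterministic function, standing in for e.g. a PDE-solver output.
f = lambda x: np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = f(x_train)
n = len(x_train)

nugget = 1e-6  # nugget parameter added to the correlation matrix diagonal
K = rbf_kernel(x_train, x_train) + nugget * np.eye(n)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

# MLE of the process variance under a zero-mean GP: sigma2 = y' K^{-1} y / n.
sigma2_hat = y_train @ alpha / n

# Kriging predictor and plug-in 95% confidence half-widths at new inputs.
x_new = np.linspace(0, 1, 9)
k_star = rbf_kernel(x_new, x_train)
pred = k_star @ alpha
v = np.linalg.solve(L, k_star.T)
var = sigma2_hat * np.maximum(1.0 - np.einsum("ij,ij->j", v, v), 0.0)
half_width = 1.96 * np.sqrt(var)
```

The paper's result concerns exactly this construction: no single choice of `nugget` makes the predictor rate-optimal and the confidence half-widths reliable at the same time when the underlying function is fixed rather than a GP sample path.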





In this paper we introduce a novel model for Gaussian process (GP) regression in the fully Bayesian setting. Motivated by the ideas of sparsification, localization and Bayesian additive modeling, our model is built around a recursive partitioning (RP) scheme. Within each RP partition, a sparse GP (SGP) regression model is fitted. A Bayesian additive framework then combines multiple layers of partitioned SGPs, capturing both global trends and local refinements with efficient computations. The model addresses both the problem of efficiency in fitting a full Gaussian process regression model and the problem of prediction performance associated with a single SGP. Our approach mitigates the issue of pseudo-input selection and avoids the need for complex inter-block correlations in existing methods. The crucial trade-off becomes choosing between many simpler local model components or fewer complex global model components, which the practitioner can sensibly tune. Implementation is via a Metropolis-Hastings Markov chain Monte Carlo algorithm with Bayesian back-fitting. We compare our model against popular alternatives on simulated and real datasets, and find the performance is competitive, while the fully Bayesian procedure enables the quantification of model uncertainties.
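The localization idea can be sketched in miniature: partition the input space once and fit an independent local GP in each block. This is a heavily simplified stand-in for the paper's model (one partitioning level, exact GPs instead of sparse GPs, no Bayesian additive layers); the kernel, length-scale, and test function are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.1):
    # Squared-exponential kernel; ls = 0.1 is an illustrative choice.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))

def gp_predict(xtr, ytr, xte, nugget=1e-6):
    # Exact GP interpolation; the nugget stabilizes the linear solve.
    K = rbf(xtr, xtr) + nugget * np.eye(len(xtr))
    return rbf(xte, xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(10 * x)

# One level of recursive partitioning: split at the midpoint and fit an
# independent local GP in each block, so each solve uses only ~n/2 points.
xte = np.linspace(0.05, 0.95, 50)
pred = np.empty_like(xte)
for lo, hi in [(0.0, 0.5), (0.5, 1.0)]:
    tr = (x >= lo) & (x < hi)
    te = (xte >= lo) & (xte < hi)
    pred[te] = gp_predict(x[tr], y[tr], xte[te])
```

Each local solve costs O((n/2)^3) instead of O(n^3), which is the efficiency motivation; the paper's additive combination of partitioned SGP layers then repairs the accuracy loss near partition boundaries.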
Jean-Marc Azais, 2018
We consider the semi-parametric estimation of a scale parameter of a one-dimensional Gaussian process with known smoothness. We suggest an estimator based on quadratic variations and on the moment method. We provide asymptotic approximations of the mean and variance of this estimator, together with asymptotic normality results, for a large class of Gaussian processes. We allow for general mean functions and study the aggregation of several estimators based on various variation sequences. In extensive simulation studies, we show that the asymptotic results accurately depict the finite-sample situations already for small to moderate sample sizes. We also compare various variation sequences and highlight the efficiency of the aggregation procedure.
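A toy illustration of the quadratic-variation moment idea, in the simplest case of Brownian motion with known smoothness H = 1/2 (the paper's estimator covers a much larger class of Gaussian processes and variation sequences):

```python
import numpy as np

# Moment estimator of the scale parameter sigma^2 of Brownian motion on [0, 1]
# from its quadratic variation; values are illustrative.
rng = np.random.default_rng(1)
sigma2_true = 2.5
n = 100_000
dt = 1.0 / n

# Simulate the increments of sigma * B_t on a regular grid.
increments = rng.normal(0.0, np.sqrt(sigma2_true * dt), n)

# The quadratic variation sum_i (X_{t_{i+1}} - X_{t_i})^2 converges to
# sigma^2 * T with T = 1, so the sum itself is the moment estimator.
sigma2_hat = np.sum(increments ** 2)
```

The estimator is unbiased here, and its variance shrinks like 1/n, which is the finite-sample behavior the paper's asymptotic approximations quantify for general smoothness.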
In this paper, we investigate Gaussian process modeling with input location error, where the inputs are corrupted by noise. The best linear unbiased predictor is considered in two cases, according to whether or not there is noise at the target unobserved location. We show that the mean squared prediction error converges to a non-zero constant if there is noise at the target unobserved location, and provide an upper bound on the mean squared prediction error if there is no noise at the target unobserved location. We investigate the use of stochastic Kriging in the prediction of Gaussian processes with input location error, and show that stochastic Kriging is a good approximation when the sample size is large. Several numerical examples are given to illustrate the results, and a case study on the assembly of composite parts is presented. Technical proofs are provided in the Appendix.
In this paper, we prove almost sure consistency of a survival analysis model, which puts a Gaussian process, mapped to the unit interval, as a prior on the so-called hazard function. We assume our data is given by survival lifetimes $T$ belonging to $\mathbb{R}^{+}$, and covariates on $[0,1]^d$, where $d$ is an arbitrary dimension. We define an appropriate metric for survival functions and prove posterior consistency with respect to this metric. Our proof is based on an extension of the theorem of Schwartz (1965), which gives general conditions for proving almost sure consistency in the setting of non-i.i.d. random variables. Due to the nature of our data, several results for Gaussian processes on $\mathbb{R}^{+}$ are proved which may be of independent interest.
In this paper we are interested in the Maximum Likelihood Estimator (MLE) of the vector parameter of an autoregressive process of order $p$ with regular stationary Gaussian noise. We exhibit the large-sample asymptotic properties of the MLE under very mild conditions. Simulations are done for fractional Gaussian noise (fGn), autoregressive noise (AR(1)) and moving average noise (MA(1)).
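For the special case $p = 1$ with i.i.d. Gaussian noise, the conditional MLE of the autoregressive coefficient reduces to least squares of $x_t$ on $x_{t-1}$; a minimal sketch of that special case (the paper treats general regular stationary Gaussian noise, where the likelihood no longer factorizes this way):

```python
import numpy as np

# Simulate an AR(1) process x_t = phi * x_{t-1} + eps_t with i.i.d. Gaussian
# noise; parameter values are illustrative.
rng = np.random.default_rng(2)
phi_true, sigma = 0.6, 1.0
n = 50_000
x = np.zeros(n)
eps = rng.normal(0.0, sigma, n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

# With i.i.d. Gaussian noise, maximizing the conditional likelihood of
# x_1, ..., x_{n-1} given x_0 is equivalent to ordinary least squares.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
```

The estimator is consistent and asymptotically normal at the $\sqrt{n}$ rate; the paper's contribution is establishing the analogous large-sample behavior when the noise itself is a dependent stationary Gaussian process such as fGn.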
