
Gaussian Processes with Input Location Error and Applications to the Composite Parts Assembly Process

Posted by Wenjia Wang
Publication date: 2020
Research field: Mathematical Statistics
Language: English





In this paper, we investigate Gaussian process modeling with input location error, where the inputs are corrupted by noise. The best linear unbiased predictor is considered for two cases, according to whether or not there is noise at the target unobserved location. We show that the mean squared prediction error converges to a non-zero constant if there is noise at the target unobserved location, and we provide an upper bound on the mean squared prediction error if there is no noise at the target unobserved location. We investigate the use of stochastic Kriging in the prediction of Gaussian processes with input location error, and show that stochastic Kriging is a good approximation when the sample size is large. Several numerical examples are given to illustrate the results, and a case study on the assembly of composite parts is presented. Technical proofs are provided in the Appendix.
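To make the setup concrete, here is a minimal numpy sketch (not the authors' code; the kernel, input-noise level, and nugget values are illustrative assumptions) of GP prediction when the recorded inputs carry location error, contrasting a near-interpolating GP with the stochastic-Kriging-style approximation mentioned in the abstract, which absorbs the input noise into a nugget (diagonal) term.

```python
# Minimal sketch: GP prediction with input location error, and a
# stochastic-Kriging-style nugget approximation. All parameter values
# (kernel length scale, noise level, nugget) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(a, b, length=0.2):
    """Gaussian (squared-exponential) correlation between 1-d inputs."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length**2))

f = lambda x: np.sin(2 * np.pi * x)          # true (unknown) function

n = 60
x_true = rng.uniform(0, 1, n)                # where responses were really taken
x_obs = x_true + rng.normal(0, 0.03, n)      # recorded inputs carry location error
y = f(x_true)

x_new = np.array([0.37])                     # noise-free target location

def gp_predict(nugget):
    K = gauss_kernel(x_obs, x_obs) + nugget * np.eye(n)
    k = gauss_kernel(x_obs, x_new)
    return k.T @ np.linalg.solve(K, y)

for nugget in (1e-10, 1e-2):                 # interpolating GP vs. nugget approximation
    pred = gp_predict(nugget)[0]
    print(f"nugget={nugget:g}: prediction={pred:+.4f}, "
          f"squared error={(pred - f(x_new)[0])**2:.2e}")
```

On this toy problem the nugget version typically gives a more stable prediction at the noise-free target, consistent with the abstract's claim that stochastic Kriging is a good approximation when the sample size is large.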




Read also

Many popular robust estimators are $U$-quantiles, most notably the Hodges-Lehmann location estimator and the $Q_n$ scale estimator. We prove a functional central limit theorem for the sequential $U$-quantile process without any moment assumptions and under weak short-range dependence conditions. We further devise an estimator for the long-run variance and show its consistency, from which the convergence of the studentized version of the sequential $U$-quantile process to a standard Brownian motion follows. This result can be used to construct CUSUM-type change-point tests based on $U$-quantiles, which do not rely on bootstrapping procedures. We demonstrate this approach in detail using the example of the Hodges-Lehmann estimator for robustly detecting changes in the central location. A simulation study confirms the very good robustness and efficiency properties of the test. Two real-life data sets are analyzed.
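For readers unfamiliar with the two $U$-quantiles named above, here is a small numpy sketch (an illustration, not the paper's code) of the Hodges-Lehmann location estimator and the $Q_n$ scale estimator; the exact $Q_n$ order statistic is approximated here by the 0.25 quantile of the pairwise differences.

```python
# Two robust U-quantile estimators on data with 5% gross outliers.
import numpy as np

def hodges_lehmann(x):
    """Median of the Walsh averages (x_i + x_j)/2, here over pairs i <= j."""
    x = np.asarray(x)
    i, j = np.triu_indices(len(x))          # i <= j, diagonal included
    return np.median((x[i] + x[j]) / 2)

def q_n(x, c=2.2219):
    """Rousseeuw-Croux Q_n, approximated by c times the 0.25 quantile
    of the pairwise absolute differences |x_i - x_j|, i < j."""
    x = np.asarray(x)
    i, j = np.triu_indices(len(x), k=1)     # strictly i < j
    return c * np.quantile(np.abs(x[i] - x[j]), 0.25)

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(5, 1, 95), rng.normal(30, 1, 5)])
print(f"mean={sample.mean():.2f}  HL={hodges_lehmann(sample):.2f}  "
      f"std={sample.std(ddof=1):.2f}  Qn={q_n(sample):.2f}")
```

The mean and standard deviation are pulled far off by the outliers, while the Hodges-Lehmann and $Q_n$ estimates stay close to the clean component, which is the robustness property the test above exploits.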
Wenjia Wang (2020)
Gaussian process modeling is a standard tool for building emulators for computer experiments, which are usually used to study deterministic functions, for example, a solution to a given system of partial differential equations. This work investigates applying Gaussian process modeling to a deterministic function from prediction and uncertainty quantification perspectives, where the Gaussian process model is misspecified. Specifically, we consider the case where the underlying function is fixed and from a reproducing kernel Hilbert space generated by some kernel function, and the same kernel function is used in the Gaussian process modeling as the correlation function for prediction and uncertainty quantification. While upper bounds and the optimal convergence rate of prediction in Gaussian process modeling have been extensively studied in the literature, a thorough exploration of convergence rates and a theoretical study of uncertainty quantification are lacking. We prove that, if one uses maximum likelihood estimation to estimate the variance in Gaussian process modeling, under different choices of the nugget parameter value, the predictor is not optimal and/or the confidence interval is not reliable. In particular, lower bounds of the prediction error under different choices of the nugget parameter value are obtained. The results indicate that, if one directly applies Gaussian process modeling to a fixed function, the reliability of the confidence interval and the optimality of the predictor cannot be achieved at the same time.
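The following numpy sketch (illustrative assumptions throughout: the Matern-1/2 kernel, the RKHS test function, and the two nugget values are ours, not the paper's experiments) builds the objects this abstract discusses: a fixed function in the kernel's RKHS, the maximum likelihood estimate of the process variance, and the effect of the nugget choice on both the prediction error and the width of the plug-in confidence interval.

```python
# Fixed RKHS function, GP fit with the same kernel, MLE variance,
# and plug-in confidence intervals under two nugget choices.
import numpy as np

rng = np.random.default_rng(2)
ker = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / 0.3)  # Matern-1/2

centers = rng.uniform(0, 1, 5)
f = lambda t: ker(t, centers) @ np.array([1.0, -0.5, 0.8, 0.3, -1.2])  # in the RKHS

n = 40
x = np.sort(rng.uniform(0, 1, n))
y = f(x)                                    # deterministic, noise-free responses
x0 = np.array([0.5])

for nugget in (1e-8, 1e-2):
    K = ker(x, x) + nugget * np.eye(n)
    Kinv_y = np.linalg.solve(K, y)
    sigma2 = y @ Kinv_y / n                 # MLE of the process variance
    k0 = ker(x, x0)
    mean = (k0.T @ Kinv_y).item()
    var = sigma2 * (1 + nugget - k0.T @ np.linalg.solve(K, k0))
    half = 1.96 * np.sqrt(var.item())       # plug-in 95% interval half-width
    err = abs(mean - f(x0).item())
    print(f"nugget={nugget:g}: |error|={err:.3e}, CI half-width={half:.3e}, "
          f"covered={err <= half}")
```

This is only a single-point illustration of the trade-off the paper quantifies: how the nugget value moves prediction accuracy and interval reliability in opposite directions.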
In this paper we introduce a novel model for Gaussian process (GP) regression in the fully Bayesian setting. Motivated by the ideas of sparsification, localization and Bayesian additive modeling, our model is built around a recursive partitioning (RP) scheme. Within each RP partition, a sparse GP (SGP) regression model is fitted. A Bayesian additive framework then combines multiple layers of partitioned SGPs, capturing both global trends and local refinements with efficient computations. The model addresses both the problem of efficiency in fitting a full Gaussian process regression model and the problem of prediction performance associated with a single SGP. Our approach mitigates the issue of pseudo-input selection and avoids the need for complex inter-block correlations in existing methods. The crucial trade-off becomes choosing between many simpler local model components or fewer complex global model components, which the practitioner can sensibly tune. Implementation is via a Metropolis-Hastings Markov chain Monte Carlo algorithm with Bayesian back-fitting. We compare our model against popular alternatives on simulated and real datasets, and find the performance is competitive, while the fully Bayesian procedure enables the quantification of model uncertainties.
Salim Bouzebda (2011)
We provide a strong approximation of empirical copula processes by a Gaussian process. In addition, we establish a strong approximation of the smoothed empirical copula processes and a law of the iterated logarithm.
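As a concrete reference point, the sketch below (an illustration, not the paper's construction) computes the bivariate empirical copula from rank transforms; this is the object whose process-level fluctuations the paper approximates by a Gaussian process.

```python
# Empirical copula C_n(u, v) of a bivariate sample via rank transforms.
import numpy as np

def empirical_copula(x, y, u, v):
    """C_n(u,v) = (1/n) * #{i : F_n(x_i) <= u and G_n(y_i) <= v}."""
    n = len(x)
    fx = (np.argsort(np.argsort(x)) + 1) / n    # empirical CDF values of x
    fy = (np.argsort(np.argsort(y)) + 1) / n    # empirical CDF values of y
    return np.mean((fx <= u) & (fy <= v))

rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
# Should be close to the Gaussian copula with rho = 0.7 evaluated at (0.5, 0.5).
print(empirical_copula(z[:, 0], z[:, 1], 0.5, 0.5))
```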
A. Amiri (2020)
We are interested in estimating the location of what we call a smooth change-point from $n$ independent observations of an inhomogeneous Poisson process. The smooth change-point is a transition of the intensity function of the process from one level to another which happens smoothly, but over such a small interval that its length $\delta_n$ is considered to be decreasing to $0$ as $n \to +\infty$. We show that if $\delta_n$ goes to zero slower than $1/n$, our model is locally asymptotically normal (with a rather unusual rate $\sqrt{\delta_n/n}$), and the maximum likelihood and Bayesian estimators are consistent, asymptotically normal and asymptotically efficient. If, on the contrary, $\delta_n$ goes to zero faster than $1/n$, our model is non-regular and behaves like a change-point model. More precisely, in this case we show that the Bayesian estimators are consistent, converge at rate $1/n$, have non-Gaussian limit distributions and are asymptotically efficient. All these results are obtained using the likelihood ratio analysis method of Ibragimov and Khasminskii, which equally yields the convergence of polynomial moments of the considered estimators. However, in order to study the maximum likelihood estimator in the case where $\delta_n$ goes to zero faster than $1/n$, this method cannot be applied using the usual topologies of convergence in functional spaces. So this study requires the use of an alternative topology and will be considered in a future work.
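To visualize the model, here is a small numpy sketch (illustrative parameter values, not the paper's code) that simulates one path of such an inhomogeneous Poisson process by thinning, with an intensity ramping smoothly from one level to another over a short window of width delta around the change point theta.

```python
# Simulate an inhomogeneous Poisson process whose intensity ramps
# smoothly from lam0 to lam1 over a short window around theta.
import numpy as np

rng = np.random.default_rng(4)

lam0, lam1 = 2.0, 8.0        # intensity before / after the transition
theta, delta = 0.6, 0.05     # change-point location and (small) window width

def intensity(t):
    """Linear ramp from lam0 to lam1 over [theta - delta/2, theta + delta/2]."""
    s = np.clip((t - (theta - delta / 2)) / delta, 0.0, 1.0)
    return lam0 + (lam1 - lam0) * s

# Lewis thinning: propose points at the maximal rate, keep each with
# probability intensity(t) / lam_max.
lam_max = max(lam0, lam1)
t_prop = np.cumsum(rng.exponential(1 / lam_max, size=200))
t_prop = t_prop[t_prop <= 1.0]
events = t_prop[rng.uniform(size=len(t_prop)) < intensity(t_prop) / lam_max]
print(f"{len(events)} events; fraction after theta: {np.mean(events > theta):.2f}")
```

The smooth change-point regime of the paper corresponds to letting the window width shrink with the sample size, i.e. delta playing the role of $\delta_n \to 0$.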