Stochastic kriging is a popular technique for simulation metamodeling due to its flexibility and analytical tractability. Its computational bottleneck is the inversion of a covariance matrix, which takes $O(n^3)$ time in general and becomes prohibitive for large $n$, where $n$ is the number of design points. Moreover, the covariance matrix is often ill-conditioned for large $n$, so the inversion is prone to numerical instability, resulting in erroneous parameter estimation and prediction. These two numerical issues preclude the use of stochastic kriging at a large scale. This paper presents a novel approach to address them. We construct a class of covariance functions, called Markovian covariance functions (MCFs), which have two properties: (i) the associated covariance matrices can be inverted analytically, and (ii) the inverse matrices are sparse. With the use of MCFs, the inversion-related computational time is reduced to $O(n^2)$ in general, and can be further reduced by orders of magnitude with additional assumptions on the simulation errors and design points. The analytical invertibility also enhances numerical stability dramatically. The key to our approach is that we identify a general functional form of covariance functions that can induce sparsity in the corresponding inverse matrices. We also establish a connection between MCFs and linear ordinary differential equations. This connection provides a flexible, principled approach to constructing a wide class of MCFs. Extensive numerical experiments demonstrate that stochastic kriging with MCFs can handle large-scale problems in a manner that is both computationally efficient and numerically stable.
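As a simple illustration of the sparsity mechanism (this example is ours and need not be one of the paper's MCF constructions), consider the exponential covariance function $k(x, x') = \sigma^2 \exp(-\theta |x - x'|)$, which corresponds to the Markovian Ornstein-Uhlenbeck process. On equispaced, ordered design points its correlation matrix $R$, with $R_{ij} = \rho^{|i-j|}$ and $\rho = e^{-\theta h}$, has the well-known tridiagonal inverse
\[
R^{-1} = \frac{1}{1-\rho^2}
\begin{pmatrix}
1 & -\rho & & & \\
-\rho & 1+\rho^2 & -\rho & & \\
 & \ddots & \ddots & \ddots & \\
 & & -\rho & 1+\rho^2 & -\rho \\
 & & & -\rho & 1
\end{pmatrix},
\]
available in closed form, so the inversion step that normally dominates the cost becomes linear in $n$ in this special case.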
Stochastic kriging is a popular metamodeling technique for representing the unknown response surface of a simulation model. However, the simulation model may be inadequate in the sense that there may be a non-negligible discrepancy between it and the real system of interest. Failing to account for the model discrepancy may result in erroneous prediction of the real system's performance and mislead the decision-making process. This paper proposes a metamodel that extends stochastic kriging to incorporate the model discrepancy. Both the simulation outputs and the real data are used to characterize the model discrepancy. The proposed metamodel can provably enhance the prediction of the real system's performance. We derive general results for experiment design and analysis, and demonstrate the advantage of the proposed metamodel relative to competing methods. Finally, we study the effect of Common Random Numbers (CRN). The use of CRN is well known to be detrimental to the prediction accuracy of stochastic kriging in general. By contrast, we show that the effect of CRN in the new context is substantially more complex. The use of CRN can be either detrimental or beneficial depending on the interplay between the magnitude of the observation errors and other parameters involved.
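Schematically, and in our own notation rather than the paper's, the framework described above can be summarized as follows: with $y^s(x)$ the simulation response surface and $\delta(x)$ the model discrepancy, both modeled as Gaussian processes, the averaged simulation outputs and the real observations satisfy
\[
\bar{\mathcal{Y}}(x_i) = y^s(x_i) + \bar{\varepsilon}(x_i), \qquad
y^r(x_j) = y^s(x_j) + \delta(x_j) + e_j,
\]
where $\bar{\varepsilon}$ is the averaged simulation error and $e_j$ is the observation error on the real data. Prediction of the real system's performance then conditions jointly on both data sources, which is where the interplay between CRN, the simulation errors, and the observation errors arises.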
Kriging (or Gaussian process regression) is a popular machine learning method owing to its flexibility and closed-form prediction expressions. However, one of the key challenges in applying kriging to engineering systems is that the available measurement data are scarce due to measurement limitations and high sensing costs. On the other hand, physical knowledge of the engineering system is often available and represented in the form of partial differential equations (PDEs). We present in this work a PDE Informed Kriging model (PIK), which introduces PDE information via a set of PDE points and conducts posterior prediction in a manner similar to the standard kriging method. The proposed PIK model can incorporate physical knowledge from both linear and nonlinear PDEs. To further improve learning performance, we propose an Active PIK framework (APIK) that designs PDE points to leverage the PDE information based on the PIK model and measurement data. The selected PDE points not only explore the whole input space but also exploit the locations where the PDE information is critical in reducing predictive uncertainty. Finally, an expectation-maximization algorithm is developed for parameter estimation. We demonstrate the effectiveness of APIK in two synthetic examples, a shock wave case study, and a laser heating case study.
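A minimal sketch of how linear PDE information enters a kriging model (our notation; nonlinear PDEs require additional treatment beyond this sketch): if the response is modeled as $u(x) \sim \mathcal{GP}(0, k(x, x'))$ and $\mathcal{L}$ is a linear PDE operator with $\mathcal{L}u = g$, then $\mathcal{L}u$ is again a Gaussian process with cross-covariances
\[
\operatorname{Cov}\big(u(x), \mathcal{L}u(z)\big) = \mathcal{L}_{z}\, k(x, z), \qquad
\operatorname{Cov}\big(\mathcal{L}u(z), \mathcal{L}u(z')\big) = \mathcal{L}_{z}\mathcal{L}_{z'}\, k(z, z'),
\]
so a set of PDE points $\{z_j\}$ contributes pseudo-observations $g(z_j)$, and posterior prediction proceeds as in standard kriging on the augmented data set.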
Scientists and engineers commonly use simulation models to study real systems for which actual experimentation is costly, difficult, or impossible. Many simulations are stochastic in the sense that repeated runs with the same input configuration will result in different outputs. For expensive or time-consuming simulations, stochastic kriging \citep{ankenman} is commonly used to generate predictions for simulation model outputs subject to uncertainty due to both function approximation and stochastic variation. Here, we develop and justify a few guidelines for experimental design that ensure the accuracy of stochastic kriging emulators. We decompose the error in stochastic kriging predictions into nominal, numeric, parameter estimation, and parameter estimation numeric components, and provide means to control each in terms of properties of the underlying experimental design. The design properties implied for each source of error are weakly conflicting, and broad principles are proposed. In brief, space-filling properties (small fill distance and large separation distance) should balance with replication at distinct input configurations, with the number of replications depending on the relative magnitudes of stochastic and process variability. Non-stationarity implies higher input density in more active regions, while regression functions imply a balance with traditional design properties. A few examples are presented to illustrate the results.
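For reference, the stochastic kriging model underlying these guidelines takes the form (with a constant trend for simplicity; regression functions generalize it) $\mathcal{Y}_j(x) = \beta_0 + M(x) + \varepsilon_j(x)$, where $M$ is the extrinsic Gaussian process and $\varepsilon_j$ the intrinsic simulation noise. The predictor at a new point $x_0$ based on the sample means $\bar{\mathcal{Y}}$ is
\[
\widehat{Y}(x_0) = \beta_0 + \Sigma_M(x_0, \cdot)^{\top} \big[\Sigma_M + \Sigma_\varepsilon\big]^{-1} \big(\bar{\mathcal{Y}} - \beta_0 \mathbf{1}\big),
\]
where, without CRN, $\Sigma_\varepsilon$ is diagonal with entries $\mathrm{Var}[\varepsilon(x_i)]/n_i$; the number of replications $n_i$ thus controls the intrinsic-noise contribution relative to the process variability, which is the trade-off behind the replication guideline.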
The cost of both generalized least squares (GLS) and Gibbs sampling in a crossed random effects model can easily grow faster than $N^{3/2}$ for $N$ observations. Ghosh et al. (2020) develop a backfitting algorithm that reduces the cost to $O(N)$. Here we extend that method to a generalized linear mixed model for logistic regression. We use backfitting within an iteratively reweighted penalized least squares algorithm. The specific approach is a version of penalized quasi-likelihood due to Schall (1991). A straightforward version of Schall's algorithm would also cost more than $N^{3/2}$ because it requires the trace of the inverse of a large matrix. We approximate that quantity at cost $O(N)$ and prove that this substitution makes an asymptotically negligible difference. Our backfitting algorithm also collapses the fixed effect with one random effect at a time in a way that is analogous to the collapsed Gibbs sampler of Papaspiliopoulos et al. (2020). We use a symmetric operator that facilitates efficient covariance computation. We illustrate our method on a real dataset from Stitch Fix. By properly accounting for crossed random effects, we show that a naive logistic regression could underestimate sampling variances by several hundredfold.
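To fix ideas, one iteration of a Schall-type penalized quasi-likelihood scheme for logistic regression (stated here in generic form, not as the paper's exact update) forms the working response and weights
\[
z = \eta + \frac{y - \mu}{\mu(1 - \mu)}, \qquad W = \operatorname{diag}\{\mu(1 - \mu)\}, \qquad \mu = \frac{1}{1 + e^{-\eta}},
\]
and then solves a penalized weighted least squares problem in the fixed and random effects. The backfitting step replaces the direct solve so that each iteration costs $O(N)$, and the trace of the large inverse matrix needed for the variance-component update is the quantity approximated at $O(N)$ cost above.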
We propose a latent topic model with a Markovian transition for process data, which consist of time-stamped events recorded in a log file. Such data are becoming more widely available in computer-based educational assessments with complex problem-solving items. The proposed model can be viewed as an extension of the hierarchical Bayesian topic model with a hidden Markov structure to accommodate the underlying evolution of an examinee's latent state. Using topic transition probabilities along with response times enables us to capture examinees' learning trajectories, making clustering/classification more efficient. A forward-backward variational expectation-maximization (FB-VEM) algorithm is developed to tackle the challenging computational problem. Useful theoretical properties are established under certain asymptotic regimes. The proposed method is applied to a complex problem-solving item in the 2012 Programme for International Student Assessment (PISA 2012).
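In schematic form (our notation, not necessarily the paper's exact specification), the generative structure can be summarized as a hidden Markov chain over latent states with state-specific topic distributions for the logged events: for the $t$-th recorded action of an examinee,
\[
S_t \mid S_{t-1} \sim \operatorname{Multinomial}\big(P_{S_{t-1},\,\cdot}\big), \qquad
w_t \mid S_t \sim \operatorname{Multinomial}\big(\phi_{S_t}\big),
\]
with response times modeled alongside the event sequence; the forward-backward recursions enter the E-step of the variational EM algorithm.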