
Numerical studies of the metamodel fitting and validation processes

Added by Bertrand Iooss
Publication date: 2010
Language: English





Complex computer codes, for instance those simulating physical phenomena, are often too time-consuming to be used directly for uncertainty, sensitivity, optimization and robustness analyses. A widely accepted way to circumvent this problem is to replace the CPU-expensive computer model with an inexpensive mathematical function called a metamodel. In this paper, we focus on the Gaussian process metamodel and on two essential steps of its definition phase. First, the initial design of the computer code input variables (from which the metamodel is fitted) has to honor adequate space-filling properties. We adopt a numerical approach to compare the performance of different types of space-filling designs, within the class of optimal Latin hypercube samples, in terms of the predictivity of the subsequently fitted metamodel. We conclude that samples with minimal wrap-around discrepancy are particularly well suited to Gaussian process metamodel fitting. Second, the metamodel validation process consists in evaluating the predictivity of the metamodel with respect to the initial computer code. We propose and test an algorithm that optimizes the distance between the validation points and the metamodel learning points in order to estimate the true metamodel predictivity with a minimum number of validation points. Comparisons with classical validation algorithms and an application to a nuclear safety computer code show the relevance of this new sequential validation design.
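As a rough illustration of these two steps, the sketch below draws an optimized Latin hypercube design, fits a Gaussian process metamodel on it, and estimates the predictivity coefficient Q2 on held-out validation points. The test function, sample sizes and kernel are illustrative assumptions; SciPy's centered-discrepancy optimization stands in for the wrap-around discrepancy criterion studied in the paper, and the validation points are drawn at random rather than placed by the sequential algorithm proposed here.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(X):
    # Stand-in for an expensive computer code (illustrative assumption).
    return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

# Step 1: space-filling design of the input variables.
# An optimized Latin hypercube sample; "random-cd" minimizes the centered
# discrepancy, used here as a proxy for the wrap-around discrepancy.
sampler = qmc.LatinHypercube(d=2, optimization="random-cd", seed=0)
X_learn = sampler.random(n=40)
y_learn = simulator(X_learn)

# Fit the Gaussian process metamodel on the learning sample.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_learn, y_learn)

# Step 2: validation -- estimate the predictivity coefficient
#   Q2 = 1 - sum (y - yhat)^2 / sum (y - ybar)^2
# on separate validation points (random here, sequential in the paper).
rng = np.random.default_rng(1)
X_val = rng.random((200, 2))
y_val = simulator(X_val)
y_hat = gp.predict(X_val)
q2 = 1.0 - np.sum((y_val - y_hat) ** 2) / np.sum((y_val - y_val.mean()) ** 2)
print(f"Q2 predictivity: {q2:.3f}")
```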



Related research

This paper presents the first general (supervised) statistical learning framework for point processes in general spaces. Our approach is based on the combination of two new concepts, which we define in the paper: i) bivariate innovations, which are measures of discrepancy/prediction-accuracy between two point processes, and ii) point process cross-validation (CV), which we define here through point process thinning. The general idea is to carry out the fitting by predicting CV-generated validation sets using the corresponding training sets; the prediction error, which we minimise, is measured by means of bivariate innovations. Having established various theoretical properties of our bivariate innovations, we study in detail the case where the CV procedure is obtained through independent thinning, and we apply our statistical learning methodology to three typical spatial statistical settings, namely parametric intensity estimation, non-parametric intensity estimation and Papangelou conditional intensity fitting. Aside from deriving theoretical properties related to these cases, in each of them we numerically show that our statistical learning approach outperforms the state of the art in terms of mean (integrated) squared error.
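The sketch below illustrates the thinning idea in its simplest setting, a homogeneous Poisson process: each point is kept independently with probability p, the retained points act as the training pattern and the deleted points as the validation pattern. The retention probability, the toy intensity model and the held-out scoring rule are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A homogeneous Poisson pattern on the unit square (true intensity 100).
n = rng.poisson(100.0)
points = rng.random((n, 2))

def independent_thinning(points, p, rng):
    """Split a point pattern into (training, validation) by p-thinning."""
    keep = rng.random(len(points)) < p
    return points[keep], points[~keep]

# Repeat the thinning, fit on the training pattern, score on the held-out one.
# Under p-thinning the training pattern has intensity p * lambda, so the
# training estimate is rescaled by 1/p; the validation pattern then has
# intensity (1 - p) * lambda.
p = 0.8
for fold in range(5):
    train, valid = independent_thinning(points, p, rng)
    lam_hat = len(train) / p
    lam_valid = (1.0 - p) * lam_hat
    # Poisson log-likelihood of the validation pattern (up to a constant).
    loglik = len(valid) * np.log(lam_valid) - lam_valid
    print(f"fold {fold}: lambda_hat = {lam_hat:.1f}, held-out loglik = {loglik:.2f}")
```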
Monitoring several correlated quality characteristics of a process is common in modern manufacturing and service industries. Although a lot of attention has been paid to monitoring the multivariate process mean, not many control charts are available for monitoring the covariance matrix. This paper presents a comprehensive overview of the literature on control charts for monitoring the covariance matrix in a multivariate statistical process monitoring (MSPM) framework. It classifies the research that has previously appeared in the literature. We highlight the challenging areas for research and provide some directions for future research.
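One classical example from this literature is the generalized-variance chart, which tracks the determinant |S| of each subgroup's sample covariance matrix against limits built from its in-control moments. The sketch below uses the standard textbook moment approximation (E|S| = b1*|Sigma0|, Var|S| = b2*|Sigma0|^2); the dimension, subgroup size and in-control covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, n_subgroups = 2, 10, 30                    # dimension, subgroup size, subgroups
sigma0 = np.array([[1.0, 0.3], [0.3, 1.0]])      # in-control covariance matrix

# Textbook moment constants: E|S| = b1*|Sigma0|, Var|S| = b2*|Sigma0|^2.
prod1 = np.prod([n - i for i in range(1, p + 1)])
prod2 = np.prod([n - i + 2 for i in range(1, p + 1)])
b1 = prod1 / (n - 1) ** p
b2 = prod1 * (prod2 - prod1) / (n - 1) ** (2 * p)

det0 = np.linalg.det(sigma0)
ucl = det0 * (b1 + 3.0 * np.sqrt(b2))            # three-sigma limits on |S|
lcl = max(det0 * (b1 - 3.0 * np.sqrt(b2)), 0.0)

for t in range(n_subgroups):
    X = rng.multivariate_normal(np.zeros(p), sigma0, size=n)
    det_s = np.linalg.det(np.cov(X, rowvar=False))
    if not lcl <= det_s <= ucl:
        print(f"subgroup {t}: |S| = {det_s:.3f} outside ({lcl:.3f}, {ucl:.3f})")
```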
Replicability analysis aims to identify the findings that are replicated across independent studies examining the same features. We provide powerful novel replicability analysis procedures for two studies, with FWER and FDR control of the replicability claims. The suggested procedures first select the promising features from each study solely based on that study, and then test for replicability only the features that were selected in both studies. We incorporate plug-in estimates of the fraction of null hypotheses in one study among the hypotheses selected by the other study. Since the fraction of nulls in one study among the features selected from the other study is typically small, the power gain can be remarkable. We provide theoretical guarantees for the control of the appropriate error rates, as well as simulations that demonstrate the excellent power properties of the suggested procedures. We demonstrate the usefulness of our procedures on real data examples from two application fields: behavioural genetics and microarray studies.
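The sketch below shows only the general select-then-test shape of such procedures: select promising features within each study, keep those selected in both, and apply a Benjamini-Hochberg step-up to the maximum of the two p-values over that intersection. It is a toy version; the paper's procedures, including the plug-in estimates of the null fractions and the resulting FWER/FDR guarantees, differ in the details.

```python
import numpy as np

def replicability_bh(p1, p2, select_thresh=0.05, q=0.05):
    """Toy select-then-test: BH on max p-values over the doubly selected set."""
    selected = np.flatnonzero((p1 < select_thresh) & (p2 < select_thresh))
    if selected.size == 0:
        return selected
    pmax = np.maximum(p1[selected], p2[selected])
    order = np.argsort(pmax)
    m = selected.size
    # Benjamini-Hochberg step-up applied to the sorted max p-values.
    passed = np.flatnonzero(pmax[order] <= (np.arange(1, m + 1) / m) * q)
    k = passed.max() + 1 if passed.size else 0
    return np.sort(selected[order[:k]])

rng = np.random.default_rng(3)
m = 1000
p1, p2 = rng.random(m), rng.random(m)
p1[:20] = rng.random(20) * 1e-4    # 20 features with signal in both studies
p2[:20] = rng.random(20) * 1e-4
print("replicated features:", replicability_bh(p1, p2))
```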
V. Jain, M. Gerritsma (2021)
We give a derivation of the value of the inf-sup constant for the bilinear form $(p, \nabla \cdot u)$. We prove that the inf-sup constant is equal to 1.0 in all cases and is independent of the size and shape of the domain. Numerical tests validating the inf-sup constant are performed using the finite-dimensional spaces defined in \cite{2020jain} on two test domains: i) a square $\Omega = [0,1]^2$, and ii) a square $\Omega = [0,2]^2$, for varying mesh sizes and polynomial degrees. The numerical values are in agreement with the theoretical value of the inf-sup constant.
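For reference, the inf-sup (LBB) constant for this bilinear form is commonly written as below; the pressure space Q, velocity space V and their norms are left generic here, the paper's claim being that beta equals 1 for the spaces of \cite{2020jain}.

```latex
\[
  \beta \;=\; \inf_{p \in Q \setminus \{0\}} \; \sup_{u \in V \setminus \{0\}}
  \frac{(p, \nabla \cdot u)}{\lVert p \rVert_{Q} \, \lVert u \rVert_{V}}
\]
```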
We propose a new numerical scheme for Backward Stochastic Differential Equations (BSDEs) based on branching processes. We approximate an arbitrary (Lipschitz) driver by local polynomials and then use a Picard iteration scheme. Each step of the Picard iteration can be solved by using a representation in terms of branching diffusion systems, thus avoiding the need for a fine time discretization. In contrast to the previous literature on the numerical resolution of BSDEs based on branching processes, we prove the convergence of our numerical scheme without limitation on the time horizon. Numerical simulations are provided to illustrate the performance of the algorithm.
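A hedged sketch of the classical McKean branching-diffusion representation that such schemes build on: for the KPP equation $\partial_t v = \frac{1}{2} v_{xx} + \beta (v^2 - v)$ with $v(0,\cdot) = g$, one has $v(t,x) = \mathbb{E}[\prod_i g(x + W_i(t))]$, the product running over the particles of a binary branching Brownian motion with rate $\beta$. The paper extends this idea to general Lipschitz drivers via local polynomials and Picard iteration; the initial condition, rate and horizon below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def branching_estimate(x, t, beta, g, n_mc=20_000):
    """Monte Carlo estimate of v(t, x) via binary branching Brownian motion."""
    total = 0.0
    for _ in range(n_mc):
        particles = [(x, t)]          # stack of (position, remaining time)
        prod = 1.0
        while particles:
            pos, rem = particles.pop()
            tau = rng.exponential(1.0 / beta)        # lifetime before branching
            if tau >= rem:
                # Particle reaches the horizon: diffuse for rem, apply g.
                prod *= g(pos + np.sqrt(rem) * rng.standard_normal())
            else:
                # Binary branching: two offspring at the branching position.
                new_pos = pos + np.sqrt(tau) * rng.standard_normal()
                particles.append((new_pos, rem - tau))
                particles.append((new_pos, rem - tau))
        total += prod
    return total / n_mc

g = lambda y: 0.5 + 0.4 * np.tanh(y)   # initial condition in (0, 1), assumption
print(branching_estimate(x=0.0, t=0.5, beta=1.0, g=g))
```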