
The volume-of-tube method for Gaussian random fields with inhomogeneous variance

Added by Satoshi Kuriki
Publication date: 2021
Language: English





The tube method or the volume-of-tube method approximates the tail probability of the maximum of a smooth Gaussian random field with zero mean and unit variance. This method evaluates the volume of a spherical tube about the index set, and then transforms it to the tail probability. In this study, we generalize the tube method to a case in which the variance is not constant. We provide the volume formula for a spherical tube with a non-constant radius in terms of curvature tensors, and the tail probability formula of the maximum of a Gaussian random field with inhomogeneous variance, as well as its Laplace approximation. In particular, the critical radius of the tube is generalized for evaluation of the asymptotic approximation error. As an example, we discuss the approximation of the largest eigenvalue distribution of the Wishart matrix with a non-identity matrix parameter. The Bonferroni method is the tube method when the index set is a finite set. We provide the formula for the asymptotic approximation error for the Bonferroni method when the variance is not constant.
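The finite-index case mentioned above can be sketched numerically. The following is a minimal illustration, assuming independent zero-mean Gaussians with differing standard deviations (our own toy setup; the paper treats general smooth fields and dependent components, and supplies the asymptotic approximation error): the Bonferroni bound sums the marginal tail probabilities, each scaled by its own standard deviation.

```python
import math
import random

def bonferroni_tail(u, sigmas):
    # Bonferroni upper bound: P(max_i X_i > u) <= sum_i P(X_i > u),
    # where X_i ~ N(0, sigma_i^2) -- the variance need not be constant.
    return sum(0.5 * math.erfc(u / (s * math.sqrt(2.0))) for s in sigmas)

def mc_tail(u, sigmas, n=200_000, seed=0):
    # Monte Carlo estimate of P(max_i X_i > u) for independent components.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if max(rng.gauss(0.0, s) for s in sigmas) > u
    )
    return hits / n
```

For independent components the bound is nearly tight at high thresholds, since the joint-exceedance terms it drops are of smaller order; quantifying the corresponding error in the general non-constant-variance case is what the formula in the paper addresses.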



Related Research

This paper is concerned with the existence of multiple points of Gaussian random fields. Under the framework of Dalang et al. (2017), we prove that, for a wide class of Gaussian random fields, multiple points do not exist in critical dimensions. The result is applicable to fractional Brownian sheets and the solutions of systems of stochastic heat and wave equations.
Our problem is to find a good approximation to the P-value of the maximum of a random field of test statistics for a cone alternative at each point in a sample of Gaussian random fields. These test statistics have been proposed in the neuroscience literature for the analysis of fMRI data allowing for unknown delay in the hemodynamic response. However, the null distribution of the maximum of this 3D random field of test statistics, and hence the threshold used to detect brain activation, remained unknown. To find a solution, we approximate the P-value by the expected Euler characteristic (EC) of the excursion set of the test statistic random field. Our main result is the required EC density, derived using the Gaussian Kinematic Formula.
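As a toy illustration of the EC heuristic (our own 1D example, not the paper's 3D cone statistic), consider the cosine process $X(t) = \xi_1\cos t + \xi_2\sin t$ on the circle, with $\xi_1,\xi_2$ i.i.d. standard normal. For $u>0$ the excursion set is a single arc when $\sqrt{\xi_1^2+\xi_2^2}>u$ and empty otherwise, so the expected Euler characteristic equals $e^{-u^2/2}$ and, in this special case, matches $P(\max_t X(t)>u)$ exactly:

```python
import math
import random

def ec_approx(u):
    # Expected Euler characteristic of the excursion set {t : X(t) > u}
    # for X(t) = xi1*cos(t) + xi2*sin(t) on the circle (u > 0).
    return math.exp(-u * u / 2.0)

def mc_max_tail(u, n=200_000, seed=1):
    # max_t X(t) = sqrt(xi1^2 + xi2^2), so estimate P(max > u) by Monte Carlo.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) > u
    )
    return hits / n
```

In general the expected EC only approximates the exceedance probability at high thresholds; the paper's contribution is the EC density for the cone-alternative statistic in 3D.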
We derive exact asymptotics of $\mathbb{P}\left(\sup_{t\in \mathcal{A}}X(t)>u\right)$, as $u\to\infty$, for a centered Gaussian field $X(t)$, $t\in \mathcal{A}\subset\mathbb{R}^n$, $n>1$, with continuous sample paths a.s. and general dependence structure, for which $\arg\max_{t\in \mathcal{A}} \mathrm{Var}(X(t))$ is a Jordan set with finite and positive Lebesgue measure of dimension $k\leq n$. Our findings are applied to deriving the asymptotics of tail probabilities related to performance tables and dependent chi processes.
We present a general framework for uncertainty quantification that is a mosaic of interconnected models. We define global first and second order structural and correlative sensitivity analyses for random counting measures acting on risk functionals of input-output maps. These are the ANOVA decomposition of the intensity measure and the decomposition of the random measure variance, each into subspaces. Orthogonal random measures furnish sensitivity distributions. We show that the random counting measure may be used to construct positive random fields, which admit decompositions of covariance and sensitivity indices and may be used to represent interacting particle systems. The first and second order global sensitivity analyses conveyed through random counting measures elucidate and integrate different notions of uncertainty quantification, and the global sensitivity analysis of random fields conveys the proportionate functional contributions to covariance. This framework complements others when used in conjunction with for instance algorithmic uncertainty and model selection uncertainty frameworks.
We prove an analogue of the classical ballot theorem that holds for any random walk in the range of attraction of the normal distribution. Our result is best possible: we exhibit examples demonstrating that if any of our hypotheses are removed, our conclusions may no longer hold.
