
Computation of the expected value of a function of a chi-distributed random variable

Added by Paul Kabaila
Publication date: 2019
Language: English





We consider the problem of numerically evaluating the expected value of a smooth bounded function of a chi-distributed random variable, divided by the square root of the number of degrees of freedom. This problem arises in the contexts of simultaneous inference, the selection and ranking of populations, and the evaluation of multivariate t probabilities. It also arises in the assessment of the coverage probability and expected volume properties of some non-standard confidence regions. We use a transformation put forward by Mori, followed by the application of the trapezoidal rule. This rule has the remarkable property that, for suitable integrands, it is exponentially convergent. We use it to create a nested sequence of quadrature rules for the estimation of the approximation error, so that previous evaluations of the integrand are not wasted. The application of the trapezoidal rule requires the approximation of an infinite sum by a finite sum. We provide a new, easily computed upper bound on the error of this approximation. Our overall conclusion is that this method is a very suitable candidate for the computation of the coverage and expected volume properties of non-standard confidence regions.
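To make the approach concrete, here is a minimal Python sketch of the general technique the abstract describes: a Takahasi-Mori double-exponential transformation followed by the trapezoidal rule, with a nested halving of the step size for error estimation. The particular map x = exp(t - exp(-t)), the truncation window [-4, 4], the tolerances, and all names are illustrative assumptions, not the authors' exact algorithm or error bound.

import math

def chi_log_pdf(x, nu):
    # Log of the chi density with nu degrees of freedom:
    # f(x) = x^(nu-1) exp(-x^2/2) / (2^(nu/2-1) Gamma(nu/2)),  x > 0.
    return ((nu - 1.0) * math.log(x) - 0.5 * x * x
            - (0.5 * nu - 1.0) * math.log(2.0) - math.lgamma(0.5 * nu))

def expected_value(g, nu, h=0.05, t_lo=-4.0, t_hi=4.0):
    # Approximate E[ g(X / sqrt(nu)) ] for X ~ chi with nu degrees of freedom.
    # The double-exponential map x = exp(t - exp(-t)) sends the real line to
    # (0, infinity); the transformed integrand decays double-exponentially,
    # so the trapezoidal rule converges exponentially fast as h shrinks.
    # Truncating t to [t_lo, t_hi] replaces the infinite sum by a finite one.
    n = int(round((t_hi - t_lo) / h))
    total = 0.0
    for i in range(n + 1):
        t = t_lo + i * h
        x = math.exp(t - math.exp(-t))       # DE transformation
        dx_dt = x * (1.0 + math.exp(-t))     # Jacobian of the map
        w = 0.5 if i in (0, n) else 1.0      # trapezoidal end-point weights
        total += w * g(x / math.sqrt(nu)) * math.exp(chi_log_pdf(x, nu)) * dx_dt
    return h * total

def nested_estimate(g, nu, h0=0.4, tol=1e-12):
    # Halving h gives a nested sequence of rules: every abscissa of the
    # coarse rule reappears in the fine rule, so successive differences
    # estimate the approximation error; a real implementation would cache
    # the shared evaluations of g so that none are wasted.
    prev = expected_value(g, nu, h=h0)
    h = h0
    while True:
        h *= 0.5
        cur = expected_value(g, nu, h=h)
        if abs(cur - prev) <= tol:
            return cur
        prev = cur

As a quick sanity check, expected_value(lambda u: 1.0, nu=5) should return a value very close to 1, since the chi density integrates to one.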

Related research

Elise Janvresse (2008)
A random Fibonacci sequence is defined by the relation g_n = | g_{n-1} +/- g_{n-2} |, where the +/- sign is chosen by tossing a balanced coin for each n. We generalize these sequences to the case when the coin is unbalanced (denoting by p the probability of a +), and the recurrence relation is of the form g_n = |lambda g_{n-1} +/- g_{n-2} |. When lambda >=2 and 0 < p <= 1, we prove that the expected value of g_n grows exponentially fast. When lambda = lambda_k = 2 cos(pi/k) for some fixed integer k>2, we show that the expected value of g_n grows exponentially fast for p>(2-lambda_k)/4 and give an algebraic expression for the growth rate. The involved methods extend (and correct) those introduced in a previous paper by the second author.
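A minimal Monte Carlo sketch of this recurrence (illustrative only; the function name, sample sizes, and seed are assumptions) that estimates the growth rate of the expected value of g_n:

import math
import random

def expected_growth(lam, p, n=60, trials=20000, seed=1):
    # Estimates (1/n) * log E[g_n] for g_n = |lam*g_{n-1} +/- g_{n-2}|,
    # where "+" is chosen with probability p. n is kept small so that
    # g_n stays within double precision even when it grows like lam^n.
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        a, b = 1.0, 1.0                          # g_0 = g_1 = 1
        for _ in range(n):
            sign = 1.0 if random.random() < p else -1.0
            a, b = b, abs(lam * b + sign * a)
        total += b
    return math.log(total / trials) / n

For example, expected_growth(2.0, 0.5) should come out clearly positive, consistent with the exponential growth proved above for lambda >= 2.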
Gaussian Markov random fields are used in a large number of disciplines in machine vision and spatial statistics. The models take advantage of sparsity in matrices introduced through the Markov assumptions, and all operations in inference and prediction use sparse linear algebra operations that scale well with dimensionality. Yet, for very high-dimensional models, exact computation of predictive variances of linear combinations of variables is generally computationally prohibitive, and approximate methods (generally interpolation or conditional simulation) are typically used instead. A set of conditions is established under which the variances of linear combinations of random variables can be computed exactly using the Takahashi recursions. The ensuing computational simplification has wide applicability and may be used to enhance several software packages where model fitting is seated in a maximum-likelihood framework. The resulting algorithm is ideal for use in a variety of spatial statistical applications, including LatticeKrig modelling, statistical downscaling, and fixed rank kriging. It can compute hundreds of thousands of exact predictive variances of linear combinations on a standard desktop with ease, even when large spatial GMRF models are used.
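The quantity at stake can be illustrated with a small dense sketch (this is not the Takahashi recursions themselves, and the toy precision matrix is hypothetical): for a GMRF with precision matrix Q, the variance of a linear combination a'x is a'Q^{-1}a, which one Cholesky factorisation and one triangular solve deliver exactly:

import numpy as np

def linear_combination_variance(Q, a):
    # For a GMRF x ~ N(mu, Q^{-1}) with precision matrix Q = L L',
    # Var(a'x) = a' Q^{-1} a = || L^{-1} a ||^2, so a single solve
    # against the Cholesky factor gives the exact variance.
    # (A dense illustration; the Takahashi recursions mentioned above
    # achieve the same exactness while exploiting sparsity.)
    L = np.linalg.cholesky(Q)             # lower-triangular factor
    z = np.linalg.solve(L, a)             # forward substitution  L z = a
    return float(z @ z)

# Tiny hypothetical example: a tridiagonal (1-D random-walk style) precision.
n = 5
Q = 2.1 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
a = np.ones(n)                            # variance of the sum of all sites
print(linear_combination_variance(Q, a))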
We apply the holonomic gradient method to compute the distribution function of a weighted sum of independent noncentral chi-square random variables. It is the distribution function of the squared length of a multivariate normal random vector. We treat this distribution as an integral of the normalizing constant of the Fisher-Bingham distribution on the unit sphere and make use of the partial differential equations for the Fisher-Bingham distribution.
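As a brute-force reference point (a hedged sketch, not the holonomic gradient method; names and sample sizes are assumptions), the distribution function in question can be estimated by direct simulation with NumPy:

import numpy as np

def weighted_noncentral_chisq_cdf(t, weights, dofs, noncents,
                                  n_samples=200_000, seed=0):
    # Monte Carlo estimate of P( sum_i w_i * X_i <= t ), where the X_i are
    # independent noncentral chi-square variables with the given degrees of
    # freedom and noncentrality parameters. Useful as a slow but simple
    # check against faster methods such as the holonomic gradient method.
    rng = np.random.default_rng(seed)
    total = np.zeros(n_samples)
    for w, k, lam in zip(weights, dofs, noncents):
        total += w * rng.noncentral_chisquare(k, lam, size=n_samples)
    return float(np.mean(total <= t))

# Hypothetical example with two components:
print(weighted_noncentral_chisq_cdf(5.0, [1.0, 0.5], [2, 3], [0.3, 1.2]))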
Computing the expectation of kernel functions is a ubiquitous task in machine learning, with applications from classical support vector machines to exploiting kernel embeddings of distributions in probabilistic modeling, statistical inference, causal discovery, and deep learning. In all these scenarios, we tend to resort to Monte Carlo estimates as expectations of kernels are intractable in general. In this work, we characterize the conditions under which we can compute expected kernels exactly and efficiently, by leveraging recent advances in probabilistic circuit representations. We first construct a circuit representation for kernels and propose an approach to such tractable computation. We then demonstrate possible advancements for kernel embedding frameworks by exploiting tractable expected kernels to derive new algorithms for two challenging scenarios: 1) reasoning under missing data with kernel support vector regressors; 2) devising a collapsed black-box importance sampling scheme. Finally, we empirically evaluate both algorithms and show that they outperform standard baselines on a variety of datasets.
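The Monte Carlo fallback mentioned above is easy to sketch (illustrative names and parameters; the exact circuit-based computation from the paper is not reproduced here):

import numpy as np

def mc_expected_rbf_kernel(sample_p, sample_q, n=100_000, gamma=1.0, seed=0):
    # Monte Carlo estimate of E_{x~p, y~q}[ exp(-gamma * ||x - y||^2) ],
    # the generally intractable expected kernel that the circuit-based
    # approach above computes exactly for suitable distributions.
    rng = np.random.default_rng(seed)
    x = sample_p(rng, n)                  # (n, d) draws from p
    y = sample_q(rng, n)                  # (n, d) draws from q
    sq_dist = np.sum((x - y) ** 2, axis=1)
    return float(np.mean(np.exp(-gamma * sq_dist)))

# Hypothetical example: two Gaussians in 3 dimensions.
d = 3
p = lambda rng, n: rng.normal(0.0, 1.0, size=(n, d))
q = lambda rng, n: rng.normal(0.5, 1.0, size=(n, d))
print(mc_expected_rbf_kernel(p, q))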
Recent years have seen a huge development in spatial modelling and prediction methodology, driven by the increased availability of remote-sensing data and the reduced cost of distributed-processing technology. It is well known that modelling and prediction using infinite-dimensional process models is not possible with large data sets, and that both approximate models and, often, approximate-inference methods are needed. The problem of fitting simple global spatial models to large data sets has been solved through the likes of multi-resolution approximations and nearest-neighbour techniques. Here we tackle the next challenge, that of fitting complex, nonstationary, multi-scale models to large data sets. We propose doing this through the use of superpositions of spatial processes with increasing spatial scale and increasing degrees of nonstationarity. Computation is facilitated through the use of Gaussian Markov random fields and parallel Markov chain Monte Carlo based on graph colouring. The resulting model allows for both distributed computing and distributed data. Importantly, it provides opportunities for genuine model and data scalability and yet is still able to borrow strength across large spatial scales. We illustrate a two-scale version on a data set of sea-surface temperature containing on the order of one million observations, and compare our approach to state-of-the-art spatial modelling and prediction methods.