
Efficient Calculation of the Joint Distribution of Order Statistics

Added by Thorsten Dickhaus
Publication date: 2018
Research language: English





We consider the problem of computing the joint distribution of order statistics of stochastically independent random variables in one- and two-group models. While recursive formulas for evaluating the joint cumulative distribution function of such order statistics have long existed in the literature, their numerical implementation remains a challenging task. We tackle this task by presenting novel generalizations of known recursions, which we utilize to obtain exact results (calculated in rational arithmetic) as well as faithfully rounded results. Finally, some applications in stepwise multiple hypothesis testing are discussed.
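To illustrate the kind of recursion involved, here is a minimal sketch of one classical recursion for the joint CDF of order statistics of i.i.d. Uniform(0,1) variables (often attributed to Bolshev); this is a textbook special case, not the generalized one- and two-group recursions of the paper. Evaluating it with Python's `Fraction` type yields exact results in rational arithmetic, in the spirit of the abstract:

```python
from fractions import Fraction
from math import comb

def joint_cdf_uniform_order_stats(b):
    """P(U_(1) <= b[0], ..., U_(n) <= b[n-1]) for i.i.d. Uniform(0,1)
    variables, computed exactly via the O(n^2) recursion
        P_k = 1 - sum_{j=0}^{k-1} C(k,j) * P_j * (1 - b_{j+1})^(k-j),
    with P_0 = 1, where P_k is the joint CDF for a sample of size k."""
    b = [Fraction(x) for x in b]
    n = len(b)
    P = [Fraction(1)] * (n + 1)
    for k in range(1, n + 1):
        s = Fraction(0)
        for j in range(k):
            s += comb(k, j) * P[j] * (1 - b[j]) ** (k - j)
        P[k] = 1 - s
    return P[n]

# Example: n = 2 with bounds (1/4, 1/2); the exact value is 3/16.
print(joint_cdf_uniform_order_stats([Fraction(1, 4), Fraction(1, 2)]))
```

Because all intermediate quantities are rationals, no rounding error accumulates; the numerical difficulty the paper addresses arises for large n, where the binomial coefficients and alternating subtractions make naive floating-point evaluation unstable.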

Related research

Xinjia Chen, Kemin Zhou (2008)
Order statistics theory is applied in this paper to probabilistic robust control theory to compute the minimum sample size needed to obtain a reliable estimate of an uncertain quantity under a continuity assumption on the related probability distribution. The concept of distribution-free tolerance intervals is also applied to estimate the range of an uncertain quantity and to extract information about its distribution. To overcome the limitations imposed by the continuity assumption in existing order statistics theory, we have derived a cumulative distribution function of the order statistics without the continuity assumption and developed an inequality showing that this distribution has an upper bound equal to the corresponding distribution when the continuity assumption is satisfied. By applying this inequality, we investigate the minimum computational effort needed to obtain a reliable estimate for the upper bound (or lower bound) and the range of a quantity. We also give conditions, much weaker than the absolute continuity assumption, for the existence of such a minimum sample size. Furthermore, the issue of trading off performance level against risk is addressed, and a guideline for making this tradeoff is established. This guideline can be applied in general, without a continuity assumption.
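The distributional fact underlying such results is the classical binomial-tail identity for the CDF of a single order statistic: $X_{(k)} \leq x$ holds if and only if at least $k$ of the $n$ observations are $\leq x$, which is valid for any distribution, continuous or not. A minimal sketch (the function name is ours, for illustration only):

```python
from math import comb

def order_stat_cdf(n, k, p):
    """P(X_(k) <= x) for n i.i.d. observations, where p = F(x) = P(X_i <= x).
    X_(k) <= x iff at least k observations fall at or below x, so the CDF
    is a binomial tail -- an identity that needs no continuity assumption."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

# Example: with n = 2 and p = 0.5, P(min <= x) = 0.75 and P(max <= x) = 0.25.
print(order_stat_cdf(2, 1, 0.5), order_stat_cdf(2, 2, 0.5))
```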
We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables have an absolutely continuous density. When the correlation is positive, $$ P\big(M_n = \max_{i=1}^n X_i \leq x\big) = \sum_{j=1}^\infty \beta_{jx} \nu_{jx}^{n} \approx B_{x} \nu_{1x}^{n}, $$ where $\{X_i\}$ is a moving average of order 1 with positive correlation, $\{\nu_{jx}\}$ are the eigenvalues (singular values) of a Fredholm kernel, and $\nu_{1x}$ is the eigenvalue of maximum magnitude. A similar result is given when the correlation is negative; in that case there are more terms, and $$P(M_n < x) \approx B_{x} (1+\nu_{1x})^n.$$ The result is analogous to large-deviations expansions for estimates, since the maximum need not be standardized to have a limit. For the continuous case, the integral equations for the left and right eigenfunctions are converted to first-order linear differential equations. The eigenvalues satisfy an equation of the form $$\sum_{i=1}^\infty w_i(\lambda-\theta_i)^{-1}=\lambda-\theta_0$$ for certain known weights $\{w_i\}$ and eigenvalues $\{\theta_i\}$ of a given matrix. This can be solved by truncating the sum to an increasing number of terms.
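The final step described above — solving $\sum_i w_i(\lambda-\theta_i)^{-1} = \lambda-\theta_0$ with the sum truncated to more and more terms — can be sketched as follows. For positive weights, the truncated equation has a unique root above $\max_i \theta_i$ (the left side decreases there while the right side increases), so bisection suffices. The weights and eigenvalues below are purely illustrative, not taken from the paper:

```python
def solve_secular(w, theta, theta0, tol=1e-12):
    """Find the root of f(lam) = (lam - theta0) - sum_i w_i/(lam - theta_i)
    above max(theta) by bisection; f is monotone there when all w_i > 0."""
    f = lambda lam: (lam - theta0) - sum(wi / (lam - ti) for wi, ti in zip(w, theta))
    top = max(theta)
    lo, hi = top + 1e-9, top + 1.0
    while f(hi) < 0:                      # expand until the root is bracketed
        hi = top + 2 * (hi - top)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Truncate the infinite sum to m terms and watch the root stabilise:
roots = [solve_secular([2.0 ** -i for i in range(1, m + 1)],
                       [1.0 / i**2 for i in range(1, m + 1)], 0.0)
         for m in (2, 5, 10, 20)]
print(roots)
```

Because the illustrative weights decay geometrically, the roots of successive truncations converge quickly, mirroring the truncation strategy described in the abstract.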
We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables are discrete. When the correlation is positive, $$ P\big(M_n = \max_{i=1}^n X_i \leq x\big) = \sum_{j=1}^\infty \beta_{jx} \nu_{jx}^{n} \approx B_{x} r_{1x}^{n}, $$ where $\{\nu_{jx}\}$ are the eigenvalues of a certain matrix, $r_{1x}$ is the maximum magnitude of the eigenvalues, and $I$ depends on the number of possible values of the underlying random variables. The eigenvalues do not depend on $x$, only on its range.
Jean-Michel Marin (2012)
Among Monte Carlo techniques, importance sampling requires fine-tuning of a proposal distribution, which is now commonly achieved through iterative schemes. The Adaptive Multiple Importance Sampling (AMIS) of Cornuet et al. (2012) provides a significant improvement in stability and effective sample size due to the introduction of a recycling procedure. However, the consistency of the AMIS estimator has remained largely open. In this work we prove the convergence of AMIS, at the cost of a slight modification of the learning process. Contrary to Douc et al. (2007a), results are obtained here in the asymptotic regime where the number of iterations goes to infinity while the number of draws per iteration follows a fixed, growing sequence of integers. Hence some of the results shed new light on adaptive population Monte Carlo algorithms in this regime.
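The recycling idea at the heart of AMIS can be sketched in a few lines: at every iteration, all past draws are re-weighted under the deterministic mixture of all proposals used so far, and the next proposal is fit to the weighted sample. The following is a minimal illustrative sketch (a one-dimensional Gaussian target and Gaussian proposals of our own choosing, not the algorithm as specified by Cornuet et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of an illustrative target: N(3, 1)
    return -0.5 * (x - 3.0) ** 2

def amis_sketch(n_iter=10, n_per_iter=2000):
    """AMIS-style loop: adapt a Gaussian proposal each iteration and recycle
    ALL past draws with deterministic-mixture importance weights."""
    mus, sigmas = [0.0], [5.0]          # initial (deliberately poor) proposal
    xs = np.empty(0)
    for _ in range(n_iter):
        xs = np.concatenate([xs, rng.normal(mus[-1], sigmas[-1], n_per_iter)])
        # Deterministic mixture density over every proposal used so far
        mix = np.zeros_like(xs)
        for m, s in zip(mus, sigmas):
            mix += np.exp(-0.5 * ((xs - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        mix /= len(mus)
        w = np.exp(log_target(xs)) / mix
        w /= w.sum()
        # Fit the next proposal to the current weighted sample
        mu = np.sum(w * xs)
        sd = max(np.sqrt(np.sum(w * (xs - mu) ** 2)), 1e-3)
        mus.append(mu); sigmas.append(sd)
    return np.sum(w * xs), w            # self-normalized estimate of E[X]

est, w = amis_sketch()
print(est)   # should be close to the target mean, 3
```

The deterministic-mixture weighting is what stabilizes the estimator: draws from early, badly-tuned proposals receive small but unbiased-in-spirit weights instead of being discarded.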
We gather several results on the eigenvalues of the spatial sign covariance matrix of an elliptical distribution. It is shown that the eigenvalues are a one-to-one function of the eigenvalues of the shape matrix and that they are closer together than the latter. We further provide a one-dimensional integral representation of the eigenvalues, which facilitates their numerical computation.
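The qualitative claim — that the SSCM eigenvalues preserve the order of the shape-matrix eigenvalues but lie closer together — is easy to check by simulation. A minimal sketch with an illustrative 2-D Gaussian sample (the shape eigenvalues 4 and 1 are our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Elliptical (here Gaussian) sample whose shape matrix has eigenvalues 4 and 1
lam = np.array([4.0, 1.0])
X = rng.multivariate_normal([0.0, 0.0], np.diag(lam), size=50_000)

# Spatial sign covariance matrix: mean outer product of the unit vectors x/||x||
U = X / np.linalg.norm(X, axis=1, keepdims=True)
S = U.T @ U / len(U)

d = np.sort(np.linalg.eigvalsh(S))[::-1]   # SSCM eigenvalues, descending
print(d)   # same ordering as lam, but a much less extreme ratio; sums to 1
```

Since each row of `U` has unit norm, the SSCM has trace 1, so its eigenvalues are directly comparable to the normalized shape eigenvalues, and the shrinkage of their spread is visible immediately.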
