For a skew normal random sequence, convergence rates of the distribution of its partial maximum to the Gumbel extreme value distribution are derived. The asymptotic expansion of the distribution of the normalized maximum is given under an optimal choice of norming constants. We find that the optimal convergence rate of the normalized maximum to the Gumbel extreme value distribution is proportional to $1/\log n$.
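As a quick numerical illustration of the $1/\log n$ rate (our sketch, not part of the abstract): for the shape-zero skew normal, i.e. the standard normal, $P(M_n \leq x) = \Phi(x)^n$ exactly, so the uniform distance to the Gumbel law under a classical choice of norming constants can be computed directly and compared with $1/\log n$. The constants below are the textbook choice for the normal, not necessarily the paper's optimal ones.

```python
# Illustrative only: sup_x |P((M_n - b_n)/a_n <= x) - Lambda(x)| for the
# standard normal (the shape-zero skew normal), where P(M_n <= x) = Phi(x)^n
# exactly, compared against 1/log n.
import numpy as np
from scipy.special import ndtr  # Phi, the standard normal CDF

def uniform_distance(n):
    # One classical choice of norming constants for the normal (assumption:
    # the paper's optimal constants may differ).
    b = np.sqrt(2 * np.log(n)) \
        - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * np.log(n)))
    a = 1.0 / b
    x = np.linspace(-3, 10, 20001)
    exact = ndtr(a * x + b) ** n          # P((M_n - b_n)/a_n <= x)
    gumbel = np.exp(-np.exp(-x))          # Gumbel CDF Lambda(x)
    return np.abs(exact - gumbel).max()

for n in (10**2, 10**4, 10**6):
    print(f"n = {n:>7}: distance = {uniform_distance(n):.4f}, "
          f"1/log n = {1 / np.log(n):.4f}")
```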
We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables have an absolutely continuous density. When the correlation is positive, $$ P(M_n = \max_{i=1}^n X_i \leq x) = \sum_{j=1}^\infty \beta_{jx}\, \nu_{jx}^{n} \approx B_{x}\, \nu_{1x}^{n}, $$ where $\{X_i\}$ is a moving average of order 1 with positive correlation, $\{\nu_{jx}\}$ are the eigenvalues (singular values) of a Fredholm kernel, and $\nu_{1x}$ is the eigenvalue of maximum magnitude. A similar result is given when the correlation is negative. The result is analogous to large deviations expansions for estimates, since the maximum need not be standardized to have a limit. For the continuous case the integral equations for the left and right eigenfunctions are converted to first order linear differential equations. The eigenvalues satisfy an equation of the form $$\sum_{i=1}^\infty w_i(\lambda-\theta_i)^{-1}=\lambda-\theta_0$$ for certain known weights $\{w_i\}$ and eigenvalues $\{\theta_i\}$ of a given matrix. This can be solved by truncating the sum to an increasing number of terms.
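To make the truncation step concrete, here is a hedged numerical sketch with made-up weights and nodes (the paper computes $\{w_i\}$ and $\{\theta_i\}$ from the Fredholm kernel; the values below are purely illustrative). With positive weights, the difference between the two sides of the truncated equation is strictly decreasing between consecutive poles $\theta_i$, falling from $+\infty$ to $-\infty$, so exactly one root lies in each gap and can be bracketed.

```python
# Hedged sketch of the truncation step: solve
#   sum_{i=1}^m w_i / (lambda - theta_i) = lambda - theta_0
# for illustrative weights and nodes (not taken from the paper).
import numpy as np
from scipy.optimize import brentq

theta0 = 0.5
theta = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical nodes theta_i, increasing
w = np.array([0.30, 0.20, 0.10, 0.05])   # hypothetical positive weights w_i

def secular(lam):
    return np.sum(w / (lam - theta)) - (lam - theta0)

# With w_i > 0 the function is strictly decreasing on each gap between poles,
# so exactly one root lies in each open interval (theta_i, theta_{i+1}).
eps = 1e-9
roots = [brentq(secular, theta[i] + eps, theta[i + 1] - eps)
         for i in range(len(theta) - 1)]
print("eigenvalue approximations between the poles:", roots)
```

Truncating to more terms adds more poles and hence more bracketed roots, which is the "increasing number of terms" scheme the abstract describes.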
We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables are discrete. When the correlation is positive, $$ P(M_n = \max_{i=1}^n X_i \leq x) = \sum_{j=1}^I \beta_{jx}\, \nu_{jx}^{n} \approx B_{x}\, r_{1x}^{n}, $$ where $\{\nu_{jx}\}$ are the eigenvalues of a certain matrix, $r_{1x}$ is the maximum magnitude of the eigenvalues, and $I$ depends on the number of possible values of the underlying random variables. The eigenvalues do not depend on $x$, only on its range.
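The role of the matrix can be illustrated with a transfer-matrix construction (our own, for illustration; the paper's matrix may differ in detail): for $X_i = Z_i + \theta Z_{i-1}$ with $Z_i$ i.i.d. on a finite support, $P(M_n \leq x)$ is a product of $n$ identical matrices, so its geometric decay rate is the eigenvalue of maximum magnitude.

```python
# Hedged transfer-matrix sketch (our construction): for
# X_i = Z_i + theta * Z_{i-1} with Z_i i.i.d. on a finite support,
# P(M_n <= x) = p' T^n 1 with T[a, b] = P(Z = b) * 1{b + theta*a <= x}.
import numpy as np

theta = 0.5                                  # positive correlation case
vals = np.array([0.0, 1.0, 2.0])             # hypothetical support of Z
p = np.array([0.5, 0.3, 0.2])                # hypothetical P(Z = v)
x = 2.0                                      # threshold

T = np.array([[p[j] * (vals[j] + theta * vals[i] <= x)
               for j in range(len(vals))] for i in range(len(vals))])

n = 50
exact = p @ np.linalg.matrix_power(T, n) @ np.ones(len(vals))
r1 = np.abs(np.linalg.eigvals(T)).max()      # r_{1x}: max eigenvalue magnitude
print(f"P(M_{n} <= {x}) = {exact:.3e}; dominant eigenvalue magnitude = {r1:.4f}")
```

Note that $T$ changes only when $x$ crosses one of the finitely many possible values of $b + \theta a$, which is consistent with the eigenvalues depending on the range of $x$ rather than on $x$ itself.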
Let $X_{nr}$ be the $r$th largest of a random sample of size $n$ from a distribution $F(x) = 1 - \sum_{i=0}^\infty c_i x^{-\alpha - i\beta}$ for $\alpha > 0$ and $\beta > 0$. An inversion theorem is proved and used to derive an expansion for the quantile $F^{-1}(u)$ and powers of it. From this an expansion in powers of $(n^{-1}, n^{-\beta/\alpha})$ is given for the multivariate moments of the extremes $\{X_{n, n-s_i}, 1 \leq i \leq k\}/n^{1/\alpha}$ for fixed ${\bf s} = (s_1, \ldots, s_k)$, where $k \geq 1$. Examples include the Cauchy, Student $t$, $F$, second extreme distributions and stable laws of index $\alpha < 1$.
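A small Monte Carlo sanity check of the leading term (our sketch; the constants are heuristic and not taken from the paper): for the standard Cauchy, $1 - F(x) = 1/(\pi x) - 1/(3\pi x^3) + \cdots$, so $\alpha = 1$, $\beta = 2$, $c_0 = 1/\pi$. The maximum itself has no mean, but the third largest order statistic scaled by $n^{1/\alpha} = n$ does, and by the classical point-process heuristic it should settle near $c_0\,E[1/\Gamma_3] = 1/(2\pi)$, where $\Gamma_3$ is a Gamma(3) variable.

```python
# Heuristic Monte Carlo check (ours): the mean of X_{n,n-2}/n for standard
# Cauchy samples should approach roughly c_0 * E[1/Gamma(3)] = 1/(2 pi).
import numpy as np

rng = np.random.default_rng(1)

def mean_third_largest(n, reps=2000):
    samples = rng.standard_cauchy((reps, n))
    third = np.partition(samples, n - 3, axis=1)[:, n - 3]  # 3rd largest value
    return (third / n).mean()

print("heuristic limit 1/(2 pi) =", 1 / (2 * np.pi))
for n in (100, 1000, 10000):
    print(n, mean_third_largest(n))
```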
We give analytic methods for nonparametric bias reduction that remove the need for computationally intensive methods like the bootstrap and the jackknife. We call an estimate {\it $p$th order} if its bias has magnitude $n_0^{-p}$ as $n_0 \to \infty$, where $n_0$ is the sample size (or the minimum sample size if the estimate is a function of more than one sample). Most estimates are only first order and require $O(N)$ calculations, where $N$ is the total sample size. The usual bootstrap and jackknife estimates are second order but computationally intensive, requiring $O(N^2)$ calculations for one sample. By contrast Jaeckel's infinitesimal jackknife is an analytic second order one sample estimate requiring only $O(N)$ calculations. When $p$th order bootstrap and jackknife estimates are available, they require $O(N^p)$ calculations, and so become even more computationally intensive if one chooses $p > 2$. For general $p$ we provide analytic $p$th order nonparametric estimates that require only $O(N)$ calculations. Our estimates are given in terms of the von Mises derivatives of the functional being estimated, evaluated at the empirical distribution. For products of moments an unbiased estimate exists: our form for this polykay is much simpler than the usual form in terms of power sums.
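A toy version of the analytic idea in its simplest setting (ours, not the paper's general construction): for a smooth function of the mean, $T(F) = g(\int x\,dF)$, the leading von Mises bias term of the plug-in estimate $g(\bar{X})$ is $g''(\mu)\sigma^2/(2n)$, so subtracting its empirical plug-in costs $O(N)$ operations and reduces the bias from order $n^{-1}$ to order $n^{-2}$.

```python
# Toy sketch (ours): analytic second order bias reduction for T(F) = g(mean).
# The leading delta-method bias of g(xbar) is g''(mu) * sigma^2 / (2 n);
# subtracting its empirical plug-in is O(N) and leaves bias of order n^{-2}.
import numpy as np

def g(t):  return np.exp(t)   # illustrative smooth functional
def g2(t): return np.exp(t)   # its second derivative

rng = np.random.default_rng(2)
n, reps = 30, 200000
x = rng.normal(0.0, 1.0, (reps, n))          # true value is g(0) = 1
m = x.mean(axis=1)
v = x.var(axis=1, ddof=1)

first = g(m)                                 # first order: bias O(1/n)
second = g(m) - g2(m) * v / (2 * n)          # second order: bias O(1/n^2)
print("first-order bias :", first.mean() - 1.0)
print("second-order bias:", second.mean() - 1.0)
```

The correction uses only sample moments, so unlike the bootstrap it needs no resampling; higher order versions stack further von Mises terms at the same $O(N)$ cost.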