
Depth statistics

Authors: Karl Mosler
Added by: Pavel Bazovkin
Publication date: 2012
Language: English





In 1975 John Tukey proposed a multivariate median, defined as the deepest point in a given data cloud in R^d. Later, in measuring the depth of an arbitrary point z with respect to the data, David Donoho and Miriam Gasko considered hyperplanes through z and determined its depth as the smallest portion of data separated by such a hyperplane. Since then, these ideas have proved extremely fruitful. A rich statistical methodology has developed that is based on data depth and, more generally, on nonparametric depth statistics. General notions of data depth have been introduced, as well as many special ones. These notions vary in their computability and robustness and in their sensitivity to asymmetric shapes of the data. According to their different properties, they suit particular applications. The upper level sets of a depth statistic provide a family of set-valued statistics, named depth-trimmed or central regions. They describe the distribution with respect to its location, scale and shape. The most central region serves as a median. The notion of depth has been extended from data clouds, that is, empirical distributions, to general probability distributions on R^d, thus allowing for laws of large numbers and consistency results. It has also been extended from d-variate data to data in functional spaces.
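As a concrete illustration (not part of the abstract), the halfspace depth described above can be approximated by sampling directions: for each direction, count the data points in the closed halfspace through z, and take the smallest fraction found. The sketch below is our own minimal NumPy version; the function name `halfspace_depth` and all parameters are illustrative choices.

```python
import numpy as np

def halfspace_depth(z, X, n_dirs=1000, seed=0):
    """Approximate Tukey's halfspace depth of point z in data cloud X (n x d).

    Depth is the minimum, over hyperplanes through z, of the fraction of
    points on one side; sampling random directions gives an upper bound
    on the exact minimum.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, X.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit directions
    proj = (X - z) @ u.T                 # projections, shape (n, n_dirs)
    counts = (proj >= 0).sum(axis=0)     # points in each closed halfspace
    return counts.min() / len(X)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
print(halfspace_depth(np.zeros(2), X))           # deep: near the center
print(halfspace_depth(np.array([5.0, 5.0]), X))  # shallow: an outlying point
```

For a centrally symmetric cloud the depth of the center approaches 1/2, while outlying points get depth near 0, which is exactly the ordering the median construction relies on.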



Related research

A novel framework for the analysis of observation statistics on time-discrete linear evolutions in Banach space is presented. The model differs from traditional models for stochastic processes and, in particular, clearly distinguishes between the deterministic evolution of a system and the stochastic nature of observations on the evolving system. General Markov chains are defined in this context, and it is shown how typical traditional models of classical or quantum random walks and Markov processes fit into the framework and how a theory of quantum statistics (sensu Barndorff-Nielsen, Gill and Jupp) may be developed from it. The framework permits a general theory of joint observability of two or more observation variables, which may be viewed as an extension of the Heisenberg uncertainty principle and, in particular, offers a novel mathematical perspective on the violation of Bell's inequalities in quantum models. Main results include a general sampling theorem relative to Riesz evolution operators, in the spirit of von Neumann's mean ergodic theorem for normal operators in Hilbert space.
Satoshi Aoki (2016)
In this paper, we introduce the fundamental notion of a Markov basis, which provides one of the first connections between commutative algebra and statistics. The notion of a Markov basis was first introduced by Diaconis and Sturmfels (1998) for conditional testing problems on contingency tables by Markov chain Monte Carlo methods. In this method, we make use of a connected Markov chain over the given conditional sample space to estimate the P-values numerically for various conditional tests. A Markov basis plays an important role in this argument, because it guarantees the connectivity of the chain, which is needed for the unbiasedness of the estimate, for an arbitrary conditional sample space. As another important point, a Markov basis is characterized as a generating set of a well-specified toric ideal of a polynomial ring. This connection between commutative algebra and statistics is the main result of Diaconis and Sturmfels (1998). Since this first paper, Markov bases have been studied intensively by many researchers in both commutative algebra and statistics, which has yielded an attractive field called computational algebraic statistics. In this paper, we give a review of the Markov chain Monte Carlo methods for contingency tables and Markov bases, with some fundamental examples. We also give some computational examples using the algebraic software Macaulay2 and the statistical software R. Readers can also find theoretical details of the problems considered in this paper, and various results on the structure and examples of Markov bases, in Aoki, Hara and Takemura (2012).
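The Diaconis-Sturmfels idea is easiest to see on 2x2 tables, where the Markov basis consists of the single move (+1, -1; -1, +1). The sketch below is our own minimal illustration, not code from the paper: it runs a Metropolis chain over the fiber of tables with the observed margins, targeting the hypergeometric conditional distribution, and estimates the conditional P-value of the chi-square statistic.

```python
import numpy as np
from math import lgamma

def chisq(t):
    """Pearson chi-square statistic of a contingency table."""
    rows, cols = t.sum(1, keepdims=True), t.sum(0, keepdims=True)
    expected = rows * cols / t.sum()
    return ((t - expected) ** 2 / expected).sum()

def log_weight(t):
    """Conditional (hypergeometric) law given the margins: P(t) ∝ 1/∏ t_ij!."""
    return -sum(lgamma(x + 1) for x in t.ravel())

def mcmc_pvalue(table, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    move = np.array([[1, -1], [-1, 1]])   # the Markov basis for 2x2 tables
    t = np.array(table, dtype=int)
    obs = chisq(t)
    hits = 0
    for _ in range(n_steps):
        prop = t + rng.choice((-1, 1)) * move
        if (prop >= 0).all():             # stay inside the fiber
            # Metropolis acceptance targeting the hypergeometric law
            if np.log(rng.random()) < log_weight(prop) - log_weight(t):
                t = prop
        hits += chisq(t) >= obs
    return hits / n_steps

print(mcmc_pvalue([[10, 2], [3, 15]]))   # small: strong association
```

Connectivity of this chain on every fiber is exactly what the Markov basis guarantees; for larger tables the basis has more moves and is computed from generators of the associated toric ideal.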
The bootstrap for nonlinear statistics, such as U-statistics of dependent data, has been studied by several authors. This is typically done by producing a bootstrap version of the sample and plugging it into the statistic. We suggest an alternative approach to obtaining a bootstrap version of U-statistics, which can be described as a compromise between bootstrap and subsampling. We show the consistency of the new method and compare its finite-sample properties in a simulation study.
Wasserstein geometry and information geometry are two important structures on a manifold of probability distributions. Wasserstein geometry is defined by using the transportation cost between two distributions, so it reflects the metric of the base manifold on which the distributions are defined. Information geometry is defined to be invariant under reversible transformations of the base space. Both have their own merits for applications. In particular, statistical inference is based upon information geometry, where the Fisher metric plays a fundamental role, whereas Wasserstein geometry is useful in computer vision and AI applications. In this study, we analyze statistical inference based on Wasserstein geometry in the case where the base space is one-dimensional. Using the location-scale model, we derive the W-estimator that explicitly minimizes the transportation cost from the empirical distribution to a statistical model and study its asymptotic behavior. We show that the W-estimator is consistent and explicitly give its asymptotic distribution by using the functional delta method. The W-estimator is Fisher efficient in the Gaussian case.
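In one dimension the quadratic transportation cost is the L2 distance between quantile functions, so for a Gaussian location-scale model minimizing it reduces to a least-squares fit of the order statistics on the standard normal quantiles. The sketch below is our own illustration of this reduction (the name `w_estimator` and the quantile grid are our choices, not notation from the paper).

```python
import numpy as np
from statistics import NormalDist

def w_estimator(x):
    """Minimize the 1D quadratic transportation cost (W2 distance) between
    the empirical distribution and a location-scale model N(mu, sigma^2).

    Since W2^2 = integral of (F_n^{-1}(u) - mu - sigma*q(u))^2 du in 1D,
    the minimizer is the least-squares regression of the order statistics
    on the standard normal quantiles q(u)."""
    n = len(x)
    xs = np.sort(x)
    u = (np.arange(1, n + 1) - 0.5) / n                  # quantile levels
    q = np.array([NormalDist().inv_cdf(t) for t in u])   # standard quantiles
    qc = q - q.mean()
    sigma = np.sum(qc * xs) / np.sum(qc ** 2)
    mu = xs.mean() - sigma * q.mean()
    return mu, sigma

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=2000)
print(w_estimator(x))   # close to (2, 3)
```

Consistency of this estimator, and its Fisher efficiency in the Gaussian case, are the asymptotic results the abstract refers to.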
We review a finite-sampling exponential bound due to Serfling and discuss related exponential bounds for the hypergeometric distribution. We then discuss how such bounds motivate some new results for two-sample empirical processes. Our development complements recent results by Wei and Dudley (2011) concerning exponential bounds for two-sided Kolmogorov-Smirnov statistics by giving corresponding results for one-sided statistics, with emphasis on adjusted inequalities of the type proved originally by Dvoretzky, Kiefer, and Wolfowitz (1956) and by Massart (1990) in the one-sample case.
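The one-sided inequality with Massart's tight constant, P(sup_x (F_n(x) - F(x)) > lam) <= exp(-2 n lam^2), is straightforward to probe by simulation, since the supremum is distribution-free. The following sketch is our own illustration, not part of the paper.

```python
import numpy as np

def one_sided_dkw_check(n=200, lam=0.1, n_sim=2000, seed=0):
    """Monte Carlo check of the one-sided DKW bound with Massart's constant:
        P( sup_x (F_n(x) - F(x)) > lam ) <= exp(-2 n lam^2).
    Uniform(0,1) samples suffice because the statistic is distribution-free."""
    rng = np.random.default_rng(seed)
    grid = np.arange(1, n + 1) / n
    hits = 0
    for _ in range(n_sim):
        u = np.sort(rng.random(n))
        # one-sided KS statistic: sup_x (F_n(x) - x) = max_i (i/n - u_(i))
        d_plus = (grid - u).max()
        hits += d_plus > lam
    return hits / n_sim, np.exp(-2 * n * lam ** 2)

freq, bound = one_sided_dkw_check()
print(freq, bound)   # the empirical exceedance frequency vs. the bound
```

Up to Monte Carlo noise, the simulated exceedance frequency stays below the theoretical bound, which is what the adjusted inequalities discussed above sharpen and extend to the two-sample setting.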