
A Generalized Savage-Dickey Ratio

Posted by: Dr Ewan Cameron
Publication date: 2013
Research language: English
Author: Ewan Cameron





In this brief research note I present a generalized version of the Savage-Dickey Density Ratio for representation of the Bayes factor (or marginal likelihood ratio) of nested statistical models; the new version takes the form of a Radon-Nikodym derivative and is thus applicable to a wider family of probability spaces than the original (restricted to those admitting an ordinary Lebesgue density). A derivation is given following the measure-theoretic construction of Marin & Robert (2010), and the equivalent estimator is demonstrated in application to a distributional modeling problem.
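The classical (Lebesgue-density) form of the Savage-Dickey ratio is easy to verify numerically in a conjugate setting. The following is a minimal Python sketch, not taken from the note itself: for a Normal mean with known variance, the Bayes factor of the nested model M0: mu = 0 against the encompassing M1 equals the posterior-to-prior density ratio at mu = 0, and it matches the directly computed marginal-likelihood ratio. The prior variance and simulated data below are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma2, tau2 = 1.0, 2.0                 # known data variance, prior variance (assumed)
y = rng.normal(0.3, np.sqrt(sigma2), size=50)
n, ybar = len(y), y.mean()

# Conjugate posterior for mu under the encompassing model M1
v = 1.0 / (n / sigma2 + 1.0 / tau2)
m = v * n * ybar / sigma2

# Savage-Dickey: BF_01 = posterior density at mu = 0 / prior density at mu = 0
bf_sd = stats.norm.pdf(0.0, m, np.sqrt(v)) / stats.norm.pdf(0.0, 0.0, np.sqrt(tau2))

# Direct marginal-likelihood ratio for comparison.
# Under M1, p(y) factorizes through the sufficient statistic:
# p(y | mu) = f(y) * N(ybar; mu, sigma2/n), with f(y) free of mu,
# so p(y | M1) = f(y) * N(ybar; 0, sigma2/n + tau2).
logml0 = stats.norm.logpdf(y, 0.0, np.sqrt(sigma2)).sum()
logml1 = (stats.norm.logpdf(y, ybar, np.sqrt(sigma2)).sum()
          + stats.norm.logpdf(ybar, 0.0, np.sqrt(sigma2 / n + tau2))
          - stats.norm.logpdf(ybar, ybar, np.sqrt(sigma2 / n)))
bf_direct = np.exp(logml0 - logml1)
```

The two quantities agree to machine precision here; the generalization in the note replaces the density ratio with a Radon-Nikodym derivative so that the same identity holds when no Lebesgue density exists.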


Read also

Charpentier, Arthur & Mussard (2019)
A principal component analysis based on the generalized Gini correlation index is proposed (Gini PCA). The Gini PCA generalizes the standard PCA based on the variance. It is shown, in the Gaussian case, that the standard PCA is equivalent to the Gini PCA. It is also proven that the dimensionality reduction based on the generalized Gini correlation matrix, that relies on city-block distances, is robust to outliers. Monte Carlo simulations and an application on cars data (with outliers) show the robustness of the Gini PCA and provide different interpretations of the results compared with the variance PCA.
In this paper, we show that the likelihood-ratio measure (a) is invariant with respect to dominating sigma-finite measures, (b) satisfies logical consequences which are not satisfied by standard $p$-values, (c) respects frequentist properties, i.e., the type I error can be properly controlled, and, under mild regularity conditions, (d) can be used as an upper bound for posterior probabilities. We also discuss a generic application to test whether the genotype frequencies of a given population are under the Hardy-Weinberg equilibrium, under inbreeding restrictions or under outbreeding restrictions.
The likelihood ratio test (LRT) based on the asymptotic chi-squared distribution of the log likelihood is one of the fundamental tools of statistical inference. A recent universal LRT approach based on sample splitting provides valid hypothesis tests and confidence sets in any setting for which we can compute the split likelihood ratio statistic (or, more generally, an upper bound on the null maximum likelihood). The universal LRT is valid in finite samples and without regularity conditions. This test empowers statisticians to construct tests in settings for which no valid hypothesis test previously existed. For the simple but fundamental case of testing the population mean of d-dimensional Gaussian data, the usual LRT itself applies and thus serves as a perfect test bed to compare against the universal LRT. This work presents the first in-depth exploration of the size, power, and relationships between several universal LRT variants. We show that a repeated subsampling approach is the best choice in terms of size and power. We observe reasonable performance even in a high-dimensional setting, where the expected squared radius of the best universal LRT confidence set is approximately 3/2 times the squared radius of the standard LRT-based set. We illustrate the benefits of the universal LRT through testing a non-convex doughnut-shaped null hypothesis, where a universal inference procedure can have higher power than a standard approach.
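The sample-splitting recipe behind the universal LRT can be sketched in a few lines. The function below is an illustrative implementation, not the paper's code, for the simple Gaussian case it describes (H0: mu = 0, unit variance): fit the free mean on one half of the data, evaluate the likelihood ratio on the other half, and reject when the ratio exceeds 1/alpha. Finite-sample validity follows from Markov's inequality, since the split statistic has null expectation at most 1.

```python
import numpy as np
from scipy import stats

def split_lrt_reject(y, alpha=0.05, seed=0):
    """Split (universal) LRT for H0: mu = 0 with N(mu, 1) data.

    Sketch only: a single random split; the repeated-subsampling
    variant favored in the paper averages over many splits.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    d0, d1 = y[idx[: len(y) // 2]], y[idx[len(y) // 2 :]]
    mu1 = d1.mean()                       # unrestricted MLE on the held-out half
    log_u = (stats.norm.logpdf(d0, mu1, 1.0).sum()
             - stats.norm.logpdf(d0, 0.0, 1.0).sum())
    return log_u >= np.log(1.0 / alpha)   # finite-sample valid rejection rule
```

As the abstract notes, the single-split test is conservative relative to the exact chi-squared LRT available in this Gaussian test bed, which is what motivates the comparison of universal LRT variants.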
S. Riggi, D. Riggi & F. Riggi (2020)
Identification of charged particles in a multilayer detector by the energy-loss technique may also be achieved by the use of a neural network. The performance of the network becomes worse when a large fraction of information is missing, for instance due to detector inefficiencies. Algorithms which provide a way to impute missing information have been developed over the past years. Among the various approaches, we focused on normal mixture models in comparison with standard mean-imputation and multiple-imputation methods. Further, to account for the intrinsic asymmetry of the energy-loss data, we considered skew-normal mixture models and provided a closed-form implementation in the Expectation-Maximization (EM) algorithm framework to handle missing patterns. The method has been applied to a test case where the energy losses of pions, kaons and protons in a six-layer silicon detector are considered as input neurons to a neural network. Results are given in terms of reconstruction efficiency and purity of the various species in different momentum bins.
We propose and analyze a generalized splitting method to sample approximately from a distribution conditional on the occurrence of a rare event. This has important applications in a variety of contexts in operations research, engineering, and computational statistics. The method uses independent trials starting from a single particle. We exploit this independence to obtain asymptotic and non-asymptotic bounds on the total variation error of the sampler. Our main finding is that the approximation error depends crucially on the relative variability of the number of points produced by the splitting algorithm in one run, and that this relative variability can be readily estimated via simulation. We illustrate the relevance of the proposed method on an application in which one needs to sample (approximately) from an intractable posterior density in Bayesian inference.
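The flavor of such a splitting sampler can be illustrated on a toy rare event: drawing from X ~ N(0,1) conditional on {X > c}. The sketch below is an ad-hoc fixed-splitting scheme, not the paper's algorithm or tuning: a single starting particle is pushed through intermediate thresholds, survivors are cloned, and each clone is moved by a Metropolis kernel invariant for the current conditional law. A trial that dies out returns nothing, which is exactly the run-to-run variability the paper's error bounds are about.

```python
import numpy as np
from scipy import stats

def gs_sample_tail(c=3.0, n_levels=6, splits=4, n_mcmc=10, seed=0):
    """Fixed-splitting sketch for sampling N(0,1) conditional on {X > c}.

    Illustrative only: the level schedule, split factor, and
    random-walk step size are arbitrary choices for this toy example.
    """
    rng = np.random.default_rng(seed)
    levels = np.linspace(c / n_levels, c, n_levels)   # intermediate thresholds
    particles = rng.normal(size=1)                    # single starting particle
    for lev in levels:
        survivors = particles[particles > lev]        # keep particles past this level
        if survivors.size == 0:
            return np.array([])                       # the trial died out
        particles = np.repeat(survivors, splits)      # clone each survivor
        # Metropolis moves invariant for N(0,1) restricted to {x > lev}
        for _ in range(n_mcmc):
            prop = particles + rng.normal(scale=0.5, size=particles.size)
            log_acc = stats.norm.logpdf(prop) - stats.norm.logpdf(particles)
            ok = (prop > lev) & (np.log(rng.uniform(size=particles.size)) < log_acc)
            particles = np.where(ok, prop, particles)
    return particles
```

Pooling many independent trials gives an approximate conditional sample; for c = 3 the empirical mean should sit near E[X | X > 3] ≈ 3.28.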