We revisit the problem of simultaneously testing the means of $n$ independent normal observations under sparsity. We take a Bayesian approach to this problem by introducing a scale-mixture prior known as the normal-beta prime (NBP) prior. We first derive new concentration properties when the beta prime density is employed for a scale parameter in Bayesian hierarchical models. To detect signals in our data, we then propose a hypothesis test based on thresholding the posterior shrinkage weight under the NBP prior. Taking the loss function to be the expected number of misclassified tests, we show that our test procedure asymptotically attains the optimal Bayes risk when the signal proportion $p$ is known. When $p$ is unknown, we introduce an empirical Bayes variant of our test which also asymptotically attains the Bayes Oracle risk in the entire range of sparsity parameters $p \propto n^{-\epsilon}$, $\epsilon \in (0, 1)$. Finally, we also consider restricted marginal maximum likelihood (REML) and hierarchical Bayes approaches for estimating a key hyperparameter in the NBP prior and examine multiple testing under these frameworks.
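To make the shrinkage-weight test concrete, the following Python sketch illustrates one way such a rule could be carried out; it is not the paper's implementation. The hierarchy $\theta_i \mid \lambda_i^2 \sim N(0, \lambda_i^2)$ with $\lambda_i^2$ given a beta prime prior, the hyperparameter choice $a = b = 0.5$, the unit noise variance, and the cutoff of $1/2$ on the posterior shrinkage weight $E[1/(1+\lambda_i^2) \mid x_i]$ are all illustrative assumptions, and the weight is computed by brute-force quadrature rather than by any method analyzed in the paper.

import numpy as np
from scipy import integrate
from scipy.special import betaln

def beta_prime_logpdf(t, a, b):
    # log density of the beta prime distribution at t > 0
    return (a - 1.0) * np.log(t) - (a + b) * np.log1p(t) - betaln(a, b)

def posterior_shrinkage_weight(x, a=0.5, b=0.5, sigma2=1.0):
    # E[1/(1 + lambda^2) | x] under x | lambda^2 ~ N(0, sigma2*(1 + lambda^2))
    # and lambda^2 ~ beta-prime(a, b), computed by one-dimensional quadrature
    def unnorm_posterior(t):
        log_marginal = -0.5 * np.log(sigma2 * (1.0 + t)) - x**2 / (2.0 * sigma2 * (1.0 + t))
        return np.exp(log_marginal + beta_prime_logpdf(t, a, b))
    numer, _ = integrate.quad(lambda t: unnorm_posterior(t) / (1.0 + t), 0.0, np.inf)
    denom, _ = integrate.quad(unnorm_posterior, 0.0, np.inf)
    return numer / denom

# toy data: n = 200 sparse normal means with 10 signals of size 5
rng = np.random.default_rng(0)
n, n_signals, signal_size = 200, 10, 5.0
theta = np.zeros(n)
theta[:n_signals] = signal_size
x = theta + rng.standard_normal(n)

kappa = np.array([posterior_shrinkage_weight(xi) for xi in x])
reject = kappa < 0.5  # flag theta_i as nonzero when the shrinkage weight is small
print("true positives:", int(reject[:n_signals].sum()),
      "false positives:", int(reject[n_signals:].sum()))

Observations whose posterior shrinkage weight falls below the cutoff are flagged as signals; the quadrature is used here only to keep the sketch self-contained, and in practice one would estimate or restrict the hyperparameters as discussed above.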