We propose a new adaptive empirical Bayes framework, the Bag-Of-Null-Statistics (BONuS) procedure, for multiple testing where each hypothesis testing problem is itself multivariate or nonparametric. BONuS is an adaptive and interactive knockoff-type method that improves testing power while controlling the false discovery rate (FDR), and is closely connected to the counting knockoffs procedure analyzed in Weinstein et al. (2017). Unlike procedures that start with a $p$-value for each hypothesis, our method analyzes the entire data set to adaptively estimate an optimal $p$-value transform based on an empirical Bayes model. Despite the extra adaptivity, our method controls FDR in finite samples even if the empirical Bayes model is incorrect or the estimation is poor. An extension, the Double BONuS procedure, validates the empirical Bayes model to guard against power loss due to model misspecification.
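The abstract gives no code; as a rough illustration of the knockoff-type FDR calibration that BONuS builds on, the sketch below (with hypothetical `scores` and `null_scores` arrays) picks the smallest threshold at which the counting-knockoffs FDR estimate stays below the target level $q$. This is only the generic counting step, not the full adaptive BONuS procedure.

```python
import numpy as np

def knockoff_threshold(scores, null_scores, q=0.1):
    """Smallest threshold t whose estimated FDR
    (1 + #{null_scores >= t}) / max(1, #{scores >= t}) is <= q."""
    # Candidate thresholds: the observed score values, in increasing order.
    for t in np.sort(scores):
        fdr_hat = (1 + np.sum(null_scores >= t)) / max(1, np.sum(scores >= t))
        if fdr_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR

rng = np.random.default_rng(0)
null_scores = rng.normal(size=1000)              # the "bag" of null statistics
scores = np.concatenate([rng.normal(size=900),   # true nulls
                         rng.normal(3, 1, 100)]) # signals
t = knockoff_threshold(scores, null_scores, q=0.1)
rejected = np.where(scores >= t)[0]
```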
Standardization has been a widely adopted practice in multiple testing, for it takes into account the variability in sampling and makes the test statistics comparable across different study units. However, despite conventional wisdom to the contrary, we show that there can be a significant loss in information from basing hypothesis tests on standardized statistics rather than the full data. We develop a new class of heteroscedasticity-adjusted ranking and thresholding (HART) rules that aim to improve existing methods by simultaneously exploiting commonalities and adjusting heterogeneities among the study units. The main idea of HART is to bypass standardization by directly incorporating both the summary statistic and its variance into the testing procedure. A key message is that the variance structure of the alternative distribution, which is subsumed under standardized statistics, is highly informative and can be exploited to achieve higher power. The proposed HART procedure is shown to be asymptotically valid and optimal for false discovery rate (FDR) control. Our simulation results demonstrate that HART achieves substantial power gain over existing methods at the same FDR level. We illustrate the implementation through a microarray analysis of myeloma.
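As a schematic of the ranking-and-thresholding logic (not the authors' estimator of the alternative variance structure), the sketch below assumes an oracle two-group model for the pairs $(x_i, s_i)$: it computes a local false discovery rate from both the summary statistic and its standard deviation, ranks the hypotheses, and rejects the largest set whose running average lfdr stays below $\alpha$. The parameters `pi1`, `mu1`, `tau1` are assumed known here; in practice they would be estimated.

```python
import numpy as np
from scipy.stats import norm

def hart_oracle_reject(x, s, pi1, mu1, tau1, alpha=0.05):
    """Rank by an lfdr computed from (x_i, s_i), then apply the step-up
    rule: reject while the running mean lfdr stays <= alpha.
    Assumed model -- null: x ~ N(0, s^2); alt: x ~ N(mu1, tau1^2 + s^2)."""
    f0 = norm.pdf(x, 0.0, s)
    f1 = norm.pdf(x, mu1, np.sqrt(tau1**2 + s**2))
    lfdr = (1 - pi1) * f0 / ((1 - pi1) * f0 + pi1 * f1)
    order = np.argsort(lfdr)                       # most promising first
    running_mean = np.cumsum(lfdr[order]) / np.arange(1, len(x) + 1)
    k = np.sum(running_mean <= alpha)              # size of the rejection set
    reject = np.zeros(len(x), dtype=bool)
    reject[order[:k]] = True
    return reject
```

Note how two units with equal $z$-scores $x_i/s_i$ can receive different lfdr values when their $s_i$ differ, which is exactly the information standardization discards.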
Modeling of longitudinal data often requires diffusion models that incorporate overall time-dependent, nonlinear dynamics of multiple components and provide sufficient flexibility for subject-specific modeling. This complexity challenges parameter inference, and approximations are inevitable. We propose a method for approximate maximum-likelihood parameter estimation in multivariate time-inhomogeneous diffusions, where subject-specific flexibility is accounted for by incorporating multidimensional mixed effects and covariates. We consider $N$ multidimensional independent diffusions $X^i = (X^i_t)_{0 \leq t \leq T^i},\ 1 \leq i \leq N$, with a common overall model structure and unknown fixed-effects parameter $\mu$. Their dynamics differ by the subject-specific random effect $\phi^i$ in the drift and possibly by (known) covariate information, different initial conditions, and different observation times and durations. The distribution of $\phi^i$ is parametrized by an unknown $\vartheta$, and $\theta = (\mu, \vartheta)$ is the target of statistical inference. Its maximum likelihood estimator is derived from the continuous-time likelihood. We prove consistency and asymptotic normality of $\hat{\theta}_N$ as the number $N$ of subjects goes to infinity using standard techniques, and consider the more general concept of local asymptotic normality for less regular models. The bias induced by time-discretization of the sufficient statistics is investigated. We discuss verification of the conditions and investigate parameter estimation and hypothesis testing in simulations.
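To make the mixed-effects diffusion setup concrete, here is a minimal Euler-Maruyama simulation of $N$ subjects under an assumed Ornstein-Uhlenbeck-type drift $b(t, x; \mu, \phi^i) = -(\mu + \phi^i)x$ with $\phi^i \sim N(0, \vartheta^2)$; the drift form, noise level, and initial condition are illustrative choices, not taken from the paper. The discretization step `dt` is also where the time-discretization bias discussed in the abstract enters.

```python
import numpy as np

def simulate_subjects(N=50, T=1.0, n_steps=200, mu=2.0, vartheta=0.5,
                      sigma=0.3, seed=0):
    """Euler-Maruyama paths of dX_t = -(mu + phi_i) X_t dt + sigma dW_t,
    with one random effect phi_i ~ N(0, vartheta^2) per subject."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    phi = rng.normal(0.0, vartheta, size=N)  # subject-specific random effects
    X = np.ones((N, n_steps + 1))            # common initial condition X_0 = 1
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        X[:, k + 1] = X[:, k] - (mu + phi) * X[:, k] * dt + sigma * dW
    return X, phi
```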
Gaussian graphical models (GGMs) are well-established tools for probabilistic exploration of dependence structures using precision matrices. We develop a Bayesian method to incorporate covariate information into this GGM setup in a nonlinear seemingly unrelated regression framework. We propose a joint predictor and graph selection model and develop an efficient collapsed Gibbs sampler to search the joint model space. Furthermore, we investigate its theoretical variable selection properties. We demonstrate our method on a variety of simulated data sets, concluding with a real data set from the TCPA project.
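The authors' collapsed Gibbs sampler is too involved to sketch here; as a plainly different but related illustration of reading a conditional-dependence graph off a precision matrix, the snippet below fits a graphical lasso (a frequentist point estimate, not the paper's Bayesian joint predictor-and-graph model) and extracts the edges from the nonzero off-diagonal entries.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# Toy data: 200 draws from a 5-dimensional Gaussian with shared correlation.
X = rng.multivariate_normal(np.zeros(5),
                            np.eye(5) + 0.4 * np.ones((5, 5)), size=200)

model = GraphicalLasso(alpha=0.1).fit(X)
Omega = model.precision_                     # estimated precision matrix
# Edges of the graph: nonzero off-diagonal entries (partial correlations).
edges = np.argwhere(np.triu(np.abs(Omega) > 1e-4, k=1))
```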
Advances in technology have generated abundant high-dimensional data and made it possible to integrate multiple related studies. Owing to their large computational advantage, variable screening methods based on marginal correlation have become promising alternatives to the popular regularization methods for variable selection. However, existing screening methods are limited to a single study. In this paper, we consider a general framework for variable screening with multiple related studies and propose a novel two-step screening procedure using a self-normalized estimator for high-dimensional regression analysis in this framework. Compared to the one-step procedure and the rank-based sure independence screening (SIS) procedure, our procedure greatly reduces false negatives while keeping a low false positive rate. Theoretically, we show that our procedure possesses the sure screening property under weaker assumptions on signal strength and allows the number of features to grow at an exponential rate in the sample size. In addition, we relax the commonly used normality assumption and allow sub-Gaussian distributions. Simulations and a real transcriptomic application illustrate the advantage of our method over the rank-based SIS method.
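As a hedged sketch of the two-step idea, the code below screens with plain marginal correlations rather than the paper's self-normalized estimator, and the thresholds `c1` and `c2` are illustrative placeholders: step 1 keeps features with a large average absolute correlation across studies, and step 2 re-screens the survivors with a stricter per-study rule to control false negatives.

```python
import numpy as np

def two_step_screen(studies, c1=0.1, c2=0.2):
    """studies: list of (X, y) pairs, X of shape (n_k, p).
    Step 1: keep feature j if its mean absolute marginal correlation
    across studies exceeds c1.  Step 2: among survivors, keep j only if
    its correlation exceeds c2 in every study.  Thresholds illustrative."""
    cors = []
    for X, y in studies:
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        r = (Xc * yc[:, None]).sum(axis=0) / (
            np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
        cors.append(np.abs(r))
    cors = np.vstack(cors)                  # shape (K studies, p features)
    step1 = cors.mean(axis=0) > c1
    step2 = step1 & (cors.min(axis=0) > c2)
    return np.where(step2)[0]
```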
In this paper, a new mixture family of multivariate normal distributions, formed by mixing a multivariate normal distribution with a skewed distribution, is constructed. Some properties of this family, such as the characteristic function, the moment generating function, and the first four moments, are derived. The distributions of affine transformations and canonical forms of the model are also derived. An EM-type algorithm is developed for maximum likelihood estimation of the model parameters. We consider in detail some special cases of the family, using standard gamma and standard exponential mixing distributions, denoted by MMNG and MMNE, respectively. For the proposed family of distributions, different multivariate measures of skewness are computed. To examine the performance of the developed estimation method, simulation studies are carried out, showing that the maximum likelihood estimates based on the EM-type algorithm perform well. For different choices of the parameters of the MMNE distribution, several multivariate measures of skewness are computed and compared. Because some measures of skewness are scalar and some are vectors, in order to evaluate them properly, we carry out a simulation study to determine the power of tests based on sample versions of these measures.
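To illustrate the MMNE construction under one common stochastic representation, assumed here as $X = \mu + \delta W + Z$ with $W \sim \mathrm{Exp}(1)$ and $Z \sim N_p(0, \Sigma)$ (the paper's exact parametrization may differ), the sketch below draws samples and computes Mardia's scalar sample skewness, one of the multivariate skewness measures of the kind compared in the abstract.

```python
import numpy as np

def rmmne(n, mu, delta, Sigma, seed=0):
    """Sample X = mu + delta * W + Z, with W ~ Exp(1) and Z ~ N_p(0, Sigma).
    This representation is assumed for illustration."""
    rng = np.random.default_rng(seed)
    W = rng.exponential(1.0, size=n)
    Z = rng.multivariate_normal(np.zeros(len(mu)), Sigma, size=n)
    return mu + np.outer(W, delta) + Z

def mardia_skewness(X):
    """Mardia's multivariate sample skewness b_{1,p}."""
    n = len(X)
    Xc = X - X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = Xc @ Sinv @ Xc.T
    return (G**3).sum() / n**2

X = rmmne(2000, mu=np.zeros(2), delta=np.array([2.0, 1.0]), Sigma=np.eye(2))
b1p = mardia_skewness(X)   # positive skewness induced by the Exp(1) mixing
```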