
A General Approach for Simulation-based Bias Correction in High Dimensional Settings

Posted by: Samuel Orso
Publication date: 2020
Research field: Mathematical statistics
Paper language: English

An important challenge in statistical analysis lies in controlling the bias of estimators due to ever-increasing data size and model complexity. Approximate numerical methods and data features such as censoring and misclassification often lead to analytical and/or computational difficulties when implementing standard estimators. As a consequence, consistent estimators may be difficult to obtain, especially in complex and/or high dimensional settings. In this paper, we study the properties of a general simulation-based estimation framework that allows one to construct bias-corrected consistent estimators. We show that, under more general conditions, the considered approach achieves stronger bias correction properties than alternative methods. Besides its bias correction advantages, the considered method can be used as a simple strategy to construct consistent estimators in settings where alternative methods may be challenging to apply. Moreover, the considered framework is easy to implement and computationally efficient. These theoretical results are highlighted with simulation studies of various commonly used models, including negative binomial regression (with and without censoring) and logistic regression (with and without misclassification errors). Additional numerical illustrations are provided in the supplementary materials.
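A minimal sketch of the general idea, assuming an iterative, simulation-based correction and using logistic regression with misclassified responses (one of the simulation-study models) as the running example, is given below in Python. The helper names (logistic_mle, simulate_misclassified, simulation_bias_correction), the number of simulations H, and the misclassification rate are illustrative assumptions, not the paper's exact estimator or notation.

```python
# Sketch of an iterative, simulation-based bias correction for logistic regression
# with misclassified responses. All names and constants below are illustrative.
import numpy as np

def logistic_mle(X, y, n_iter=25):
    """Naive logistic MLE via Newton-Raphson; inconsistent when labels are misclassified."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

def simulate_misclassified(beta, X, flip_prob, rng):
    """Draw responses from the logistic model, then flip each label with probability flip_prob."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    y = rng.binomial(1, p)
    flips = rng.random(y.shape) < flip_prob
    return np.where(flips, 1 - y, y)

def simulation_bias_correction(pi_hat, X, flip_prob, rng, H=50, n_iter=20):
    """Iteratively shift theta until the average naive estimator on simulated data matches pi_hat."""
    theta = pi_hat.copy()
    for _ in range(n_iter):
        sims = np.mean(
            [logistic_mle(X, simulate_misclassified(theta, X, flip_prob, rng))
             for _ in range(H)], axis=0)
        theta = theta + pi_hat - sims
    return theta

# toy illustration
rng = np.random.default_rng(0)
n, flip_prob = 2000, 0.1
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.0, -0.5])
y_obs = simulate_misclassified(beta_true, X, flip_prob, rng)
pi_hat = simulation_naive = logistic_mle(X, y_obs)     # biased by misclassification
beta_corrected = simulation_bias_correction(pi_hat, X, flip_prob, rng)
```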


Read also

We develop a Bayesian variable selection method, called SVEN, based on a hierarchical Gaussian linear model with priors placed on the regression coefficients as well as on the model space. Sparsity is achieved by using degenerate spike priors on inactive variables, whereas Gaussian slab priors are placed on the coefficients of the important predictors, making the posterior probability of a model available in explicit form (up to a normalizing constant). Strong model selection consistency is shown to be attained when the number of predictors grows nearly exponentially with the sample size, and even when the norm of mean effects solely due to the unimportant variables diverges, which is a novel attractive feature. An appealing byproduct of SVEN is the construction of novel model weight adjusted prediction intervals. Embedding a unique model-based screening and using fast Cholesky updates, SVEN produces a highly scalable computational framework to explore gigantic model spaces, rapidly identify the regions of high posterior probability, and make fast inference and prediction. A temperature schedule guided by our model selection consistency derivations is used to further mitigate multimodal posterior distributions. The performance of SVEN is demonstrated through a number of simulation experiments and a real data example from a genome-wide association study with over half a million markers.
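To illustrate why spike-and-slab priors make the posterior probability of a model available in explicit form up to a normalizing constant, the following sketch scores models under a generic degenerate-spike/Gaussian-slab prior. SVEN's hierarchical prior, screening, fast Cholesky updates, and temperature schedule are not reproduced here, and sigma2, tau2, and the inclusion probability are illustrative assumptions.

```python
# Sketch: unnormalized log posterior model scores under a generic spike-and-slab prior.
import itertools
import numpy as np

def log_marginal(y, X, gamma, sigma2=1.0, tau2=1.0):
    """log p(y | model gamma): y ~ N(0, sigma2*I + tau2 * X_g X_g^T) for active set gamma."""
    n = len(y)
    Xg = X[:, list(gamma)]
    cov = sigma2 * np.eye(n) + tau2 * (Xg @ Xg.T)
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, y)
    return -0.5 * (z @ z) - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)

def log_model_prior(gamma, p, w=0.1):
    """Independent Bernoulli(w) inclusion prior on each of the p predictors."""
    k = len(gamma)
    return k * np.log(w) + (p - k) * np.log(1 - w)

# enumerate all models for a tiny p (real methods explore the space stochastically)
rng = np.random.default_rng(3)
n, p = 100, 6
X = rng.normal(size=(n, p))
beta = np.array([1.5, -1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)
scores = {g: log_marginal(y, X, g) + log_model_prior(g, p)
          for r in range(p + 1) for g in itertools.combinations(range(p), r)}
best_model = max(scores, key=scores.get)   # posterior mode, up to the normalizing constant
```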
Yufei Yi, Matey Neykov (2020)
In this paper, we propose a cone projected power iteration algorithm to recover the first principal eigenvector from a noisy positive semidefinite matrix. When the true principal eigenvector is assumed to belong to a convex cone, the proposed algorithm is fast and has a tractable error. Specifically, the method achieves polynomial time complexity for certain convex cones equipped with fast projections, such as the monotone cone. It attains a small error when the noisy matrix has a small cone-restricted operator norm. We supplement the above results with a minimax lower bound on the error under the spiked covariance model. Our numerical experiments on simulated and real data show that our method achieves shorter run time and smaller error than ordinary power iteration and some sparse principal component analysis algorithms when the principal eigenvector lies in a convex cone.
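A minimal sketch of the idea, assuming the monotone cone and using isotonic regression as the cone projection, is given below; it is an illustration rather than the authors' implementation, and all constants are illustrative.

```python
# Sketch: power iteration with projection onto the monotone cone (entries nondecreasing).
import numpy as np
from sklearn.isotonic import isotonic_regression

def project_monotone_cone(v):
    """Euclidean projection onto {x : x_1 <= x_2 <= ... <= x_d} via isotonic regression."""
    return isotonic_regression(v)

def cone_projected_power_iteration(A, n_iter=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[0]
    v = project_monotone_cone(rng.normal(size=d))
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = project_monotone_cone(A @ v)     # power step followed by cone projection
        nrm = np.linalg.norm(w)
        if nrm == 0.0:                        # degenerate projection: restart the direction
            w = rng.normal(size=d)
            nrm = np.linalg.norm(w)
        v = w / nrm
    return v

# toy spiked example with a monotone leading eigenvector
rng = np.random.default_rng(1)
d = 50
u = np.linspace(0.0, 1.0, d)
u /= np.linalg.norm(u)
noise = rng.normal(size=(d, d))
A = 5.0 * np.outer(u, u) + 0.5 * (noise + noise.T) / np.sqrt(d)   # noisy symmetric matrix
v_hat = cone_projected_power_iteration(A, rng=rng)
print(abs(u @ v_hat))   # close to 1 when recovery succeeds
```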
While there is considerable work on change point analysis in univariate time series, more and more of the data being collected come from high dimensional multivariate settings. This paper introduces the asymptotic concept of high dimensional efficiency, which quantifies the detection power of different statistics in such situations. While related to classic asymptotic relative efficiency, it differs in that it provides the rate at which the change can shrink with dimension while still being detectable. This also allows for comparisons of methods with different null asymptotics, as is, for example, the case in high-dimensional change point settings. Based on this new concept, we investigate change point detection procedures using projections and develop asymptotic theory for how full panel (multivariate) tests compare with both oracle and random projections. Furthermore, for each given projection we can quantify a cone such that the corresponding projection statistic yields better power behavior if the true change direction lies within this cone. The effect of misspecification of the covariance on the power of the tests is investigated, because in many high dimensional situations estimation of the full dependency (covariance) between the multivariate observations in the panel is often computationally or even theoretically infeasible. It turns out that the projection statistic is much more robust in this respect in terms of size and somewhat more robust in terms of power. The theoretical quantification is accompanied by simulation results that confirm the asymptotic findings for surprisingly small samples. This shows in particular that the concept of high dimensional efficiency is indeed suitable to describe small sample power, which is demonstrated in a multivariate example of market index data.
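As a rough illustration of projection-based versus full-panel detection, the following sketch compares a standard CUSUM statistic applied to oracle and random projections with a simple coordinate-wise aggregation; the statistic and the toy data are generic placeholders, not the paper's exact tests.

```python
# Sketch: projected CUSUM vs. a simple full-panel aggregation for a mean change.
import numpy as np

def cusum_statistic(x):
    """max_k |S_k - (k/n) S_n| / sqrt(n) for a univariate series x."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return np.max(np.abs(s - (k / n) * s[-1])) / np.sqrt(n)

def projected_cusum(X, direction):
    """Project each d-dimensional observation onto `direction`, then apply CUSUM."""
    w = direction / np.linalg.norm(direction)
    return cusum_statistic(X @ w)

def full_panel_cusum(X):
    """Sum of squared coordinate-wise CUSUMs (one simple multivariate aggregation)."""
    return np.sum([cusum_statistic(X[:, j]) ** 2 for j in range(X.shape[1])])

# toy example: mean shift at t = 100 along a sparse direction
rng = np.random.default_rng(2)
n, d = 200, 500
delta = np.zeros(d)
delta[:5] = 0.5
X = rng.normal(size=(n, d))
X[100:] += delta
oracle_stat = projected_cusum(X, delta)              # oracle projection: knows the change direction
random_stat = projected_cusum(X, rng.normal(size=d)) # random projection
panel_stat = full_panel_cusum(X)
```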
Functional Magnetic Resonance Imaging (fMRI) maps cerebral activation in response to stimuli, but this activation is often difficult to detect, especially in low-signal contexts and single-subject studies. Accurate activation detection can be guided by the fact that very few voxels are, in reality, truly activated and that activated voxels are spatially localized, but it is challenging to incorporate both these facts. We provide a computationally feasible and methodologically sound model-based approach, implemented in the R package MixfMRI, that bounds the a priori expected proportion of activated voxels while also incorporating spatial context. Results on simulation experiments for different levels of activation detection difficulty are uniformly encouraging. The value of the methodology in low-signal and single-subject fMRI studies is illustrated on a sports imagination experiment. Concurrently, we also extend the potential use of fMRI as a clinical tool to, for example, detect awareness and improve treatment in individual patients in persistent vegetative state, such as traumatic brain injury survivors.
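As a rough, hedged illustration of bounding the expected proportion of activated voxels in a mixture model (ignoring the spatial component that MixfMRI also handles), the following sketch fits a two-component mixture to voxelwise statistics with the activation proportion capped during the M-step; it is not the MixfMRI implementation and all constants are illustrative.

```python
# Sketch: EM for a two-component mixture with the activated proportion capped at pi_max.
import numpy as np
from scipy.stats import norm

def em_capped_mixture(z, pi_max=0.05, mu1=3.0, n_iter=200):
    """EM for z ~ (1-pi)*N(0,1) + pi*N(mu1,1) with the constraint pi <= pi_max."""
    pi = pi_max / 2.0
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is activated
        num = pi * norm.pdf(z, loc=mu1, scale=1.0)
        den = num + (1.0 - pi) * norm.pdf(z, loc=0.0, scale=1.0)
        resp = num / den
        # M-step with the mixing proportion bounded above by pi_max
        pi = min(resp.mean(), pi_max)
        mu1 = (resp @ z) / max(resp.sum(), 1e-12)
    return pi, mu1, resp

# toy data: 3% of "voxels" carry a shifted signal
rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0.0, 1.0, 9700), rng.normal(3.0, 1.0, 300)])
pi_hat, mu_hat, resp = em_capped_mixture(z)
activated = resp > 0.5   # simple thresholding of the posterior activation probabilities
```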
The problem of estimating ARMA models is computationally interesting due to the nonconcavity of the log-likelihood function. Recent results were based on convex minimization. Joint model selection using penalization by a convex norm, e.g. the nuclear norm of a certain matrix related to the state-space formulation, has been extensively studied from a computational viewpoint. The goal of the present short note is to present a theoretical study of a variant, based on nuclear norm penalization, of the method of Bauer (Automatica, 2005; Econometric Theory, 2005) under the assumption of a Gaussian noise process.
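As a small illustration of the basic building block of nuclear norm penalized estimation, the sketch below implements the proximal operator of the nuclear norm (singular-value soft-thresholding); it is generic and not tied to the specific state-space matrix studied in the note.

```python
# Sketch: proximal operator of the nuclear norm via singular-value soft-thresholding.
import numpy as np

def prox_nuclear_norm(M, lam):
    """argmin_X 0.5*||X - M||_F^2 + lam*||X||_* : soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)
    return (U * s_thr) @ Vt

# One proximal-gradient step for min_X 0.5*||A(X) - b||^2 + lam*||X||_* would read
#   X <- prox_nuclear_norm(X - step * grad, step * lam)
```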