
Geodesic Convexity and Regularized Scatter Estimators

Published by: Lutz Duembgen
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





As observed by Auderset et al. (2005) and Wiesel (2012), viewing covariance matrices as elements of a Riemannian manifold and using the concept of geodesic convexity provide useful tools for studying M-estimators of multivariate scatter. In this paper, we begin with a mathematically rigorous, self-contained overview of Riemannian geometry on the space of symmetric positive definite matrices and of the notion of geodesic convexity. The overview contains both a review and new results. In particular, we introduce and utilize first- and second-order Taylor expansions with respect to geodesic parametrizations. This enables us to give sufficient conditions for a function to be geodesically convex. In addition, we introduce the concept of geodesic coercivity, which is important in establishing the existence of a minimum of a geodesically convex function. We also develop a general partial Newton algorithm for minimizing smooth and strictly geodesically convex functions. We then use these results to generate a fairly complete picture of the existence, uniqueness and computation of regularized M-estimators of scatter defined using additive geodesically convex penalty terms. Various such penalties are demonstrated which shrink an estimator towards the identity matrix or multiples of the identity matrix. Finally, we propose a cross-validation method for choosing the scaling parameter of the penalty function, and illustrate our results with a numerical example.
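To make the geodesic parametrization concrete: on the manifold of symmetric positive definite (SPD) matrices, the geodesic from A to B is gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}. The sketch below (plain NumPy, not from the paper; all function names are my own) computes points along this geodesic and checks numerically that log det is geodesically linear, a standard consequence of this geometry.

```python
import numpy as np

def spd_power(A, t):
    """Matrix power A**t of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w ** t) @ V.T

def geodesic(A, B, t):
    """Point at time t on the Riemannian geodesic from A to B on the SPD manifold:
    gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, with gamma(0)=A, gamma(1)=B."""
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

# two random SPD matrices
rng = np.random.default_rng(1)
R1, R2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
A = R1 @ R1.T + np.eye(3)
B = R2 @ R2.T + np.eye(3)

# log det is geodesically linear: log det gamma(t) = (1-t) log det A + t log det B
t = 0.3
lhs = np.linalg.slogdet(geodesic(A, B, t))[1]
rhs = (1 - t) * np.linalg.slogdet(A)[1] + t * np.linalg.slogdet(B)[1]
```

The midpoint gamma(1/2) is the matrix geometric mean of A and B, which is symmetric in its arguments; that symmetry is another quick sanity check on the implementation.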




Read also

74 - Z.L. Wang 2021
We investigate a particular regularization of the big bang singularity which remains within the domain of 4-dimensional general relativity while allowing for degenerate metrics. We study geodesics and geodesic congruences in the modified Friedmann-Lemaître-Robertson-Walker universe. In particular, we calculate the expansion of timelike and null geodesic congruences. Based on these results, we also briefly discuss the cosmological singularity theorems.
128 - Yoav Benjamini, Amit Meir 2014
The problem of Voodoo correlations is recognized in neuroimaging as the problem of estimating quantities of interest from the same data that was used to select them as interesting. In statistical terminology, the problem of inference following selection from the same data is that of selective inference. Motivated by the unwelcome side effects of the recommended remedy, splitting the data, a method for constructing confidence intervals based on the correct post-selection distribution of the observations has been suggested recently. We utilize a similar approach in order to provide point estimates that account for a large part of the selection bias. We show via extensive simulations that the proposed estimator has favorable properties: it is likely to reduce estimation bias and the mean squared error compared to the direct estimator, without sacrificing power to detect non-zero correlation as in the data-splitting approach. We show that both point estimates and confidence intervals are needed to obtain a full assessment of the uncertainty in the point estimates, as both are integrated into the recently proposed Confidence Calibration Plots. The computation of the estimators is implemented in an accompanying software package.
Regression trees and their ensemble methods are popular methods for nonparametric regression: they combine strong predictive performance with interpretable estimators. To improve their utility for locally smooth response surfaces, we study regression trees and random forests with linear aggregation functions. We introduce a new algorithm that finds the best axis-aligned split to fit linear aggregation functions on the corresponding nodes, and we offer a quasilinear time implementation. We demonstrate the algorithm's favorable performance on real-world benchmarks and in an extensive simulation study, and we demonstrate its improved interpretability using a large get-out-the-vote experiment. We provide an open-source software package that implements several tree-based estimators with linear aggregation functions.
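The split criterion described above can be sketched in a brute-force form: score each candidate axis-aligned split by the summed squared error of per-child least-squares fits, and keep the best. This toy is quadratic time, not the quasilinear implementation the authors offer, and every name in it is my own.

```python
import numpy as np

def best_linear_split(X, y):
    """Brute-force search for the axis-aligned split (feature, threshold) that
    minimizes the summed SSE of ordinary least-squares fits in the two children."""
    n, d = X.shape

    def sse(idx):
        # SSE of a linear model (with intercept) fit to the points selected by idx
        A = np.column_stack([np.ones(idx.sum()), X[idx]])
        resid = y[idx] - A @ np.linalg.lstsq(A, y[idx], rcond=None)[0]
        return resid @ resid

    best = (np.inf, None, None)  # (total SSE, feature index, threshold)
    for j in range(d):
        for s in np.unique(X[:, j])[:-1]:  # drop max so the right child is nonempty
            left = X[:, j] <= s
            if left.sum() < d + 2 or (~left).sum() < d + 2:
                continue  # require enough points for a stable linear fit
            total = sse(left) + sse(~left)
            if total < best[0]:
                best = (total, j, s)
    return best

# demo: noiseless piecewise-linear data, kink at x = 0
x = np.linspace(-1, 1, 100)
X = x.reshape(-1, 1)
y = np.where(x < 0, x, 3 * x)
total, feat, thresh = best_linear_split(X, y)
```

On this data the search recovers the kink: the best split is on the single feature, just below zero, with essentially zero residual error in both children.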
A robust estimator is proposed for the parameters that characterize the linear regression problem. It is based on the notion of shrinkage, often used in finance and previously studied for outlier detection in multivariate data. A thorough simulation study is conducted to investigate the efficiency under normal and heavy-tailed errors, the robustness under contamination, the computational times, and the affine equivariance and breakdown value of the regression estimator. Two classical data sets often used in the literature and a real socio-economic data set about the Living Environment Deprivation of areas in Liverpool (UK) are studied. The results from the simulations and the real data examples show the advantages of the proposed robust estimator in regression.
Two-stage least squares (TSLS) estimators and variants thereof are widely used to infer the effect of an exposure on an outcome using instrumental variables (IVs). They belong to a wider class of two-stage IV estimators, which are based on fitting a conditional mean model for the exposure, and then using the fitted exposure values along with the covariates as predictors in a linear model for the outcome. We show that standard TSLS estimators enjoy greater robustness to model misspecification than more general two-stage estimators. However, by potentially using a wrong exposure model, e.g. when the exposure is binary, they tend to be inefficient. In view of this, we study double-robust G-estimators instead. These use working models for the exposure, IV and outcome but only require correct specification of either the IV model or the outcome model to guarantee consistent estimation of the exposure effect. As the finite sample performance of the locally efficient G-estimator can be poor, we further develop G-estimation procedures with improved efficiency and robustness properties under misspecification of some or all working models. Simulation studies and a data analysis demonstrate drastic improvements, with remarkably good performance even when one or more working models are misspecified.
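The two-stage construction described above can be illustrated on simulated data (a minimal sketch of standard TSLS with a single instrument, not the G-estimators the abstract develops; variable names and the data-generating process are my own assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)          # exposure
y = 2.0 * x + u + rng.normal(size=n)          # outcome; true effect is 2.0

def ols(M, v):
    """Least-squares coefficients of v on the columns of M."""
    return np.linalg.lstsq(M, v, rcond=None)[0]

ones = np.ones(n)
# naive OLS of y on x is biased upward because u drives both x and y
beta_ols = ols(np.column_stack([ones, x]), y)[1]

# stage 1: fit a conditional mean model for the exposure given the instrument
Z = np.column_stack([ones, z])
xhat = Z @ ols(Z, x)
# stage 2: regress the outcome on the fitted exposure values
beta_tsls = ols(np.column_stack([ones, xhat]), y)[1]
```

Because the confounder is independent of the instrument, the fitted exposure in stage 2 is purged of the confounding, and the TSLS coefficient is close to the true effect while the naive OLS coefficient is not.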