Estimation of a precision matrix (i.e., an inverse covariance matrix) is widely used to exploit conditional independence among continuous variables. The influence of abnormal observations is exacerbated as the dimensionality increases. In this work, we propose robust estimation of the inverse covariance matrix based on an $l_1$-regularized objective function with a weighted sample covariance matrix. The robustness of the proposed objective function is justified by the nonparametric integrated squared error criterion. To address the non-convexity of the objective function, we develop an efficient algorithm in the spirit of majorization-minimization. Asymptotic consistency of the proposed estimator is also established. The performance of the proposed method is compared with several existing approaches via numerical simulations, and we further demonstrate its merits with an application to genetic network inference.
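Below is a minimal sketch, in Python, of how such a weighted-covariance scheme could be paired with an $l_1$-penalized precision solve. It is not the authors' algorithm: the density-based weighting rule, the use of scikit-learn's `graphical_lasso` as the inner solver, and all function and parameter names (`robust_precision`, `alpha`, `n_iter`) are illustrative assumptions.

```python
# A minimal sketch (not the paper's exact algorithm) of robust precision
# matrix estimation by alternating between density-based observation
# weights and an l1-penalized precision solve. The weighting rule and all
# names (robust_precision, alpha, n_iter) are illustrative assumptions;
# scikit-learn's graphical_lasso stands in for the weighted l1 step.
import numpy as np
from sklearn.covariance import graphical_lasso


def robust_precision(X, alpha=0.1, n_iter=10, tol=1e-6):
    """Iteratively reweighted, l1-penalized precision matrix estimate."""
    n, p = X.shape
    Xc = X - np.median(X, axis=0)        # robust centering by the median
    ridge = 1e-6 * np.eye(p)             # tiny ridge keeps the input PD
    S = Xc.T @ Xc / n + ridge            # plain sample covariance to start
    _, Omega = graphical_lasso(S, alpha=alpha)
    for _ in range(n_iter):
        # Weight each observation by its fitted Gaussian density, so that
        # outliers (large Mahalanobis distance) contribute little.
        q = np.einsum("ij,jk,ik->i", Xc, Omega, Xc)   # x_i' Omega x_i
        w = np.exp(-0.5 * (q - q.min()))              # stabilized weights
        w /= w.sum()
        Sw = (Xc * w[:, None]).T @ Xc + ridge         # weighted covariance
        _, Omega_new = graphical_lasso(Sw, alpha=alpha)
        if np.max(np.abs(Omega_new - Omega)) < tol:
            return Omega_new
        Omega = Omega_new
    return Omega


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    X[:5] += 10.0                         # contaminate with gross outliers
    print(robust_precision(X, alpha=0.2).round(2))
```

Holding the weights fixed within each outer iteration makes every inner graphical-lasso solve convex; this alternation loosely mirrors the majorization-minimization idea described in the abstract, though the surrogate used by the authors may differ.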