
Stable Principal Component Pursuit

Published by: Zihan Zhou
Publication date: 2010
Research field: Informatics Engineering
Paper language: English

In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result showing that classical Principal Component Analysis (PCA), which is optimal for small i.i.d. noise, can be made robust to gross sparse errors, and the first showing that the newly proposed PCP can be made stable to small entry-wise perturbations.
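As a rough, minimal sketch of how such a relaxed program can be solved numerically (not the authors' own solver), the following alternates exact proximal steps on the unconstrained Lagrangian form min ||L||_* + lam*||S||_1 + (mu/2)*||M - L - S||_F^2: singular-value thresholding for the low-rank part and entry-wise soft thresholding for the sparse part. The default lam = 1/sqrt(max(m, n)) and the fixed mu are common heuristics, not values prescribed by the paper.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entry-wise soft thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def stable_pcp(M, lam=None, mu=10.0, n_iter=500):
    """Alternating minimization of the Lagrangian form of relaxed PCP:
    min ||L||_* + lam * ||S||_1 + (mu / 2) * ||M - L - S||_F^2.
    lam and mu are heuristic defaults, not the paper's tuned values."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # common PCP heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, 1.0 / mu)             # exact minimizer over L
        S = soft_threshold(M - L, lam / mu)  # exact minimizer over S
    return L, S

# Toy usage: rank-2 matrix plus 5% gross errors plus small noise.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
mask = rng.random((100, 100)) < 0.05
S0 = np.where(mask, 10 * rng.standard_normal((100, 100)), 0.0)
M = L0 + S0 + 0.01 * rng.standard_normal((100, 100))
L_hat, _ = stable_pcp(M)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # small relative error
```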

Read also

The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem $\mathcal{NP}$-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity $\mathcal{O}\left(N^{D+1}\right)$, where $N$ and $D$ are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
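To make the opening equivalence concrete (the k-sparse principal component lives on the support of the k x k principal submatrix with the largest maximum eigenvalue), here is a brute-force sketch that enumerates all supports. It is exponential in N and does not implement the paper's polynomial $\mathcal{O}(N^{D+1})$ algorithm; it only illustrates the objective being optimized.

```python
import numpy as np
from itertools import combinations

def sparse_pc_bruteforce(A, k):
    """Exhaustive search for the k-sparse principal component of a
    symmetric matrix A: over every size-k support, take the largest
    eigenvalue of the corresponding principal submatrix and keep the
    best (exponential in N; illustration of the equivalence only)."""
    N = A.shape[0]
    best_val, best_x = -np.inf, None
    for idx in combinations(range(N), k):
        w, V = np.linalg.eigh(A[np.ix_(idx, idx)])
        if w[-1] > best_val:                 # eigh sorts ascending
            best_val = w[-1]
            best_x = np.zeros(N)
            best_x[list(idx)] = V[:, -1]     # embed eigenvector in support
    return best_val, best_x

# Toy usage on a small rank-3 positive semidefinite matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 3))
A = B @ B.T
val, x = sparse_pc_bruteforce(A, k=3)
print(val, np.flatnonzero(x))
```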
Channel-state-information (CSI) feedback methods are considered, especially for massive or very large-scale multiple-input multiple-output (MIMO) systems. To extract the essential information from the CSI without the redundancy that arises from highly correlated antennas, the receiver transforms (sparsifies) the correlated CSI vector into an uncorrelated sparse CSI vector using a Karhunen-Loeve transform (KLT) matrix, which consists of the eigenvectors of the covariance matrix of the CSI vector, and feeds back the essential components of the sparse CSI, i.e., a principal component analysis method. The transmitter then recovers the original CSI through the inverse transformation of the feedback vector. To obtain the covariance matrix at the transceiver, we analytically derive the covariance matrix of spatially correlated Rayleigh fading channels from the channel statistics, including the transmit- and receive-antenna correlation matrices, the channel variance, and the channel delay profile. With knowledge of the channel statistics, the transceiver can readily obtain the covariance and KLT matrices. The compression feedback error and bit-error-rate performance of the proposed method are analyzed. Numerical results verify that the proposed method is promising: it significantly reduces the feedback overhead of massive-MIMO systems with marginal performance degradation relative to full-CSI feedback (e.g., an 80% reduction in feedback amount, i.e., 1/5 of the original CSI, with a spectral-efficiency reduction of only 2%). Furthermore, we show numerically that, for a given limited feedback amount, we can find the optimal number of transmit antennas that achieves the largest spectral efficiency, which is a new design framework.
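A minimal numerical sketch of the described KLT-based feedback, assuming the covariance matrix R of the (real-valued, for simplicity) CSI vector is already known at both ends: the receiver projects the CSI onto the top-k eigenvectors of R and feeds back only those k coefficients, and the transmitter reconstructs by the inverse transform. The exponential correlation model, the dimensions, and the number of fed-back components are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 64, 13                    # CSI length; feed back ~1/5 of it

# Illustrative covariance: exponential antenna-correlation model.
rho = 0.95
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# KLT matrix: eigenvectors of R, strongest components first.
_, U = np.linalg.eigh(R)         # eigh sorts eigenvalues ascending
U = U[:, ::-1]

# Receiver side: correlated CSI realization, sparsify, keep k terms.
h = np.linalg.cholesky(R) @ rng.standard_normal(n)
fb = U[:, :k].T @ h              # the k fed-back coefficients

# Transmitter side: inverse transform of the feedback vector.
h_hat = U[:, :k] @ fb
print("relative error:", np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```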
Recent methods for learning a linear subspace from data corrupted by outliers are based on convex $\ell_1$ and nuclear-norm optimization and require the dimension of the subspace and the number of outliers to be sufficiently small. In sharp contrast, the recently proposed Dual Principal Component Pursuit (DPCP) method can provably handle subspaces of high dimension by solving a non-convex $\ell_1$ optimization problem on the sphere. However, its geometric analysis is based on quantities that are difficult to interpret and are not amenable to statistical analysis. In this paper we provide a refined geometric analysis and a new statistical analysis which show that DPCP can tolerate as many outliers as the square of the number of inliers, thus improving upon other provably correct robust PCA methods. We also propose a scalable Projected Sub-Gradient Method (DPCP-PSGM) for solving the DPCP problem and show that it admits linear convergence even though the underlying optimization problem is non-convex and non-smooth. Experiments on road-plane detection from 3D point-cloud data demonstrate that DPCP-PSGM can be more efficient than the traditional RANSAC algorithm, one of the most popular methods for such computer vision applications.
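A minimal sketch of a projected sub-gradient iteration for the DPCP problem, min over unit vectors b of $\|X^T b\|_1$, where the columns of X are the data points: take a sub-gradient step and project back onto the sphere. The normalized step and the geometrically decaying step size are generic choices here; the paper's DPCP-PSGM analysis uses its own step-size schemes that this toy does not reproduce.

```python
import numpy as np

def dpcp_psgm(X, n_iter=500, alpha0=0.1, decay=0.99, seed=3):
    """Projected sub-gradient sketch for min ||X.T @ b||_1 over the
    unit sphere. With few enough outliers, b approaches a normal
    vector of the inlier subspace."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(X.shape[0])
    b /= np.linalg.norm(b)
    alpha = alpha0
    for _ in range(n_iter):
        g = X @ np.sign(X.T @ b)          # sub-gradient of ||X.T b||_1
        gn = np.linalg.norm(g)
        if gn > 0:
            b = b - alpha * g / gn        # normalized sub-gradient step
        b /= np.linalg.norm(b)            # project back to the sphere
        alpha *= decay                    # geometrically decaying step
    return b

# Toy usage: inliers on the plane z = 0 in R^3 plus random outliers;
# the recovered b should be approximately [0, 0, +-1].
rng = np.random.default_rng(4)
inliers = np.vstack([rng.standard_normal((2, 200)), np.zeros((1, 200))])
outliers = rng.standard_normal((3, 40))
X = np.hstack([inliers, outliers])
print(np.round(dpcp_psgm(X), 3))
```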
In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\text{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max\left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \frac{\sqrt{L}}{\sqrt{K}+1.15\sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, extensive numerical experiments show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
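The exact SSMP selection rule (choosing L indices per iteration to minimize the subspace distance to the residual space) is not reproduced here; the sketch below is a simplified simultaneous-OMP-style variant, picking one index per iteration by residual correlation, meant only to fix the joint sparse recovery setup Y = AX with a row-sparse X.

```python
import numpy as np

def somp(A, Y, K):
    """Simplified simultaneous-OMP sketch for joint sparse recovery of
    a row K-sparse X from Y = A @ X: one index per iteration, chosen
    by total residual correlation (not the paper's SSMP rule)."""
    support = []
    R = Y.copy()                                  # residual matrix
    for _ in range(K):
        score = np.linalg.norm(A.T @ R, axis=1)   # per-column score
        score[support] = -np.inf                  # never re-pick
        support.append(int(np.argmax(score)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s               # refresh residual
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = X_s
    return X, sorted(support)

# Toy usage: row 3-sparse signal observed through 4 measurement vectors.
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60))
X0 = np.zeros((60, 4))
X0[[5, 17, 42]] = rng.standard_normal((3, 4))
_, supp = somp(A, Y=A @ X0, K=3)
print(supp)  # expect [5, 17, 42]
```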
We consider the problem of identifying the sparse principal component of a rank-deficient matrix. We introduce auxiliary spherical variables and prove that there exists a set of candidate index-sets (that is, sets of indices to the nonzero elements of the vector argument) whose size is polynomially bounded in terms of rank and contains the optimal index-set, i.e., the index-set of the nonzero elements of the optimal solution. Finally, we develop an algorithm that computes the optimal sparse principal component in polynomial time for any sparsity degree.