
The Global Geometry of Centralized and Distributed Low-rank Matrix Recovery without Regularization

Published by: Shuang Li
Publication date: 2020
Language: English





Low-rank matrix recovery is a fundamental problem in signal processing and machine learning. A popular recent approach to recovering a low-rank matrix X is to factorize it as a product of two smaller matrices, i.e., X = UV^T, and then optimize over U and V instead of X. Despite the resulting non-convexity, recent results have shown that many factorized objective functions actually have benign global geometry, with no spurious local minima and satisfying the so-called strict saddle property, ensuring convergence to a global minimum for many local-search algorithms. Such results hold whenever the original objective function is restricted strongly convex and smooth. However, most of these results actually consider a modified cost function that includes a balancing regularizer. While useful for deriving theory, this balancing regularizer does not appear to be necessary in practice. In this work, we close this theory-practice gap by proving that the unaltered factorized non-convex problem, without the balancing regularizer, also has similar benign global geometry. Moreover, we extend our theoretical results to the setting of distributed optimization.
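As a concrete illustration of the unregularized factorized approach, here is a minimal Python sketch of plain gradient descent on a matrix-sensing loss f(U, V) = 0.5 * ||A(UV^T) - y||^2, with no balancing regularizer in the update. The Gaussian measurement operator, dimensions, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of plain factorized gradient descent for matrix sensing,
# WITHOUT the balancing regularizer discussed above. The Gaussian
# measurement operator A, the dimensions, and the step size are
# illustrative assumptions.

rng = np.random.default_rng(0)
n1, n2, r, m = 30, 20, 2, 600

# Ground-truth low-rank matrix and its linear measurements y = A(X_true)
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A_ops = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
y = np.einsum('kij,ij->k', A_ops, X_true)

def grads(U, V):
    """Gradients of f(U, V) = 0.5 * ||A(U V^T) - y||^2 with respect to U and V."""
    resid = np.einsum('kij,ij->k', A_ops, U @ V.T) - y
    G = np.einsum('k,kij->ij', resid, A_ops)  # adjoint A^*(residual)
    return G @ V, G.T @ U

U = 0.1 * rng.standard_normal((n1, r))
V = 0.1 * rng.standard_normal((n2, r))
step = 0.01
for _ in range(3000):
    gU, gV = grads(U, V)
    U, V = U - step * gU, V - step * gV  # no balancing term in the update

print("relative error:", np.linalg.norm(U @ V.T - X_true) / np.linalg.norm(X_true))
```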




Read also

Pan Shang, Lingchen Kong (2019)
Low rank matrix recovery is the focus of many applications, but it is an NP-hard problem. A popular way to deal with this problem is to solve its convex relaxation, the nuclear norm regularized minimization problem (NRM), which includes LASSO as a special case. There are some regularization parameter selection results for LASSO in the vector case, such as screening rules, which improve the efficiency of the algorithms. However, there are no corresponding parameter selection results for NRM in the matrix case. In this paper, we build up a novel rule to choose the regularization parameter for NRM with the help of duality theory. This rule shows that the regularization parameter can be easily chosen using feasible points of NRM and its dual problem, when the rank of the desired solution is no more than a given constant. In particular, we apply this idea to NRM with least squares and Huber functions, and establish easily calculated formulas for the regularization parameters. Finally, we report numerical results on some signal shapes, which show that our proposed rule shrinks the interval of the regularization parameter efficiently.
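For context, standard solvers for NRM rely on the proximal operator of the nuclear norm, which soft-thresholds singular values. The sketch below shows that step under the assumption of a generic smooth data-fit term; the paper's duality-based parameter rule itself is not reproduced, so `lam` is a placeholder the rule would supply.

```python
import numpy as np

# Minimal sketch of one proximal gradient step for the NRM objective
#   min_X 0.5 * ||A(X) - b||^2 + lam * ||X||_*.
# Soft-thresholding of singular values is the standard prox of the nuclear
# norm; `lam` is a placeholder that the paper's duality-based rule would
# supply, and `grad` stands for the gradient of the smooth data-fit term.

def svt(Z, tau):
    """Singular value thresholding: prox of tau * ||.||_* evaluated at Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_grad_step(X, grad, step, lam):
    """Gradient step on the data-fit term, then the nuclear-norm prox."""
    return svt(X - step * grad, step * lam)
```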
We study the convergence of a variant of distributed gradient descent (DGD) on a distributed low-rank matrix approximation problem wherein some optimization variables are used for consensus (as in classical DGD) and others appear only locally at a single node in the network. We term the resulting algorithm DGD+LOCAL. Using algorithmic connections to gradient descent and geometric connections to the well-behaved landscape of the centralized low-rank matrix approximation problem, we identify sufficient conditions under which DGD+LOCAL is guaranteed to converge with exact consensus to a global minimizer of the original centralized problem. For the distributed low-rank matrix approximation problem, these guarantees are stronger, in terms of consensus and optimality, than those that appear in the literature for classical DGD and more general problems.
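The following minimal sketch illustrates the structure of one DGD+LOCAL iteration as the abstract describes it: a shared block updated with neighbor averaging plus a gradient step, and a purely local block updated by its local gradient alone. The quadratic local loss, the mixing matrix W, and all variable names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one DGD+LOCAL iteration for distributed low-rank matrix
# approximation. Each node j holds a consensus block U[j], mixed through a
# doubly stochastic matrix W as in classical DGD, and a purely local block
# V[j], updated by its local gradient alone. The quadratic local loss
# 0.5 * ||U[j] V[j]^T - Y[j]||_F^2 and all names are illustrative assumptions.

def dgd_local_step(U, V, Y, W, step):
    """One DGD+LOCAL step; U, V, Y are lists indexed by the J network nodes."""
    J = len(U)
    U_new, V_new = [], []
    for j in range(J):
        R = U[j] @ V[j].T - Y[j]                      # local residual
        mix = sum(W[j, k] * U[k] for k in range(J))   # consensus averaging
        U_new.append(mix - step * (R @ V[j]))         # DGD update (shared block)
        V_new.append(V[j] - step * (R.T @ U[j]))      # pure local update
    return U_new, V_new
```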
Richard Y. Zhang (2021)
We prove that it is possible for nonconvex low-rank matrix recovery to contain no spurious local minima when the rank of the unknown ground truth $r^{\star} < r$ is strictly less than the search rank $r$, and yet for the claim to be false when $r^{\star} = r$. Under the restricted isometry property (RIP), we prove, for the general overparameterized regime with $r^{\star} \le r$, that an RIP constant of $\delta < 1/(1+\sqrt{r^{\star}/r})$ is sufficient for the inexistence of spurious local minima, and that $\delta < 1/(1+1/\sqrt{r-r^{\star}+1})$ is necessary due to the existence of counterexamples. Without explicit control over $r^{\star} \le r$, an RIP constant of $\delta < 1/2$ is both necessary and sufficient for the exact recovery of a rank-$r$ ground truth. But if the ground truth is known a priori to have $r^{\star} = 1$, then the sharp RIP threshold for exact recovery is improved to $\delta < 1/(1+1/\sqrt{r})$.
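For readability, the thresholds claimed above can be collected in display form (a direct transcription of the abstract, not new content):

```latex
% Display-form transcription of the RIP thresholds stated in the abstract,
% for ground-truth rank r* <= search rank r.
\begin{align*}
\delta &< \frac{1}{1 + \sqrt{r^{\star}/r}}          && \text{sufficient for no spurious local minima,}\\
\delta &< \frac{1}{1 + 1/\sqrt{r - r^{\star} + 1}}  && \text{necessary (counterexamples exist),}\\
\delta &< \tfrac{1}{2}                              && \text{sharp when only } r^{\star} \le r \text{ is known,}\\
\delta &< \frac{1}{1 + 1/\sqrt{r}}                  && \text{sharp when } r^{\star} = 1 \text{ is known a priori.}
\end{align*}
```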
This paper develops a new class of nonconvex regularizers for low-rank matrix recovery. Many regularizers are motivated as convex relaxations of the matrix rank function. Our new factor group-sparse regularizers are motivated as a relaxation of the number of nonzero columns in a factorization of the matrix. These nonconvex regularizers are sharper than the nuclear norm; indeed, we show they are related to Schatten-$p$ norms with arbitrarily small $0 < p \leq 1$. Moreover, these factor group-sparse regularizers can be written in a factored form that enables efficient and effective nonconvex optimization; notably, the method does not use the singular value decomposition. We provide generalization error bounds for low-rank matrix completion which show improved upper bounds for Schatten-$p$ norm regularization as $p$ decreases. Compared to the max norm and the factored formulation of the nuclear norm, factor group-sparse regularizers are more efficient, accurate, and robust to the initial guess of the rank. Experiments show promising performance of factor group-sparse regularization for low-rank matrix completion and robust principal component analysis.
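As a rough illustration of the idea, one simple factored group-sparse penalty sums the column norms of the two factors, so that driving whole columns to zero lowers the rank of X = UV^T. This particular form is an assumption chosen for illustration, not necessarily the paper's exact regularizer.

```python
import numpy as np

# Rough illustration of a factor group-sparse penalty: the sum of the
# column norms of each factor, which favors entire zero columns and hence
# a low rank for X = U V^T. This simple form is an illustrative assumption,
# not necessarily the paper's exact regularizer.

def factor_group_sparse_penalty(U, V):
    """Sum of the l2 norms of the columns of U and of V."""
    return np.linalg.norm(U, axis=0).sum() + np.linalg.norm(V, axis=0).sum()
```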
The problem of recovering a low-rank matrix from linear constraints, known as the affine matrix rank minimization problem, has been attracting extensive attention in recent years. In general, the affine matrix rank minimization problem is NP-hard. In our latest work, a non-convex fraction function was studied to approximate the rank function in the affine matrix rank minimization problem and to translate the NP-hard problem into a transformed affine matrix rank minimization problem. A scheme of iterative singular value thresholding algorithm was generated to solve the regularized transformed problem. However, one drawback of our iterative singular value thresholding algorithm is that the parameter $a$, which influences the behaviour of the non-convex fraction function in the regularized transformed problem, needs to be determined manually in every simulation. In fact, determining the optimal parameter $a$ is not an easy problem. Instead, in this paper, we generate an adaptive iterative singular value thresholding algorithm to solve the regularized transformed affine matrix rank minimization problem. When doing so, our new algorithm is intelligent in both the choice of the regularization parameter $\lambda$ and the parameter $a$.
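The generic iterative singular value thresholding loop referenced above has the following shape; the threshold rule and the adaptive updates of `lam` and `a` are placeholders, since the paper's adaptive scheme is not reproduced here.

```python
import numpy as np

# Minimal sketch of the generic iterative singular value thresholding loop
# the abstract refers to. The threshold rule is a placeholder: plugging in
# lambda s: np.maximum(s - lam, 0.0) recovers plain SVT, while the paper's
# fraction-function threshold, with its adaptive choice of lam and a, would
# replace it.

def isvt(X0, grad, step, threshold, n_iters=100):
    """Gradient step on the data-fit term, then shrink the singular values."""
    X = X0
    for _ in range(n_iters):
        Z = X - step * grad(X)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * threshold(s)) @ Vt
    return X
```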