
Regularization parameter selection for low rank matrix recovery

Posted by: Pan Shang
Publication date: 2019
Paper language: English





Low rank matrix recovery is the focus of many applications, but it is an NP-hard problem. A popular way to deal with this problem is to solve its convex relaxation, the nuclear norm regularized minimization problem (NRM), which includes LASSO as a special case. For LASSO in the vector case, there are regularization parameter selection results, such as screening rules, that improve the efficiency of algorithms. However, no corresponding parameter selection results exist for NRM in the matrix case. In this paper, we build a novel rule for choosing the regularization parameter of NRM with the help of duality theory. The rule shows that the regularization parameter can be chosen easily from feasible points of NRM and its dual problem when the rank of the desired solution is no more than a given constant. In particular, we apply this idea to NRM with least squares and Huber loss functions, and establish easily computed formulas for the regularization parameters. Finally, we report numerical results on several signal shapes, which show that our proposed rule shrinks the interval of candidate regularization parameters efficiently.
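To make the duality idea concrete: for nuclear norm regularized least squares, $\min_X \frac{1}{2}\|\mathcal{A}(X)-b\|_2^2 + \lambda \|X\|_*$, standard duality gives that $X=0$ is already optimal once $\lambda \ge \|\mathcal{A}^*(b)\|_2$, the spectral norm of the adjoint applied to the observations; this is the matrix analogue of the LASSO bound $\lambda_{\max} = \|A^\top b\|_\infty$, so only smaller values of $\lambda$ are informative. The sketch below computes this threshold; the random sensing setup is an illustrative assumption, and this is only the coarse zero-solution bound, not the paper's sharper rank-dependent rule.

```python
import numpy as np

def lambda_max_nuclear(sensing_mats, b):
    """Upper bound on the useful regularization parameter for
        min_X 0.5 * ||A(X) - b||^2 + lam * ||X||_*,
    where A(X)_i = <A_i, X>. For lam >= ||A^*(b)||_2 (spectral norm
    of the adjoint applied to b), X = 0 is optimal."""
    adjoint = np.einsum('i,ijk->jk', b, sensing_mats)  # A^*(b) = sum_i b_i A_i
    return np.linalg.norm(adjoint, ord=2)              # largest singular value

# Illustrative use on a random rank-2 ground truth (assumed setup).
rng = np.random.default_rng(0)
n, m = 20, 120
X_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
b = np.einsum('ijk,jk->i', A, X_true)
print("search lambda in (0, %.3f)" % lambda_max_nuclear(A, b))
```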


Read also

Low-rank matrix recovery is a fundamental problem in signal processing and machine learning. A recent, very popular approach to recovering a low-rank matrix X is to factorize it as a product of two smaller matrices, i.e., X = UV^T, and then optimize over U and V instead of X. Despite the resulting non-convexity, recent results have shown that many factorized objective functions actually have benign global geometry, with no spurious local minima and satisfying the so-called strict saddle property, ensuring convergence to a global minimum for many local-search algorithms. Such results hold whenever the original objective function is restricted strongly convex and smooth. However, most of these results actually consider a modified cost function that includes a balancing regularizer. While useful for deriving theory, this balancing regularizer does not appear to be necessary in practice. In this work, we close this theory-practice gap by proving that the unaltered factorized non-convex problem, without the balancing regularizer, also has similar benign global geometry. Moreover, we also extend our theoretical results to the field of distributed optimization.
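As a minimal sketch of the unaltered factorized problem studied here, the code below runs plain gradient descent on $f(U,V) = \frac{1}{2}\|\mathcal{A}(UV^\top) - b\|_2^2$ for matrix sensing, with no balancing term $\|U^\top U - V^\top V\|_F^2$ added; the step size, iteration count, and initialization scale are assumptions for illustration, not values from the paper.

```python
import numpy as np

def factored_gd(A, b, n1, n2, r, step=0.2, iters=500, seed=0):
    """Gradient descent on the unregularized factorized objective
        f(U, V) = 0.5 * ||A(U V^T) - b||^2,  A(X)_i = <A_i, X>,
    without a balancing regularizer on U and V."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n1, r))
    V = 0.1 * rng.standard_normal((n2, r))
    for _ in range(iters):
        resid = np.einsum('ijk,jk->i', A, U @ V.T) - b  # A(UV^T) - b
        G = np.einsum('i,ijk->jk', resid, A)            # gradient in X: A^*(resid)
        U, V = U - step * G @ V, V - step * G.T @ U     # simultaneous update
    return U, V
```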
This paper develops a new class of nonconvex regularizers for low-rank matrix recovery. Many regularizers are motivated as convex relaxations of the matrix rank function. Our new factor group-sparse regularizers are motivated as a relaxation of the number of nonzero columns in a factorization of the matrix. These nonconvex regularizers are sharper than the nuclear norm; indeed, we show they are related to Schatten-$p$ norms with arbitrarily small $0 < p \leq 1$. Moreover, these factor group-sparse regularizers can be written in a factored form that enables efficient and effective nonconvex optimization; notably, the method does not use singular value decomposition. We provide generalization error bounds for low-rank matrix completion which show improved upper bounds for Schatten-$p$ norm regularization as $p$ decreases. Compared to the max norm and the factored formulation of the nuclear norm, factor group-sparse regularizers are more efficient, accurate, and robust to the initial guess of rank. Experiments show promising performance of factor group-sparse regularization for low-rank matrix completion and robust principal component analysis.
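To see in what sense such a regularizer relaxes "the number of nonzero columns": counting nonzero columns of a factor is a group-$\ell_0$ quantity, and replacing each column's indicator with its $\ell_2$ norm yields a group-sparse ($\ell_{2,1}$) surrogate whose proximal step zeroes out entire columns of $U$ and $V$, shrinking the effective rank of $UV^\top$. The sketch below shows this simplified surrogate for matrix completion; the paper's factor group-sparse regularizers use a different, Schatten-$p$ related weighting, so treat this as an assumed illustrative variant.

```python
import numpy as np

def group_l21(M):
    """ell_{2,1} norm over columns, sum_j ||M[:, j]||_2: a convex
    surrogate for the number of nonzero columns of M."""
    return np.linalg.norm(M, axis=0).sum()

def prox_columns(M, t):
    """Prox of t * group_l21: shrink each column's norm by t and zero
    columns whose norm is below t (this is what reduces the rank)."""
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - t / norms, 0.0)

def objective(U, V, M_obs, mask, lam):
    """Simplified factored objective for matrix completion:
        0.5 * ||P_Omega(U V^T - M)||_F^2 + lam * (||U||_{2,1} + ||V||_{2,1})."""
    fit = 0.5 * np.sum((mask * (U @ V.T - M_obs)) ** 2)
    return fit + lam * (group_l21(U) + group_l21(V))
```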
Pini Zilber, Boaz Nadler (2021)
Low rank matrix recovery problems, including matrix completion and matrix sensing, appear in a broad range of applications. In this work we present GNMR, an extremely simple iterative algorithm for low rank matrix recovery based on a Gauss-Newton linearization. On the theoretical front, we derive recovery guarantees for GNMR in both the matrix sensing and matrix completion settings. A key property of GNMR is that it implicitly keeps the factor matrices approximately balanced throughout its iterations. On the empirical front, we show that for matrix completion with uniform sampling, GNMR performs better than several popular methods, especially when given very few observations close to the information limit.
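To illustrate the Gauss-Newton linearization at the core of GNMR, the sketch below performs one step for matrix completion: it linearizes $(U+\Delta U)(V+\Delta V)^\top \approx UV^\top + \Delta U V^\top + U \Delta V^\top$ and solves the resulting least-squares problem over the observed entries, taking the minimal-norm solution. This is a simplified single step under assumed notation; GNMR's actual variants differ in how the next iterate is formed from the least-squares solution.

```python
import numpy as np

def gauss_newton_step(U, V, M_obs, mask):
    """One Gauss-Newton step for matrix completion (GNMR-style sketch).
    Each observed entry (i, j) gives one linear equation in (dU, dV):
        dU[i, :] . V[j, :] + U[i, :] . dV[j, :] = M[i, j] - (U V^T)[i, j]."""
    (n1, r), n2 = U.shape, V.shape[0]
    rows, cols = np.nonzero(mask)
    J = np.zeros((rows.size, (n1 + n2) * r))
    for k, (i, j) in enumerate(zip(rows, cols)):
        J[k, i * r:(i + 1) * r] = V[j]                    # coefficients of dU[i]
        J[k, n1 * r + j * r:n1 * r + (j + 1) * r] = U[i]  # coefficients of dV[j]
    resid = M_obs[rows, cols] - (U @ V.T)[rows, cols]
    z, *_ = np.linalg.lstsq(J, resid, rcond=None)         # minimal-norm solution
    return U + z[:n1 * r].reshape(n1, r), V + z[n1 * r:].reshape(n2, r)
```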
Richard Y. Zhang (2021)
We prove that it is possible for nonconvex low-rank matrix recovery to contain no spurious local minima when the rank of the unknown ground truth $r^\star < r$ is strictly less than the search rank $r$, and yet for the claim to be false when $r^\star = r$. Under the restricted isometry property (RIP), we prove, for the general overparameterized regime with $r^\star \le r$, that an RIP constant of $\delta < 1/(1+\sqrt{r^\star/r})$ is sufficient for the inexistence of spurious local minima, and that $\delta < 1/(1+1/\sqrt{r-r^\star+1})$ is necessary due to the existence of counterexamples. Without explicit control over $r^\star \le r$, an RIP constant of $\delta < 1/2$ is both necessary and sufficient for the exact recovery of a rank-$r$ ground truth. But if the ground truth is known a priori to have $r^\star = 1$, then the sharp RIP threshold for exact recovery improves to $\delta < 1/(1+1/\sqrt{r})$.
The problem of recovering a low-rank matrix from linear constraints, known as the affine matrix rank minimization problem, has been attracting extensive attention in recent years. In general, the affine matrix rank minimization problem is NP-hard. In our latest work, a non-convex fraction function was studied to approximate the rank function and translate the NP-hard affine matrix rank minimization problem into a transformed affine matrix rank minimization problem, and an iterative singular value thresholding algorithm was designed to solve the regularized transformed problem. However, one drawback of that algorithm is that the parameter $a$, which governs the behaviour of the non-convex fraction function, needs to be determined manually in every simulation, and determining the optimal parameter $a$ is not an easy problem. In this paper, we instead develop an adaptive iterative singular value thresholding algorithm to solve the regularized transformed affine matrix rank minimization problem, which chooses both the regularization parameter $\lambda$ and the parameter $a$ automatically.
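For context, the baseline this abstract builds on is proximal-gradient iteration with singular value thresholding: a gradient step on the data-fit term followed by shrinkage of the singular values. With the nuclear norm the shrinkage is soft-thresholding, as in the sketch below; the paper's fraction-function penalty replaces it with a different thresholding rule governed by the parameter $a$, and the adaptive choice of $\lambda$ and $a$ is the paper's contribution and is not reproduced here.

```python
import numpy as np

def svt(Y, tau):
    """Singular value soft-thresholding: the prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # scale columns of U by shrunk s

def ist_nuclear(A, b, shape, lam, step=1.0, iters=300):
    """Iterative singular value thresholding for
        min_X 0.5 * ||A(X) - b||^2 + lam * ||X||_*,  A(X)_i = <A_i, X>.
    step should be at most 1 / ||A||^2 for convergence."""
    X = np.zeros(shape)
    for _ in range(iters):
        resid = np.einsum('ijk,jk->i', A, X) - b   # A(X) - b
        grad = np.einsum('i,ijk->jk', resid, A)    # A^*(A(X) - b)
        X = svt(X - step * grad, step * lam)       # gradient step + shrinkage
    return X
```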