
Entrywise Convergence of Riemannian Gradient Method for Low Rank Tensor Completion via Tucker Decomposition

Posted by: Ke Wei
Publication date: 2021
Paper language: English





This paper is concerned with the Tucker decomposition based low rank tensor completion problem, which is about reconstructing a tensor $\mathcal{T}\in\mathbb{R}^{n\times n\times n}$ of small multilinear rank from partially observed entries. We study the convergence of the Riemannian gradient method for this problem. Guaranteed linear convergence in terms of the infinity norm has been established for this algorithm, provided the number of observed entries is essentially of the order $O(n^{3/2})$. The convergence analysis relies on the leave-one-out technique and the subspace projection structure within the algorithm. To the best of our knowledge, this is the first work to establish the entrywise convergence of a non-convex algorithm for low rank tensor completion via Tucker decomposition.
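One iteration of such a method takes a gradient step on the observed-entry least-squares loss and maps the iterate back to the set of low multilinear rank tensors. The sketch below is ours and deliberately simplified: it replaces the tangent-space projection structure analyzed in the paper with a plain truncated-HOSVD retraction, and the step size and iteration count are arbitrary placeholders.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding of a 3-way tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_retract(X, r):
    """Map X back to multilinear rank (r, r, r) via truncated HOSVD."""
    U = [np.linalg.svd(unfold(X, k), full_matrices=False)[0][:, :r]
         for k in range(3)]
    G = np.einsum('ijk,ia,jb,kc->abc', X, U[0], U[1], U[2])   # core tensor
    return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

def tucker_completion(T_obs, mask, r, step=1.0, iters=300):
    """T_obs: observed entries (zeros elsewhere); mask: boolean sampling set."""
    X = hosvd_retract(T_obs, r)            # simple spectral-style initialization
    for _ in range(iters):
        G = mask * (X - T_obs)             # gradient of 0.5*||P_Omega(X - T)||_F^2
        X = hosvd_retract(X - step * G, r)
    return X
```

A quick sanity check is to draw a random multilinear rank-$r$ tensor, reveal a random fraction of its entries in `mask`, and verify that the unobserved entries are recovered.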




Read also

Tensor completion refers to the task of estimating the missing data from an incomplete measurement or observation, a core problem that frequently arises in big data analysis, computer vision, and network engineering. Due to the multidimensional nature of high-order tensors, matrix approaches, e.g., matrix factorization and direct matricization of tensors, are often not ideal for tensor completion and recovery. In this paper, we introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion. Our model possesses a sparse regularization term to promote a sparse core tensor of the Tucker decomposition, which is beneficial for tensor data compression. Moreover, we enforce low-rank regularization terms on the factor matrices of the Tucker decomposition to induce the low-rankness of the tensor at a cheap computational cost. Numerically, we propose a customized ADMM with easy subproblems to solve the underlying model. Remarkably, our model is able to deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors. A series of computational experiments on real-world data sets, including internet traffic data, color images, and face recognition, demonstrates that our model achieves higher recovery accuracy than many existing state-of-the-art matricization and tensorization approaches.
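For intuition, an ℓ1 penalty on the core and low-rank penalties on the factors typically lead to ADMM subproblems that, in the standard nuclear-norm setting, reduce to entrywise soft-thresholding and singular value thresholding. A hedged sketch of these two proximal maps (function names are ours; the paper's actual subproblems may differ):

```python
import numpy as np

def soft_threshold(G, lam):
    """Prox of lam*||G||_1: entrywise shrinkage, promoting a sparse core tensor."""
    return np.sign(G) * np.maximum(np.abs(G) - lam, 0.0)

def singular_value_threshold(A, tau):
    """Prox of tau*||A||_*: shrinks singular values, inducing a low-rank factor."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```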
Bin Gao, P.-A. Absil (2021)
The low-rank matrix completion problem can be solved by Riemannian optimization on a fixed-rank manifold. However, a drawback of the known approaches is that the rank parameter has to be fixed a priori. In this paper, we consider the optimization problem on the set of bounded-rank matrices. We propose a Riemannian rank-adaptive method, which consists of fixed-rank optimization, a rank increase step, and a rank reduction step. We explore its performance applied to the low-rank matrix completion problem. Numerical experiments on synthetic and real-world datasets illustrate that the proposed rank-adaptive method compares favorably with state-of-the-art algorithms. In addition, they show that each aspect of this rank-adaptive framework can be incorporated separately into existing algorithms to improve performance.
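To illustrate the interplay of the three ingredients, here is a hedged toy version for matrix completion: fixed-rank projected-gradient steps, a rank reduction step triggered by a small trailing singular value, and a rank increase step triggered by a large residual. All thresholds and update rules are our placeholders, not the method in the paper.

```python
import numpy as np

def rank_adaptive_completion(M_obs, mask, r0=1, r_max=10,
                             outer=20, inner=50, step=1.0,
                             drop_tol=1e-2, grow_tol=1e-3):
    """Toy loop: fixed-rank projected-gradient steps, then adapt the rank."""
    r, X = r0, np.zeros_like(M_obs)
    for _ in range(outer):
        for _ in range(inner):                       # fixed-rank optimization
            G = mask * (X - M_obs)                   # gradient on observed entries
            U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
            X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # retract to rank r
        if r > 1 and s[r - 1] < drop_tol * s[0]:
            r -= 1                                   # rank reduction step
        elif r < r_max and np.linalg.norm(mask * (X - M_obs)) > grow_tol:
            r += 1                                   # rank increase step
    return X
```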
B. Mishra, R. Sepulchre (2013)
We exploit the versatile framework of Riemannian optimization on quotient manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank matrix completion. The underlying search space of fixed-rank matrices is endowed with a novel Riemannian metric that is tailored to the least-squares cost. Numerical comparisons suggest that R3MC robustly outperforms state-of-the-art algorithms across different problem instances, especially those that combine scarcely sampled and ill-conditioned data.
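For orientation only, here is the flat (Euclidean) nonlinear conjugate-gradient template on the factored least-squares cost. It omits R3MC's actual contribution, the tailored Riemannian metric on the quotient manifold, and uses a Fletcher-Reeves rule with a fixed step size in place of a line search; all of these choices are our assumptions.

```python
import numpy as np

def cg_factored_completion(M_obs, mask, r, iters=200, step=1e-2):
    """Nonlinear CG on f(U, V) = 0.5*||mask * (U V^T - M_obs)||_F^2."""
    m, n = M_obs.shape
    U, V = np.random.randn(m, r), np.random.randn(n, r)
    gU = gV = dU = dV = None
    for k in range(iters):
        R = mask * (U @ V.T - M_obs)
        gU_new, gV_new = R @ V, R.T @ U              # Euclidean gradient
        if k == 0:
            dU, dV = -gU_new, -gV_new
        else:                                        # Fletcher-Reeves beta
            beta = (np.sum(gU_new**2) + np.sum(gV_new**2)) / \
                   (np.sum(gU**2) + np.sum(gV**2))
            dU, dV = -gU_new + beta * dU, -gV_new + beta * dV
        gU, gV = gU_new, gV_new
        U, V = U + step * dU, V + step * dV          # fixed step, no line search
    return U @ V.T
```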
An-Bao Xu (2020)
This paper considers the completion problem for a tensor (also referred to as a multidimensional array) from limited sampling. Our greedy method is based on extending the low-rank approximation pursuit (LRAP) method for matrix completion to tensor completion. The method performs a tensor factorization using the tensor singular value decomposition (t-SVD), which extends the standard matrix SVD to tensors and leads to a notion of rank called the tubal-rank. The goal is to recreate the tensor data from low-resolution samples as faithfully as possible; to do so successfully, we assume that the given tensor data has low tubal-rank. For tensors of low tubal-rank, we establish convergence results for our method based on the tensor restricted isometry property (TRIP). Our result under the TRIP condition for tensors is analogous to low-rank matrix completion under the RIP condition: TRIP uses the t-SVD for low tubal-rank tensors, while RIP uses the SVD for matrices. We show that a subgaussian measurement map satisfies the TRIP condition with high probability and give an almost optimal bound on the number of required measurements. We compare the numerical performance of the proposed algorithm with state-of-the-art approaches on video recovery and color image recovery.
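The t-SVD that the method is built on is concrete: take an FFT along the third mode, compute a matrix SVD of each frontal slice in the transform domain, and invert the FFT. A minimal sketch under those standard definitions (ours, not the paper's code):

```python
import numpy as np

def t_svd(T):
    """t-SVD of an n1 x n2 x n3 tensor via FFT along the third mode."""
    n1, n2, n3 = T.shape
    k = min(n1, n2)
    That = np.fft.fft(T, axis=2)
    U = np.zeros((n1, k, n3), dtype=complex)
    S = np.zeros((k, k, n3), dtype=complex)
    V = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):                    # matrix SVD of each frontal slice
        u, s, vh = np.linalg.svd(That[:, :, i], full_matrices=False)
        U[:, :, i], S[:, :, i], V[:, :, i] = u, np.diag(s), vh.conj().T
    # The tubal-rank is the number of nonzero tubes of S; invert the FFT
    # to return to the original domain (real for real input, up to rounding).
    return (np.fft.ifft(U, axis=2).real,
            np.fft.ifft(S, axis=2).real,
            np.fft.ifft(V, axis=2).real)
```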
Tian Ye, Simon S. Du (2021)
We study the asymmetric low-rank factorization problem: $\min_{\mathbf{U} \in \mathbb{R}^{m \times d},\, \mathbf{V} \in \mathbb{R}^{n \times d}} \frac{1}{2}\|\mathbf{U}\mathbf{V}^\top - \mathbf{\Sigma}\|_F^2$, where $\mathbf{\Sigma}$ is a given matrix of size $m \times n$ and rank $d$. This is a canonical problem that admits two difficulties in optimization: 1) non-convexity and 2) non-smoothness (due to the unbalancedness of $\mathbf{U}$ and $\mathbf{V}$). It is also a prototype for more complex problems such as asymmetric matrix sensing and matrix completion. Despite being non-convex and non-smooth, it has been observed empirically that the randomly initialized gradient descent algorithm can solve this problem in polynomial time. Existing theories to explain this phenomenon all require artificial modifications of the algorithm, such as adding noise in each iteration or adding a balancing regularizer to balance $\mathbf{U}$ and $\mathbf{V}$. This paper presents the first proof that randomly initialized gradient descent converges to a global minimum of the asymmetric low-rank factorization problem at a polynomial rate. For the proof, we develop 1) a new symmetrization technique to capture the magnitudes of the symmetry and asymmetry, and 2) a quantitative perturbation analysis to approximate matrix derivatives. We believe both are useful for other related non-convex problems.
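The analyzed algorithm is plain gradient descent from a small random initialization on $f(\mathbf{U}, \mathbf{V})$. A minimal sketch; the step size, initialization scale, and iteration count below are arbitrary choices of ours:

```python
import numpy as np

def asymmetric_gd(Sigma, d, step=0.01, iters=2000, scale=0.1):
    """Randomly initialized GD on f(U, V) = 0.5 * ||U V^T - Sigma||_F^2."""
    m, n = Sigma.shape
    U = scale * np.random.randn(m, d)
    V = scale * np.random.randn(n, d)
    for _ in range(iters):
        R = U @ V.T - Sigma                              # residual
        U, V = U - step * (R @ V), V - step * (R.T @ U)  # simultaneous update
    return U, V
```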