
Fast and Accurate Low-Rank Tensor Completion Methods Based on QR Decomposition and $L_{2,1}$ Norm Minimization

Published by: HongBing Zhang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





More recently, an Approximate SVD Based on Qatar Riyal (QR) Decomposition (CSVD-QR) method for the matrix completion problem was presented, whose computational complexity is $O(r^2(m+n))$; the low cost is mainly because $r$, the number of largest singular values of the matrix $X$ that are computed, is far smaller than $\min\{m,n\}$. What is particularly interesting is that the $L_{2,1}$ norm defined on this decomposition can replace the nuclear norm as an upper bound: as the middle matrix $D$ of the decomposition approaches a diagonal matrix, the $L_{2,1}$ norm converges to the nuclear norm, and it equals the nuclear norm exactly when $D$ is diagonal. This ingeniously avoids computing the singular values of the matrix. To the best of our knowledge, no published work generalizes this idea to tensor completion problems. Inspired by it, in this paper we propose a class of tensor minimization models based on the $L_{2,1}$ norm and the CSVD-QR method for the tensor completion problem; the resulting model is convex and therefore has a global minimum.
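To make the mechanism concrete, the following is a minimal numpy sketch, our own illustration rather than the paper's implementation, of a QR-based tri-factorization $X \approx LDR$ in the spirit of CSVD-QR, together with the $L_{2,1}$ norm of the middle factor $D$. The alternating-QR loop, the initialization, and all names are assumptions; the printed comparison illustrates that $\|D\|_{2,1}$ upper-bounds the nuclear norm and approaches it as $D$ becomes diagonal.

```python
import numpy as np

def csvd_qr(X, r, iters=50):
    """Approximate tri-factorization X ~ L @ D @ R via alternating QR.

    L (m x r) has orthonormal columns, R (r x n) has orthonormal rows,
    and the r x r middle factor D tends toward a diagonal matrix holding
    the r largest singular values. Cost per sweep is O(r^2 (m + n)).
    """
    m, n = X.shape
    R = np.eye(r, n)                      # crude initial row basis (an assumption)
    for _ in range(iters):
        L, _ = np.linalg.qr(X @ R.T)      # refine column basis, m x r
        Q, T = np.linalg.qr((L.T @ X).T)  # (L^T X)^T = Q T  =>  L^T X = T^T Q^T
        R, D = Q.T, T.T                   # orthonormal rows; lower-triangular D
    return L, D, R

def l21_norm(D):
    """L_{2,1} norm: sum of the Euclidean norms of the columns."""
    return np.linalg.norm(D, axis=0).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 150))  # rank 6
L, D, R = csvd_qr(X, r=6)
print("fit error:", np.linalg.norm(X - L @ D @ R))
print("||D||_{2,1} =", l21_norm(D),
      ">= ||X||_* =", np.linalg.svd(X, compute_uv=False).sum())
```

Because $X = LDR$ with orthonormal $L$ and $R$, the two printed norms nearly coincide once the iteration has driven $D$ close to diagonal, which is exactly the property the model exploits.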




Read also

Yongming Zheng, An-Bao Xu (2020)
In this paper, we consider the tensor completion problem, which has attracted particular attention from researchers in machine learning. Our fast and accurate method is built by extending the $L_{2,1}$-norm minimization and Qatar Riyal decomposition (LNM-QR) method for matrix completion to tensor completion, and it differs from the popular tensor completion methods that use the tensor singular value decomposition (t-SVD). To shorten the computing time, the t-SVD is replaced with a method that computes an approximate t-SVD based on Qatar Riyal decomposition (CTSVD-QR), which iteratively computes the largest $r$ $\left(r>0\right)$ singular values (tubes) and their associated singular vectors (of tubes). In addition, we minimize our model with the tensor $L_{2,1}$-norm instead of the tensor nuclear norm because it is easier to optimize. Then, to improve accuracy, the alternating direction method of multipliers (ADMM) plays a crucial part in our method. Numerical experimental results show that our method is faster than state-of-the-art algorithms and has excellent accuracy.
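A one-line reason the $L_{2,1}$-norm is "easier to optimize": its proximal operator, the step an ADMM solver applies repeatedly, has a cheap closed form, column-wise group soft-thresholding, whereas the nuclear-norm prox requires a full SVD. A minimal sketch under our own naming (not the paper's code):

```python
import numpy as np

def prox_l21(D, lam):
    """Prox of lam * ||D||_{2,1}: shrink each column of D toward zero.

    Costs O(mn) per call, versus the full SVD needed by singular value
    thresholding, the analogous prox for the nuclear norm.
    """
    norms = np.linalg.norm(D, axis=0, keepdims=True)
    return D * np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
```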
Low-rank Tucker and CP tensor decompositions are powerful tools in data analytics. The widely used alternating least squares (ALS) method, which solves a sequence of over-determined least squares subproblems, is costly for large and sparse tensors. We propose a fast and accurate sketched ALS algorithm for Tucker decomposition, which solves a sequence of sketched rank-constrained linear least squares subproblems. Theoretical sketch size upper bounds are provided to achieve $O(\epsilon)$ relative error for each subproblem with two sketching techniques, TensorSketch and leverage score sampling. Experimental results show that this new ALS algorithm, combined with a new initialization scheme based on the randomized range finder, yields up to $22.0\%$ relative decomposition residual improvement compared to the state-of-the-art sketched randomized algorithm for Tucker decomposition of various synthetic and real datasets. This Tucker-ALS algorithm is further used to accelerate CP decomposition, by using randomized Tucker compression followed by CP decomposition of the Tucker core tensor. Experimental results show that this algorithm not only converges faster, but also yields more accurate CP decompositions.
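Each ALS update is an over-determined least squares problem, and sketching solves a much smaller surrogate instead. The toy sketch below uses uniform row sampling as a stand-in for the paper's leverage score sampling or TensorSketch; the function name and sample size are our assumptions.

```python
import numpy as np

def sketched_lstsq(A, B, s, rng):
    """Approximate argmin_X ||A X - B||_F by solving on s sampled rows.

    Uniform sampling is a simplification; leverage-score sampling picks
    rows with probability proportional to their statistical leverage.
    """
    idx = rng.choice(A.shape[0], size=s, replace=False)
    X, *_ = np.linalg.lstsq(A[idx], B[idx], rcond=None)
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((100_000, 50))
B = A @ rng.standard_normal((50, 10)) + 0.01 * rng.standard_normal((100_000, 10))
X_hat = sketched_lstsq(A, B, s=2_000, rng=rng)  # ~50x fewer rows than a full solve
```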
Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n\times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n\times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat{A}$ obtained by a certain embedding of the Hessenberg reduction of $A$, suitable for maintaining its structural properties. We show that $\hat{A}$ can be factored as a product of lower and upper unitary Hessenberg matrices, possibly perturbed in the first $k$ rows, and, moreover, that such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
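For orientation, here is a tiny numeric illustration (ours, not the paper's algorithm) of the unitary-plus-low-rank structure in question: a unitary diagonal $U$ plus a rank-$k$ correction $XY^H$. A dense eigensolver serves as the $O(n^3)$ baseline that the factored Hessenberg representation is designed to beat.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 3
# Unitary diagonal U (eigenvalues on the unit circle) plus rank-k term X @ Y^H
U = np.diag(np.exp(2j * np.pi * rng.random(n)))
X = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Y = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = U + X @ Y.conj().T
# Dense O(n^3) baseline; the structured QR iteration instead works on a
# data-sparse factorization of the Hessenberg form of A.
eigs = np.linalg.eigvals(A)
print(eigs[:3])
```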
Tensor completion refers to the task of estimating the missing data from an incomplete measurement or observation, a core problem frequently arising in big data analysis, computer vision, and network engineering. Due to the multidimensional nature of high-order tensors, matrix approaches, e.g., matrix factorization and direct matricization of tensors, are often not ideal for tensor completion and recovery. In this paper, we introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion. Our model possesses a sparse regularization term to promote a sparse core tensor of the Tucker decomposition, which is beneficial for tensor data compression. Moreover, we enforce low-rank regularization terms on the factor matrices of the Tucker decomposition to induce low-rankness of the tensor at a cheap computational cost. Numerically, we propose a customized ADMM whose subproblems are all easy to solve. Remarkably, our model is able to deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors. A series of computational experiments on real-world data sets, including internet traffic data, color images, and face recognition, demonstrates that our model achieves higher recovery accuracy than many existing state-of-the-art matricization and tensorization approaches.
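The two regularizers translate into two inexpensive proximal maps that a customized ADMM of this kind typically alternates: element-wise soft-thresholding for the sparse core tensor and singular value thresholding for the low-rank factor matrices. A minimal sketch with our own names:

```python
import numpy as np

def soft_threshold(G, tau):
    """Prox of tau*||.||_1: element-wise shrinkage, promotes a sparse core."""
    return np.sign(G) * np.maximum(np.abs(G) - tau, 0.0)

def svt(M, tau):
    """Prox of tau*||.||_*: singular value thresholding for a factor matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```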
An-Bao Xu (2020)
This paper considers the completion problem for a tensor (also referred to as a multidimensional array) from limited sampling. Our greedy method is built by extending the low-rank approximation pursuit (LRAP) method for matrix completion to tensor completion. The method performs a tensor factorization using the tensor singular value decomposition (t-SVD), which extends the standard matrix SVD to tensors and leads to a notion of rank called the tubal-rank. The goal is to recover the tensor data from low-resolution samples as accurately as possible; to complete such a tensor successfully, we assume that the given tensor data has low tubal-rank. For tensors of low tubal-rank, we establish convergence results for our method based on the tensor restricted isometry property (TRIP). Our result under the TRIP condition for tensors is analogous to low-rank matrix completion under the RIP condition: TRIP uses the t-SVD for low tubal-rank tensors, while RIP uses the SVD for matrices. We show that a subgaussian measurement map satisfies the TRIP condition with high probability and give an almost optimal bound on the number of required measurements. We compare the numerical performance of the proposed algorithm with state-of-the-art approaches on video recovery and color image recovery.
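For readers new to the t-SVD, it reduces to ordinary matrix SVDs of the frontal slices after an FFT along the third mode, and the tubal-rank counts the nonzero singular tubes. The sketch below follows that standard construction; the function names and the demo are ours.

```python
import numpy as np

def tubal_rank(T, tol=1e-10):
    """Tubal-rank via the t-SVD: FFT along mode 3, then slice-wise SVDs.

    A singular tube is nonzero iff its singular value is nonzero in some
    Fourier slice, so the tubal-rank is the maximum slice rank.
    """
    Tf = np.fft.fft(T, axis=2)
    ranks = [int((np.linalg.svd(Tf[:, :, i], compute_uv=False) > tol).sum())
             for i in range(T.shape[2])]
    return max(ranks)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 4, 8))
B = rng.standard_normal((4, 40, 8))
# The t-product is slice-wise matrix multiplication in the Fourier domain,
# so this T has tubal-rank at most 4.
Tf = np.einsum('irk,rjk->ijk', np.fft.fft(A, axis=2), np.fft.fft(B, axis=2))
T = np.fft.ifft(Tf, axis=2).real
print(tubal_rank(T))  # 4
```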

