
Fast and Accurate Randomized Algorithms for Low-rank Tensor Decompositions

Added by Linjian Ma. Published in 2021. Language: English.





Low-rank Tucker and CP tensor decompositions are powerful tools in data analytics. The widely used alternating least squares (ALS) method, which solves a sequence of over-determined least squares subproblems, is costly for large and sparse tensors. We propose a fast and accurate sketched ALS algorithm for Tucker decomposition, which solves a sequence of sketched rank-constrained linear least squares subproblems. Theoretical upper bounds on the sketch size are provided to achieve $O(\epsilon)$ relative error for each subproblem with two sketching techniques, TensorSketch and leverage score sampling. Experimental results show that this new ALS algorithm, combined with a new initialization scheme based on the randomized range finder, yields up to $22.0\%$ relative decomposition residual improvement compared to the state-of-the-art sketched randomized algorithm for Tucker decomposition on various synthetic and real datasets. This Tucker-ALS algorithm is further used to accelerate CP decomposition via randomized Tucker compression followed by CP decomposition of the Tucker core tensor. Experimental results show that this algorithm not only converges faster, but also yields more accurate CP decompositions.
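As a rough illustration of the sketching idea mentioned above (not the authors' implementation), the Python sketch below applies leverage score sampling to a single over-determined least squares subproblem of the kind that arises in each ALS step. The function name and toy data are illustrative assumptions, and the exact leverage scores are obtained here from a QR factorization purely for clarity; the paper exploits the Kronecker structure of the ALS design matrices to sample far more cheaply.

import numpy as np

def leverage_score_lstsq(A, b, sketch_size, seed=0):
    # Approximately solve min_x ||A x - b||_2 by sampling rows of A with
    # probabilities proportional to their leverage scores.
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    # Exact leverage scores: squared row norms of an orthonormal basis of range(A).
    Q, _ = np.linalg.qr(A, mode="reduced")
    p = np.sum(Q**2, axis=1)
    p /= p.sum()
    idx = rng.choice(m, size=sketch_size, replace=True, p=p)
    w = 1.0 / np.sqrt(sketch_size * p[idx])  # rescaling keeps the sketched problem unbiased
    x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return x

# Toy over-determined problem standing in for one ALS subproblem.
rng = np.random.default_rng(1)
A = rng.standard_normal((20000, 64))
b = A @ rng.standard_normal(64) + 0.01 * rng.standard_normal(20000)
x_hat = leverage_score_lstsq(A, b, sketch_size=640)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_hat - x_ref) / np.linalg.norm(x_ref))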



Related research

More recently, an approximate SVD based on QR decomposition (CSVD-QR) method for the matrix completion problem has been presented, whose computational complexity is $O(r^2(m+n))$, mainly because $r$ is far smaller than $\min\{m,n\}$, where $r$ denotes the number of dominant singular values of the matrix $X$. Particularly interesting is that the nuclear norm can be replaced by the $L_{2,1}$ norm defined on this decomposition, which serves as an upper bound of the nuclear norm: as the intermediate matrix $D$ in the decomposition approaches a diagonal matrix, the $L_{2,1}$ norm converges to the nuclear norm, and it equals the nuclear norm exactly when $D$ is diagonal, which ingeniously avoids computing the singular values of the matrix. To the best of our knowledge, there is no literature that generalizes and applies this idea to tensor completion problems. Inspired by this, in this paper we propose a class of tensor minimization models based on the $L_{2,1}$ norm and the CSVD-QR method for the tensor completion problem, which is convex and therefore has a global minimum solution.
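For reference (this display is an added clarification, not part of the original abstract), the $L_{2,1}$ norm of a matrix is the sum of the Euclidean norms of its columns, and for a diagonal $D$ it coincides with the nuclear norm:

$$\|D\|_{2,1} = \sum_{j}\Big(\sum_{i} D_{ij}^{2}\Big)^{1/2}, \qquad D = \operatorname{diag}(d_1,\ldots,d_r) \;\Rightarrow\; \|D\|_{2,1} = \sum_{j}|d_j| = \|D\|_{*}.$$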
Chao Zeng, 2021
The orthogonal decomposition factorizes a tensor into a sum of an orthogonal list of rank-one tensors. We present several properties of the orthogonal rank. We find that a subtensor may have a larger orthogonal rank than the whole tensor and prove the lower semicontinuity of the orthogonal rank. The lower semicontinuity guarantees the existence of a low orthogonal rank approximation. To fit the orthogonal decomposition, we propose an algorithm based on the augmented Lagrangian method and guarantee the orthogonality by a novel orthogonalization procedure. Numerical experiments show that the proposed method has a great advantage over the existing methods for strongly orthogonal decompositions in terms of the approximation error.
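As a hedged illustration of how an orthogonality constraint can be enforced inside such an iteration (a generic projection step, not necessarily the paper's novel orthogonalization procedure), a factor matrix can be replaced by the nearest matrix with orthonormal columns, obtained from its polar decomposition:

import numpy as np

def nearest_orthonormal(A):
    # Polar/Procrustes projection: the closest matrix with orthonormal columns
    # to A in Frobenius norm is U V^T, where A = U S V^T is the thin SVD.
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

A = np.random.default_rng(0).standard_normal((50, 5))
Q = nearest_orthonormal(A)
print(np.allclose(Q.T @ Q, np.eye(5)))  # True: the columns are orthonormal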
Talal Ahmed, Haroon Raja, 2019
This paper studies a tensor-structured linear regression model with a scalar response variable and tensor-structured predictors, such that the regression parameters form a tensor of order $d$ (i.e., a $d$-fold multiway array) in $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$. It focuses on the task of estimating the regression tensor from $m$ realizations of the response variable and the predictors, where $m \ll n = \prod_{i} n_i$. Despite the seeming ill-posedness of this problem, it can still be solved if the parameter tensor belongs to the space of sparse, low Tucker-rank tensors. Accordingly, the estimation procedure is posed as a non-convex optimization program over the space of sparse, low Tucker-rank tensors, and a tensor variant of projected gradient descent is proposed to solve the resulting non-convex problem. In addition, mathematical guarantees are provided that establish that the proposed method converges linearly to an appropriate solution under a certain set of conditions. Further, an upper bound on the sample complexity of tensor parameter estimation for the model under consideration is characterized for the special case when the individual (scalar) predictors independently draw values from a sub-Gaussian distribution. The sample complexity bound is shown to have a polylogarithmic dependence on $\bar{n} = \max\big\{n_i : i \in \{1,2,\ldots,d\}\big\}$ and, order-wise, it matches the bound one can obtain from a heuristic parameter counting argument. Finally, numerical experiments demonstrate the efficacy of the proposed tensor model and estimation method on a synthetic dataset and a collection of neuroimaging datasets pertaining to attention deficit hyperactivity disorder. Specifically, the proposed method exhibits better sample complexities on both synthetic and real datasets, demonstrating the usefulness of the model and the method in settings where $n \gg m$.
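A minimal sketch of the projected-gradient idea is given below, assuming a 3-way parameter tensor, a truncated HOSVD as an approximate projection onto low Tucker rank, and entrywise hard thresholding for sparsity. The function names, the fixed step size, the iteration count, and the order in which the two projections are applied are illustrative choices, not the paper's exact procedure.

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # Mode product of tensor T with matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_project(T, ranks):
    # Approximate projection onto low multilinear rank via truncated HOSVD.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_mult(core, U.T, m)
    out = core
    for m, U in enumerate(factors):
        out = mode_mult(out, U, m)
    return out

def hard_threshold(T, s):
    # Keep the s largest-magnitude entries of T, zero out the rest.
    flat = T.ravel().copy()
    cutoff = np.sort(np.abs(flat))[-s]
    flat[np.abs(flat) < cutoff] = 0.0
    return flat.reshape(T.shape)

def projected_gradient(X, y, shape, ranks, sparsity, step=0.1, iters=300):
    # X: (m, n) matrix whose rows are vectorized tensor predictors; y: (m,) responses.
    B = np.zeros(shape)
    for _ in range(iters):
        grad = X.T @ (X @ B.ravel() - y) / len(y)
        B = B - step * grad.reshape(shape)
        B = hard_threshold(tucker_project(B, ranks), sparsity)
    return B

# Illustrative toy problem; the step size and iteration count are not tuned.
shape, ranks = (8, 8, 8), (2, 2, 2)
rng = np.random.default_rng(0)
B_true = hard_threshold(tucker_project(rng.standard_normal(shape), ranks), 64)
X = rng.standard_normal((200, 512))
B_hat = projected_gradient(X, X @ B_true.ravel(), shape, ranks, sparsity=64)
print(np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))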
Quaternion matrix approximation problems construct the approximated matrix from the quaternion singular value decomposition (SVD) by selecting some SVD triplets of quaternion matrices. In applications such as color image processing and recognition, only a small number of dominant SVD triplets are selected, while in applications such as the quaternion total least squares problem, small SVD triplets (small singular values and the associated singular vectors) and the numerical rank with respect to a small threshold are required. In this paper, we propose a randomized quaternion SVD (randsvdQ) method to compute a small number of SVD triplets of a large-scale quaternion matrix. Theoretical results are given on the approximation errors, and the corresponding algorithm adapts to the low-rank matrix approximation problem. When the restricted rank increases, information about small SVD triplets might be lost. A blocked quaternion randomized SVD algorithm is therefore developed for cases where the numerical rank and information about small singular values are required. For color face recognition problems, numerical results show good performance of the developed quaternion randomized SVD method for low-rank approximation of a large-scale quaternion matrix. The blocked randomized SVD algorithm is also shown to be more robust than the unblocked method through several experiments, and approximation errors from the blocked scheme are very close to the optimal error obtained by truncating a full SVD.
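The sketch below shows the standard randomized SVD template (a randomized range finder followed by an SVD of the small projected matrix) for a real matrix. It is a real-valued stand-in only, since the quaternion arithmetic and the blocked variant described above require dedicated quaternion linear-algebra routines; the function name and toy data are illustrative.

import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    # Randomized range finder followed by an SVD of the projected matrix.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega, mode="reduced")  # orthonormal basis for range(A @ Omega)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Toy low-rank matrix and its rank-20 randomized approximation.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 80)) @ rng.standard_normal((80, 400))
U, s, Vt = randomized_svd(A, rank=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))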
Low-rank tensor approximation is a fundamental tool in modern machine learning and data science. In this paper, we study the characterization, perturbation analysis, and an efficient sampling strategy for two primary tensor CUR approximations, namely Chidori and Fiber CUR. We characterize exact tensor CUR decompositions for tensors of low multilinear rank. We also present theoretical error bounds for the tensor CUR approximations when (adversarial or Gaussian) noise appears. Moreover, we show that low-cost uniform sampling is sufficient for tensor CUR approximations if the tensor has an incoherent structure. Empirical performance evaluations, with both synthetic and real-world datasets, establish the speed advantage of the tensor CUR approximations over other state-of-the-art low multilinear rank tensor approximations.
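As a rough illustration of the uniform-sampling idea in the matrix setting (a simplified stand-in, not the Chidori or Fiber tensor CUR construction itself), a cross/CUR approximation can be formed from uniformly sampled columns and rows; it is exact for an incoherent low-rank matrix once enough rows and columns are drawn. The function name and toy data below are illustrative.

import numpy as np

def matrix_cur(A, n_cols, n_rows, seed=0):
    # CUR / cross approximation from uniformly sampled columns and rows:
    # A ~= C @ pinv(W) @ R, where W = A[I, J]; exact when rank(W) equals rank(A).
    rng = np.random.default_rng(seed)
    J = rng.choice(A.shape[1], size=n_cols, replace=False)
    I = rng.choice(A.shape[0], size=n_rows, replace=False)
    C, R, W = A[:, J], A[I, :], A[np.ix_(I, J)]
    return C @ np.linalg.pinv(W) @ R

rng = np.random.default_rng(3)
A = rng.standard_normal((300, 15)) @ rng.standard_normal((15, 200))  # rank-15, incoherent
A_cur = matrix_cur(A, n_cols=30, n_rows=30)
print(np.linalg.norm(A - A_cur) / np.linalg.norm(A))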
