
Normal Forms for Tensor Rank Decomposition

Published by: Simon Telen
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We propose a new algorithm for computing the tensor rank decomposition or canonical polyadic decomposition of higher-order tensors subject to a rank and genericity constraint. Reformulating this as a system of polynomial equations allows us to leverage recent numerical linear algebra tools from computational algebraic geometry. We describe the complexity of our algorithm in terms of the multigraded regularity of a multihomogeneous ideal. We prove effective bounds for many formats and ranks and conjecture a general formula. These bounds are of independent interest for overconstrained polynomial system solving. Our experiments show that our algorithm can outperform state-of-the-art algebraic algorithms by an order of magnitude in terms of accuracy, computation time, and memory consumption.
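For orientation, the sketch below spells out what the canonical polyadic decomposition computes for a third-order tensor and fits it with a plain alternating-least-squares loop in NumPy. It is only a generic baseline for reference, not the normal-form algorithm proposed in the paper; the helper names cp_to_tensor and cp_als are illustrative.

```python
import numpy as np

def cp_to_tensor(A, B, C):
    # Assemble a third-order tensor from CP factor matrices:
    # T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_als(T, rank, iters=200, seed=0):
    # Fit a rank-`rank` CP decomposition by alternating least squares
    # (normal-equation form of each factor update).
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a random rank-5 tensor of format 10 x 10 x 10 and recover a CP fit.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((10, 5)) for _ in range(3))
T = cp_to_tensor(A0, B0, C0)
A, B, C = cp_als(T, rank=5)
print(np.linalg.norm(cp_to_tensor(A, B, C) - T) / np.linalg.norm(T))  # small relative error
```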




Read also

122 - Karim Halaseh, Tommi Muller, 2020
In this paper we study the problem of recovering a tensor network decomposition of a given tensor $\mathcal{T}$ in which the tensors at the vertices of the network are orthogonally decomposable. Specifically, we consider tensor networks in the form of tensor trains (aka matrix product states). When the tensor train has length 2, and the orthogonally decomposable tensors at the two vertices of the network are symmetric, we show how to recover the decomposition by considering random linear combinations of slices. Furthermore, if the tensors at the vertices are symmetric but not orthogonally decomposable, we show that a whitening procedure can transform the problem into an orthogonal one, thereby yielding a solution for the decomposition of the tensor. When the tensor network has length 3 or more and the tensors at the vertices are symmetric and orthogonally decomposable, we provide an algorithm for recovering them subject to some rank conditions. Finally, in the case of tensor trains of length two in which the tensors at the vertices are orthogonally decomposable but not necessarily symmetric, we show that the decomposition problem reduces to a novel matrix decomposition problem: that of an orthogonal matrix multiplied by diagonal matrices on either side. We provide two solutions for the full-rank tensor case using Sinkhorn's theorem and the Procrustes algorithm, respectively, and show that the Procrustes-based solution can be generalized to any rank case. We conclude with a multitude of open problems in linear and multilinear algebra that arose in our study.
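As a minimal illustration of the slice-combination idea for a single symmetric orthogonally decomposable third-order tensor (not the full tensor-train procedure of the paper): a random linear combination of the frontal slices is a symmetric matrix whose eigenvectors recover the orthonormal factors. All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 4

# Symmetric orthogonally decomposable tensor: T = sum_i lam[i] * v_i (x) v_i (x) v_i
# with orthonormal vectors v_i (the columns of V).
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
lam = rng.uniform(1.0, 2.0, r)
T = np.einsum('r,ir,jr,kr->ijk', lam, V, V, V)

# Random linear combination of the frontal slices T[:, :, k]:
# M = sum_k w[k] T[:, :, k] = sum_i lam[i] (v_i . w) v_i v_i^T,
# so for generic w the eigenvectors of the symmetric matrix M are the v_i (up to sign).
w = rng.standard_normal(n)
M = np.einsum('ijk,k->ij', T, w)
eigvals, U = np.linalg.eigh(M)

# Keep the r eigenvectors with largest |eigenvalue| and read off the weights
# as lam_i = T(v_i, v_i, v_i); signs of the v_i are immaterial for the decomposition.
U = U[:, np.argsort(-np.abs(eigvals))[:r]]
lam_rec = np.einsum('ijk,ir,jr,kr->r', T, U, U, U)
T_rec = np.einsum('r,ir,jr,kr->ijk', lam_rec, U, U, U)
print(np.linalg.norm(T_rec - T) / np.linalg.norm(T))  # ~ 0
```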
Quaternion matrix approximation problems construct the approximated matrix via the quaternion singular value decomposition (SVD) by selecting some SVD triplets of quaternion matrices. In applications such as color image processing and recognition problems, only a small number of dominant SVD triplets are selected, while in applications such as the quaternion total least squares problem, small SVD triplets (small singular values and associated singular vectors) and the numerical rank with respect to a small threshold are required. In this paper, we propose a randomized quaternion SVD (randsvdQ) method to compute a small number of SVD triplets of a large-scale quaternion matrix. Theoretical results are given about approximation errors, and the corresponding algorithm adapts to the low-rank matrix approximation problem. When the restricted rank increases, it might lead to information loss of small SVD triplets. The blocked quaternion randomized SVD algorithm is then developed for when the numerical rank and information about small singular values are required. For color face recognition problems, numerical results show good performance of the developed quaternion randomized SVD method for low-rank approximation of a large-scale quaternion matrix. The blocked randomized SVD algorithm is also shown to be more robust than the unblocked method through several experiments, and approximation errors from the blocked scheme are very close to the optimal error obtained by truncating a full SVD.
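The range-finder idea underlying randomized SVD is sketched below for a real matrix in NumPy; the quaternion arithmetic and the blocked scheme of the paper are omitted, so this is only a real-valued analogue, and the function name randomized_svd is illustrative.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    # Randomized SVD of a real matrix (real-valued analogue of the idea;
    # the paper's method operates on quaternion matrices).
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for the sampled range
    B = Q.T @ A                             # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]   # approximate leading SVD triplets

# Demo on an exactly low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, k=40)
print(np.linalg.norm(U * s @ Vt - A) / np.linalg.norm(A))  # small relative error
```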
Recovery of low-rank matrices from a small number of linear measurements is now well known to be possible under various model assumptions on the measurements. Such results demonstrate robustness and are backed by provable theoretical guarantees. However, extensions to tensor recovery have only recently begun to be studied and developed, despite an abundance of practical tensor applications. Recently, a tensor variant of the Iterative Hard Thresholding method was proposed, and theoretical results were obtained that guarantee exact recovery of tensors with low Tucker rank. In this paper, we utilize the same tensor version of the Restricted Isometry Property (RIP) to extend these results to tensors with low CANDECOMP/PARAFAC (CP) rank. In doing so, we leverage recent results on efficient approximations of CP decompositions that remove the need for challenging assumptions in prior works. We complement our theoretical findings with empirical results that showcase the potential of the approach.
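A rough sketch of tensor Iterative Hard Thresholding with a CP-rank projection is given below. It assumes the TensorLy package (its parafac and cp_to_tensor routines) for the approximate CP truncation and a dense Gaussian measurement matrix; the step-size and stopping rules analyzed in the paper are not reproduced.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac   # assumed available (TensorLy)

def tensor_iht(y, Phi, shape, rank, iters=100):
    # Iterative Hard Thresholding for tensors: a gradient step on the
    # least-squares data term followed by projection onto (approximately)
    # CP-rank-`rank` tensors via an ALS-based CP fit.
    X = np.zeros(shape)
    for _ in range(iters):
        G = X + (Phi.T @ (y - Phi @ X.ravel())).reshape(shape)   # gradient step
        X = tl.cp_to_tensor(parafac(tl.tensor(G), rank=rank))    # hard threshold
    return X

# Demo: recover an 8 x 8 x 8 tensor of CP rank 3 from 300 Gaussian measurements.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((8, 3)) for _ in range(3))
X_true = np.einsum('ir,jr,kr->ijk', A, B, C)
Phi = rng.standard_normal((300, X_true.size)) / np.sqrt(300)
y = Phi @ X_true.ravel()
X_rec = tensor_iht(y, Phi, X_true.shape, rank=3)
print(np.linalg.norm(X_rec - X_true) / np.linalg.norm(X_true))
```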
123 - Chao Zeng, 2021
The orthogonal decomposition factorizes a tensor into a sum of an orthogonal list of rank-one tensors. We present several properties of orthogonal rank. We find that a subtensor may have a larger orthogonal rank than the whole tensor and prove the lower semicontinuity of orthogonal rank. The lower semicontinuity guarantees the existence of low orthogonal rank approximations. To fit the orthogonal decomposition, we propose an algorithm based on the augmented Lagrangian method and guarantee the orthogonality by a novel orthogonalization procedure. Numerical experiments show that the proposed method has a great advantage over the existing methods for strongly orthogonal decompositions in terms of the approximation error.
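The orthogonality requirement on the rank-one terms can be checked directly, as in the small NumPy example below; this only illustrates the definition and is not the augmented-Lagrangian fitting algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3

# Columns of U, V, W are orthonormal, so the rank-one terms u_i (x) v_i (x) w_i
# form an orthogonal list: <u_i(x)v_i(x)w_i, u_j(x)v_j(x)w_j>
#   = (u_i . u_j)(v_i . v_j)(w_i . w_j) = 0 for i != j.
U, V, W = (np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(3))
sigma = np.array([3.0, 2.0, 1.0])
T = np.einsum('r,ir,jr,kr->ijk', sigma, U, V, W)

# Gram matrix of the rank-one terms: diagonal <=> orthogonal decomposition.
terms = np.einsum('ir,jr,kr->rijk', U, V, W)            # shape (r, n, n, n)
gram = np.einsum('rijk,sijk->rs', terms, terms)
print(np.allclose(gram, np.eye(r)))                     # True

# The norm of T then splits over the terms: ||T||^2 = sum_i sigma_i^2.
print(np.linalg.norm(T)**2, np.sum(sigma**2))
```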
We describe a simple, black-box compression format for tensors with a multiscale structure. By representing the tensor as a sum of compressed tensors defined on increasingly coarse grids, we capture low-rank structures on each grid-scale, and we show how this leads to an increase in compression for a fixed accuracy. We devise an alternating algorithm to represent a given tensor in the multiresolution format and prove local convergence guarantees. In two dimensions, we provide examples that show that this approach can beat the Eckart-Young theorem, and for dimensions higher than two, we achieve higher compression than the tensor-train format on six real-world datasets. We also provide results on the closedness and stability of the tensor format and discuss how to perform common linear algebra operations on the level of the compressed tensors.
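A toy two-dimensional, two-scale sketch of the multiresolution idea follows: compress on a coarse grid first, then compress the residual on the fine grid. This greedy pass is only meant to convey the format; the paper fits all scales jointly with an alternating algorithm, and the block-averaging restriction and piecewise-constant prolongation used here are assumptions.

```python
import numpy as np

def truncated_svd(A, k):
    # Best rank-k approximation (Eckart-Young) of a matrix.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def coarsen(A, f):
    # Average over f x f blocks (restriction to a coarser grid).
    n, m = A.shape
    return A.reshape(n // f, f, m // f, f).mean(axis=(1, 3))

def refine(A, f):
    # Piecewise-constant prolongation back to the fine grid.
    return np.kron(A, np.ones((f, f)))

# Greedy two-scale compression: coarse low-rank part + low-rank residual.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
A = np.sin(8 * np.outer(x, x)) + 0.1 * rng.standard_normal((256, 256))

coarse = truncated_svd(coarsen(A, 4), k=5)       # rank-5 part on the 64 x 64 grid
residual = A - refine(coarse, 4)
fine = truncated_svd(residual, k=5)              # rank-5 correction on the fine grid
approx = refine(coarse, 4) + fine
print(np.linalg.norm(A - approx) / np.linalg.norm(A))
```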