Tensor decompositions such as the canonical format and the tensor train format are widely used to reduce the storage cost and operational complexity of high-dimensional data, achieving linear rather than exponential scaling in the input dimension. In this paper, we investigate representations with even lower storage cost in the tensor ring format, an extension of the tensor train format with variable end-ranks. First, we introduce two algorithms for converting a tensor in full format to tensor ring format with low storage cost. Second, we detail a rounding operation for tensor rings and show that obtaining storage-cost savings requires new definitions of common linear algebra operations in the format. Last, we introduce algorithms for transforming the graph structure of graph-based tensor formats with orders of magnitude lower complexity than in the existing literature. The efficiency of all algorithms is demonstrated on a number of numerical examples, and in certain cases we obtain significantly higher compression ratios than previous approaches based on the tensor ring format.
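To make the full-to-ring conversion step concrete, below is a minimal NumPy sketch of one common way to carry it out: a sequential-SVD scheme in the spirit of TR-SVD-type algorithms, in which the first SVD is truncated to a prescribed split rank r1*r2 and its rank index is reshaped into the two ring legs. The function names `tr_svd` and `tr_to_full`, the fixed split-rank interface, and the tolerance-based truncation of the interior cores are illustrative assumptions, not the low-storage algorithms proposed in the paper.

```python
import numpy as np

def tr_svd(x, r1, r2, tol=1e-12):
    """Sketch: convert a full tensor to tensor ring cores.

    Core k has shape (r_k, n_k, r_{k+1}), with the ring closure
    r_{d+1} = r_1. The first SVD is truncated to the prescribed split
    rank r1*r2; interior cores are truncated by singular-value tolerance.
    """
    dims, d = x.shape, x.ndim

    # First unfolding: n_1 x (n_2*...*n_d), truncated to split rank r1*r2.
    u, s, vt = np.linalg.svd(x.reshape(dims[0], -1), full_matrices=False)
    if r1 * r2 > len(s):
        raise ValueError("split rank r1*r2 exceeds the first unfolding size")
    u, s, vt = u[:, :r1 * r2], s[:r1 * r2], vt[:r1 * r2]
    # Split the rank index into the two ring legs: (r1, n1, r2).
    cores = [u.reshape(dims[0], r1, r2).transpose(1, 0, 2)]

    # The remainder carries both ring legs; move the r1 leg to the far
    # end so that the ring closes on the last core.
    w = (s[:, None] * vt).reshape(r1, r2, *dims[1:])
    w = np.moveaxis(w, 0, -1)

    # Sequential truncated SVDs for the interior cores, as in TT-SVD.
    left = r2
    for k in range(1, d - 1):
        u, s, vt = np.linalg.svd(w.reshape(left * dims[k], -1),
                                 full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))  # numerical rank
        cores.append(u[:, :rk].reshape(left, dims[k], rk))
        w = s[:rk, None] * vt[:rk]
        left = rk

    # The last core absorbs the trailing r1 leg, closing the ring.
    cores.append(w.reshape(left, dims[-1], r1))
    return cores

def tr_to_full(cores):
    """Contract tensor ring cores back to full format (for verification)."""
    full = cores[0]
    for g in cores[1:]:
        full = np.tensordot(full, g, axes=([-1], [0]))
    return np.trace(full, axis1=0, axis2=full.ndim - 1)  # close the ring

# Demo: a tensor generated from random ring cores is recovered exactly
# (up to rounding), since no significant singular values are discarded.
rng = np.random.default_rng(0)
gen = [rng.standard_normal((2, 8, 3)),
       rng.standard_normal((3, 9, 4)),
       rng.standard_normal((4, 10, 2))]
x = tr_to_full(gen)
cores = tr_svd(x, r1=2, r2=3)
print(np.linalg.norm(x - tr_to_full(cores)) / np.linalg.norm(x))
```

Setting r1 = 1 recovers the tensor train format as the special case with trivial end-ranks. Note also that such naive sequential schemes can inflate the interior ranks well beyond those of the generating cores, since the first SVD fixes an arbitrary orthogonal mixing of the two ring legs; this storage suboptimality is one motivation for the dedicated low-storage conversion and rounding algorithms described above.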