This paper considers the completion problem for a tensor (also referred to as a multidimensional array) from limited sampling. Our greedy method extends the low-rank approximation pursuit (LRAP) method for matrix completion to tensor completion. The method performs a tensor factorization using the tensor singular value decomposition (t-SVD), which extends the standard matrix SVD to tensors and leads to a notion of rank called the tubal rank. The goal is to recover the underlying tensor from the limited samples as accurately as possible; for the completion to succeed, we assume that the given tensor data has low tubal rank. For tensors of low tubal rank, we establish convergence results for our method based on the tensor restricted isometry property (TRIP). Our result under the TRIP condition for tensors parallels results for low-rank matrix completion under the RIP condition: the TRIP condition is formulated via the t-SVD for low-tubal-rank tensors, just as the RIP is formulated via the SVD for matrices. We show that a subgaussian measurement map satisfies the TRIP condition with high probability and give an almost optimal bound on the number of required measurements. We compare the numerical performance of the proposed algorithm with that of state-of-the-art approaches on video recovery and color image recovery.
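As a minimal illustration of the t-SVD underlying the method above (not the authors' implementation; the function and variable names below are ours), the decomposition reduces to ordinary matrix SVDs of the frontal slices in the Fourier domain along the third mode, and the tubal rank is the number of nonzero singular tubes:

```python
import numpy as np

def t_svd(A):
    """t-SVD of an n1 x n2 x n3 tensor: FFT along mode 3, one matrix SVD
    per frontal slice in the Fourier domain, then the inverse FFT.
    The tubal rank is the number of nonzero tubes of S."""
    n1, n2, n3 = A.shape
    Ahat = np.fft.fft(A, axis=2)          # frontal slices in Fourier domain
    k = min(n1, n2)
    Uhat = np.zeros((n1, k, n3), dtype=complex)
    Shat = np.zeros((k, k, n3), dtype=complex)
    Vhat = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):                    # one matrix SVD per slice
        u, s, vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
        Uhat[:, :, i] = u
        Shat[:, :, i] = np.diag(s)
        Vhat[:, :, i] = vh.conj().T
    # a real input tensor yields (numerically) real factors
    return (np.fft.ifft(Uhat, axis=2).real,
            np.fft.ifft(Shat, axis=2).real,
            np.fft.ifft(Vhat, axis=2).real)
```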
Information is extracted from large and sparse data sets organized as 3-mode tensors. Two methods are described, based on best rank-(2,2,2) and rank-(2,2,1) approximation of the tensor. The first method can be considered as a generalization of spectral graph partitioning to tensors, and it gives a reordering of the tensor that clusters the information. The second method gives an expansion of the tensor in sparse rank-(2,2,1) terms, where the terms correspond to graphs. The low-rank approximations are computed using an efficient Krylov-Schur type algorithm that avoids filling in the sparse data. The methods are applied to topic search in news text, a tensor representing conference author-terms-years, and network traffic logs.
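For a rough sense of what a best rank-(2,2,2) approximation computes, a dense higher-order orthogonal iteration (HOOI) sketch is given below; this is only an illustrative stand-in for the sparse Krylov-Schur type algorithm used in the work above, and the helper names are ours:

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_mult(T, M, n):
    """Mode-n product T x_n M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def hooi(A, ranks=(2, 2, 2), iters=25):
    """Best rank-(2,2,2) approximation by higher-order orthogonal
    iteration: each factor is refreshed from the leading left singular
    vectors of the tensor contracted with the other two factors."""
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(iters):
        for n in range(3):
            Y = A
            for m in range(3):
                if m != n:
                    Y = mode_mult(Y, U[m].T, m)
            U[n] = np.linalg.svd(unfold(Y, n),
                                 full_matrices=False)[0][:, :ranks[n]]
    core = A
    for m in range(3):
        core = mode_mult(core, U[m].T, m)
    return core, U   # A is approximated by core x_1 U[0] x_2 U[1] x_3 U[2]
```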
The problem of partitioning a large and sparse tensor is considered, where the tensor consists of a sequence of adjacency matrices. Theory is developed that is a generalization of spectral graph partitioning. A best rank-$(2,2,\lambda)$ approximation is computed for $\lambda=1,2,3$, and the partitioning is computed from the orthogonal matrices and the core tensor of the approximation. It is shown that if the tensor has a certain reducibility structure, then the solution of the best approximation problem exhibits the reducibility structure of the tensor. Further, if the tensor is close to being reducible, then the solution of the best approximation problem still exhibits the structure of the tensor. Numerical examples with synthetic data corroborate the theoretical results. Experiments with tensors from applications show that the method can be used to extract relevant information from large, sparse, and noisy data.
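In classical spectral partitioning, the sign pattern of the second eigenvector induces the split; a plausible, purely illustrative tensor analogue splits each mode by the sign of the second column of its orthogonal factor from the rank-$(2,2,\lambda)$ approximation. The helper below is an assumption of ours, not the paper's procedure:

```python
import numpy as np

def sign_partition(U):
    """Split the indices of one mode by the sign pattern of the second
    column of its orthogonal factor, mimicking how the Fiedler vector
    splits a graph in spectral partitioning. Illustrative only: the
    paper derives the partition from the factors and the core tensor."""
    v = U[:, 1]
    return np.flatnonzero(v >= 0), np.flatnonzero(v < 0)
```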
We describe a simple, black-box compression format for tensors with a multiscale structure. By representing the tensor as a sum of compressed tensors defined on increasingly coarse grids, we capture low-rank structures on each grid-scale, and we show how this leads to an increase in compression for a fixed accuracy. We devise an alternating algorithm to represent a given tensor in the multiresolution format and prove local convergence guarantees. In two dimensions, we provide examples that show that this approach can beat the Eckart-Young theorem, and for dimensions higher than two, we achieve higher compression than the tensor-train format on six real-world datasets. We also provide results on the closedness and stability of the tensor format and discuss how to perform common linear algebra operations on the level of the compressed tensors.
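A toy two-dimensional version of the multiscale idea can be sketched greedily (the paper instead fits all levels jointly with its alternating algorithm; the pooling and upsampling choices and all names below are ours):

```python
import numpy as np

def pool(A):   # coarsen one level: 2x2 mean pooling
    m, n = A.shape
    return A.reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))

def up(A):     # prolongate one level: nearest-neighbor upsampling
    return np.repeat(np.repeat(A, 2, axis=0), 2, axis=1)

def lowrank(A, r):   # best rank-r term on the current grid (Eckart-Young)
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]

def multires(A, levels=3, r=1):
    """Greedy sketch of the multiscale format: fit a low-rank term on
    each grid scale and subtract its prolongation from the residual.
    Assumes both sides of A are divisible by 2**(levels-1)."""
    R, terms = A.astype(float), []
    for l in range(levels):
        C = R
        for _ in range(l):          # move the residual to the 2**l-coarse grid
            C = pool(C)
        C = lowrank(C, r)
        terms.append(C)
        P = C
        for _ in range(l):          # prolongate back to the fine grid
            P = up(P)
        R = R - P
    return terms, R                 # A is the sum of prolongated terms plus R
```

The appeal of the format is visible even in this greedy sketch: a rank-$r$ term on the $2^l$-coarse grid stores roughly $2^{-l}$ times as many entries as a fine-grid term of the same rank.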
In this paper, we consider the tensor completion problem, which has attracted particular attention from researchers in machine learning. Our fast and precise method is built on extending the $L_{2,1}$-norm minimization and QR decomposition (LNM-QR) method for matrix completion to tensor completion, and differs from the popular tensor completion methods built directly on the tensor singular value decomposition (t-SVD). To shorten the computing time, the t-SVD is replaced by a method that computes an approximate t-SVD based on the QR decomposition (CTSVD-QR), which iteratively computes the largest $r$ ($r>0$) singular values (tubes) and their associated singular vectors (of tubes). In addition, we minimize our model with the tensor $L_{2,1}$-norm instead of the tensor nuclear norm because it is easier to optimize. To improve accuracy, the alternating direction method of multipliers (ADMM) plays a crucial part in our method. Numerical experimental results show that our method is faster than state-of-the-art algorithms while achieving excellent accuracy.
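In the matrix setting, the idea of extracting leading singular triplets from QR factorizations can be sketched as a block power iteration with QR re-orthogonalization; this is a hedged sketch of the general technique, not necessarily the paper's exact CTSVD-QR scheme, and applying it slice-wise in the Fourier domain would yield a tensor version:

```python
import numpy as np

def approx_svd_qr(A, r, iters=10, seed=0):
    """Approximate top-r singular triplets of a matrix A by block power
    iteration, re-orthogonalizing with QR at every step."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[1], r)))[0]
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)      # refine the left subspace
        Q, _ = np.linalg.qr(A.T @ Q)    # refine the right subspace
    # a small r x r SVD recovers the approximate triplets
    U, s, Wt = np.linalg.svd(A @ Q, full_matrices=False)
    return U, s, Q @ Wt.T   # A is approximately U @ np.diag(s) @ (Q @ Wt.T).T
```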
Higher-order low-rank tensors arise in many data processing applications and have attracted great interest. Inspired by low-rank approximation theory, researchers have proposed a series of effective tensor completion methods. However, most of these methods directly consider only the global low-rankness of the underlying tensor, which is not sufficient at low sampling rates; in addition, a single nuclear norm or its relaxation is usually adopted to approximate the rank function, which can lead to a suboptimal solution that deviates from the original one. To alleviate these problems, we propose in this paper a novel low-rank approximation of tensor multi-modes (LRATM), in which a double nonconvex $L_{\gamma}$ norm is designed to represent the underlying joint manifold drawn from the mode factorization factors of the underlying tensor. An algorithm based on the block successive upper-bound minimization method is designed to efficiently solve the proposed model, and it can be shown that our numerical scheme converges to the coordinatewise minimizers. Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors with significantly fewer samples than the compared methods.
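To make the multi-mode ingredient concrete, the sketch below evaluates a nonconvex low-rank surrogate on all three mode unfoldings instead of a single global rank; the concave surrogate used here is a generic stand-in of ours and may differ from the paper's exact $L_{\gamma}$ norm:

```python
import numpy as np

def multimode_penalty(T, gamma=0.5):
    """Evaluate a nonconvex low-rank surrogate on every mode unfolding,
    illustrating the multi-mode idea: penalize the spectra of all
    unfoldings rather than a single global rank. The concave surrogate
    g is a generic stand-in, not necessarily the paper's L_gamma norm."""
    g = lambda s: (1.0 + gamma) * s / (gamma + s)   # concave in s >= 0
    total = 0.0
    for n in range(T.ndim):
        M = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)  # mode-n unfolding
        total += g(np.linalg.svd(M, compute_uv=False)).sum()
    return total
```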