There is a significant expansion in both the volume and range of applications, along with a concomitant increase in the variety of data sources. These ever-expanding trends have highlighted the necessity for more versatile analysis tools that offer greater opportunities for algorithmic development and computationally faster operations than the standard flat-view matrix approach. Tensors, or multi-way arrays, provide such an algebraic framework, naturally suited to data of such large volume, diversity, and veracity. Indeed, the associated tensor decompositions have demonstrated their potential in breaking the Curse of Dimensionality associated with traditional matrix methods, whereby a necessary exponential increase in data volume has adverse, or even intractable, consequences for computational complexity. A key tool underpinning the multi-linear manipulation of tensors and tensor networks is the standard Tensor Contraction Product (TCP). However, depending on the dimensionality of the underlying tensors, the TCP also comes at the price of high computational complexity in tensor manipulation. In this work, we resort to diagrammatic tensor network manipulation to calculate such products in an efficient and computationally tractable manner, by making use of the Tensor Train decomposition (TTD). This renders the underlying concepts easy to perceive, thereby enhancing intuition about the associated operations, while preserving mathematical rigour. In addition to bypassing cumbersome multi-linear expressions, the proposed Tensor Train Contraction Product model is shown to significantly accelerate the underlying computations, as its cost is independent of the tensor order and linear in the tensor dimension, as opposed to the standard approach, whose cost is exponential in the tensor order.
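A minimal NumPy sketch of the idea: the same inner product is computed once by a full contraction and once core-by-core in TT format, where the per-core cost is linear in the mode size and independent of the tensor order. The helper names (tt_random, tt_full, tt_inner) are illustrative rather than taken from the paper.

import numpy as np

def tt_random(dims, rank, rng):
    # Random TT cores G_k of shape (r_{k-1}, n_k, r_k), with boundary ranks equal to 1.
    ranks = [1] + [rank] * (len(dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(dims)]

def tt_full(cores):
    # Reconstruct the dense tensor (exponential storage; for checking only).
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]

def tt_inner(cores_a, cores_b):
    # Inner product <A, B> evaluated directly on the TT cores.
    v = np.ones((1, 1))
    for a, b in zip(cores_a, cores_b):
        tmp = np.tensordot(v, a, axes=([0], [0]))        # (r_b, n_k, r_a')
        v = np.tensordot(tmp, b, axes=([0, 1], [0, 1]))  # (r_a', r_b')
    return v[0, 0]

rng = np.random.default_rng(0)
dims = (4, 5, 6, 3)
A, B = tt_random(dims, 2, rng), tt_random(dims, 3, rng)

direct = np.tensordot(tt_full(A), tt_full(B), axes=len(dims))  # full contraction
via_tt = tt_inner(A, B)                                        # core-by-core contraction
print(np.allclose(direct, via_tt))  # the two agree up to floating-point error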
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We
In this paper, we consider the tensor completion problem, which has attracted particular attention from researchers in machine learning. Our fast and precise method is built on extending the $L_{2,1}$-norm minimization and QR decomposition (LNM-
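A generic sketch of the completion setup, for orientation only: observed entries are kept fixed while an unfolding is repeatedly projected onto a low-rank factorization. The SVD truncation below stands in for the $L_{2,1}$-norm / QR-based factorization, so this is a baseline illustration and not the proposed algorithm.

import numpy as np

def complete_mode1(tensor, mask, rank=2, iters=100):
    # Fill missing entries (mask == False) by alternating a rank-r projection of the
    # mode-1 unfolding with re-imposing the observed entries.
    shape = tensor.shape
    X = np.where(mask, tensor, 0.0)
    for _ in range(iters):
        M = X.reshape(shape[0], -1)                        # mode-1 unfolding
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        M_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank-r projection
        X = np.where(mask, tensor, M_low.reshape(shape))   # keep observed data fixed
    return X

rng = np.random.default_rng(1)
T = np.einsum('i,j,k->ijk', *[rng.standard_normal(n) for n in (6, 5, 4)])  # rank-1 ground truth
mask = rng.random(T.shape) < 0.7                                           # ~70% of entries observed
print(np.max(np.abs(complete_mode1(T, mask, rank=1) - T)))                 # reconstruction error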
The orthogonal decomposition factorizes a tensor into a sum of an orthogonal list of rank-one tensors. We present several properties of orthogonal rank. We find that a subtensor may have a larger orthogonal rank than the whole tensor and prove the low
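Under one common convention, the decomposition and the associated notion of orthogonal rank can be written as follows (the paper's precise orthogonality requirement may be stronger, e.g. mode-wise):

$$
\mathcal{T} \;=\; \sum_{i=1}^{r} \sigma_i \, u_i^{(1)} \circ u_i^{(2)} \circ \cdots \circ u_i^{(d)},
\qquad
\big\langle u_i^{(1)} \circ \cdots \circ u_i^{(d)},\; u_j^{(1)} \circ \cdots \circ u_j^{(d)} \big\rangle = 0 \ \ (i \neq j),
$$

with the orthogonal rank $\operatorname{rank}_{\perp}(\mathcal{T})$ defined as the smallest $r$ admitting such a representation.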
Perturbation analysis has long been regarded as one of the central issues in many fields, and considerable progress has been made, especially for matrices. In this paper, we turn our attention to the perturbat
Tensor Train decomposition is used across many branches of machine learning. We present T3F -- a library for Tensor Train decomposition based on TensorFlow. T3F supports GPU execution, batch processing, automatic differentiation, and versatile functi
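A minimal usage sketch, assuming the documented t3f calls to_tt_tensor, full and random_tensor and an eager TensorFlow 2 environment; the exact signatures should be checked against the T3F documentation rather than taken from here.

import numpy as np
import tensorflow as tf
import t3f

dense = tf.constant(np.random.rand(8, 8, 8), dtype=tf.float32)
tt = t3f.to_tt_tensor(dense, max_tt_rank=4)      # TT-decompose a dense tensor (assumed signature)
approx = t3f.full(tt)                            # convert the TT object back to a dense tensor
print(float(tf.reduce_max(tf.abs(dense - approx))))  # approximation error of the rank-4 TT

rand_tt = t3f.random_tensor((8, 8, 8), tt_rank=3)    # a random tensor created directly in TT format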