
Spectral tensor-train decomposition

Added by Daniele Bigoni
Publication date: 2014
Language: English





The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the cores) comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to $100$, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online.
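To make the construction concrete, the following NumPy sketch (not the authors' released code, and without the TT-DMRG-cross sampler) discretizes a function on a Chebyshev grid, computes a TT decomposition by successive truncated SVDs, and fits Chebyshev interpolants to the univariate fibers of each core; the function names, tolerance, and polynomial degree are illustrative choices only.

import numpy as np
from numpy.polynomial import chebyshev as C

def tt_svd(tensor, eps=1e-12):
    # illustrative TT decomposition of a full tensor by successive truncated SVDs
    shape, d = tensor.shape, tensor.ndim
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))        # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def chebyshev_cores(cores, grid, deg):
    # fit Chebyshev polynomials to each core's univariate fibers
    poly = []
    for G in cores:
        rl, n, rr = G.shape
        coef = C.chebfit(grid, G.transpose(1, 0, 2).reshape(n, rl * rr), deg)
        poly.append((coef, rl, rr))
    return poly

def evaluate(poly, x):
    # evaluate the spectral TT surrogate at a point x in [-1, 1]^d
    v = np.ones((1, 1))
    for (coef, rl, rr), xk in zip(poly, x):
        v = v @ C.chebval(xk, coef).reshape(rl, rr)
    return float(v[0, 0])

# example: a smooth 4-dimensional function sampled on a Chebyshev-Lobatto grid
d, n, deg = 4, 17, 12
grid = np.cos(np.pi * np.arange(n) / (n - 1))
X = np.meshgrid(*([grid] * d), indexing="ij")
F = np.exp(-sum(x**2 for x in X))
surrogate = chebyshev_cores(tt_svd(F), grid, deg)
pt = np.array([0.3, -0.1, 0.7, 0.2])
print(evaluate(surrogate, pt), np.exp(-np.sum(pt**2)))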



Related research

The hierarchical SVD provides a quasi-best low rank approximation of high dimensional data in the hierarchical Tucker framework. Similar to the SVD for matrices, it provides a fundamental but expensive tool for tensor computations. In the present work we examine generalizations of randomized matrix decomposition methods to higher order tensors in the framework of the hierarchical tensors representation. In particular we present and analyze a randomized algorithm for the calculation of the hierarchical SVD (HSVD) for the tensor train (TT) format.
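As a rough illustration of the idea (this is not the algorithm analyzed in that work; the target ranks and oversampling parameter are arbitrary choices), a randomized range finder can be applied to successive unfoldings of a full tensor to produce TT cores without computing full SVDs:

import numpy as np

def randomized_tt(tensor, ranks, oversample=5, seed=0):
    # illustrative sketch: orthonormalize a Gaussian sketch of each unfolding
    # and use the resulting basis as the next TT core
    rng = np.random.default_rng(seed)
    shape, d = tensor.shape, tensor.ndim
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        omega = rng.standard_normal((mat.shape[1], ranks[k] + oversample))
        Q, _ = np.linalg.qr(mat @ omega)
        Q = Q[:, :ranks[k]]
        cores.append(Q.reshape(r_prev, shape[k], -1))
        mat = (Q.T @ mat).reshape(Q.shape[1] * shape[k + 1], -1)
        r_prev = Q.shape[1]
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# quick check on a rank-1 test tensor
A = np.einsum("i,j,k,l->ijkl", *[np.linspace(0.0, 1.0, 10)] * 4)
B = randomized_tt(A, ranks=[3, 3, 3])
full = np.einsum("aib,bjc,ckd,dle->ijkl", *B)
print(np.linalg.norm(full - A) / np.linalg.norm(A))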
Tensor Train decomposition is used across many branches of machine learning. We present T3F -- a library for Tensor Train decomposition based on TensorFlow. T3F supports GPU execution, batch processing, automatic differentiation, and versatile functionality for the Riemannian optimization framework, which takes into account the underlying manifold structure to construct efficient optimization methods. The library makes it easier to implement machine learning papers that rely on the Tensor Train decomposition. T3F includes documentation, examples and 94% test coverage.
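A minimal usage sketch follows; it assumes a working TensorFlow 2 installation and that the t3f entry points to_tt_tensor and full behave as in the library's documentation, so treat it as an illustration rather than a verified snippet.

import tensorflow as tf
import t3f  # illustrative usage; check the t3f docs for the exact API in your version

a = tf.random.normal((8, 8, 8, 8))
a_tt = t3f.to_tt_tensor(a, max_tt_rank=6)   # compress a dense tensor into TT format
a_back = t3f.full(a_tt)                     # reconstruct the dense tensor
err = tf.norm(a - a_back) / tf.norm(a)      # relative approximation error
print(float(err))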
There is a significant expansion in both volume and range of applications along with the concomitant increase in the variety of data sources. These ever-expanding trends have highlighted the necessity for more versatile analysis tools that offer greater opportunities for algorithmic developments and computationally faster operations than the standard flat-view matrix approach. Tensors, or multi-way arrays, provide such an algebraic framework which is naturally suited to data of such large volume, diversity, and veracity. Indeed, the associated tensor decompositions have demonstrated their potential in breaking the Curse of Dimensionality associated with traditional matrix methods, where a necessary exponential increase in data volume leads to adverse or even intractable consequences on computational complexity. A key tool underpinning multi-linear manipulation of tensors and tensor networks is the standard Tensor Contraction Product (TCP). However, depending on the dimensionality of the underlying tensors, the TCP also comes at the price of high computational complexity in tensor manipulation. In this work, we resort to diagrammatic tensor network manipulation to calculate such products in an efficient and computationally tractable manner, by making use of Tensor Train decomposition (TTD). This has rendered the underlying concepts easy to perceive, thereby enhancing intuition of the associated underlying operations, while preserving mathematical rigour. In addition to bypassing the cumbersome mathematical multi-linear expressions, the proposed Tensor Train Contraction Product model is shown to accelerate significantly the underlying computational operations, as it is independent of tensor order and linear in the tensor dimension, as opposed to performing the full computations through the standard approach (exponential in tensor order).
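To illustrate the complexity point (a generic NumPy sketch, not the proposed Tensor Train Contraction Product model; the (r_left, n, r_right) core layout is an assumption), the inner product of two tensors held in TT format can be contracted core by core, so the cost grows linearly with the tensor order rather than exponentially:

import numpy as np

def tt_inner(cores_a, cores_b):
    # contract the shared physical index core by core, left to right
    v = np.ones((1, 1))
    for Ga, Gb in zip(cores_a, cores_b):
        v = np.einsum("ab,aic,bid->cd", v, Ga, Gb)
    return float(v[0, 0])

# check against a full contraction on a small 4th-order example
rng = np.random.default_rng(0)
shape, ranks = (4, 4, 4, 4), [1, 3, 3, 3, 1]
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1])) for k in range(4)]
full = np.einsum("aib,bjc,ckd,dle->ijkl", *cores)
print(tt_inner(cores, cores), np.sum(full ** 2))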
Karim Halaseh, Tommi Muller (2020)
In this paper we study the problem of recovering a tensor network decomposition of a given tensor $\mathcal{T}$ in which the tensors at the vertices of the network are orthogonally decomposable. Specifically, we consider tensor networks in the form of tensor trains (aka matrix product states). When the tensor train has length 2, and the orthogonally decomposable tensors at the two vertices of the network are symmetric, we show how to recover the decomposition by considering random linear combinations of slices. Furthermore, if the tensors at the vertices are symmetric but not orthogonally decomposable, we show that a whitening procedure can transform the problem into an orthogonal one, thereby yielding a solution for the decomposition of the tensor. When the tensor network has length 3 or more and the tensors at the vertices are symmetric and orthogonally decomposable, we provide an algorithm for recovering them subject to some rank conditions. Finally, in the case of tensor trains of length two in which the tensors at the vertices are orthogonally decomposable but not necessarily symmetric, we show that the decomposition problem reduces to a novel matrix decomposition problem: that of an orthogonal matrix multiplied by diagonal matrices on either side. We provide two solutions for the full-rank tensor case using Sinkhorn's theorem and the Procrustes algorithm, respectively, and show that the Procrustes-based solution can be generalized to any rank case. We conclude with a multitude of open problems in linear and multilinear algebra that arose in our study.
Tamara G. Kolda (2015)
We consider the problem of decomposing a real-valued symmetric tensor as the sum of outer products of real-valued, pairwise orthogonal vectors. Such decompositions do not generally exist, but we show that some symmetric tensor decomposition problems can be converted to orthogonal problems following the whitening procedure proposed by Anandkumar et al. (2012). If an orthogonal decomposition of an $m$-way $n$-dimensional symmetric tensor exists, we propose a novel method to compute it that reduces to an $n \times n$ symmetric matrix eigenproblem. We provide numerical results demonstrating the effectiveness of the method.
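For intuition about that reduction (a toy 3-way example built under the assumption that an orthogonal decomposition exists; not the paper's full method), contracting such a tensor with a random vector yields a symmetric matrix whose eigenvectors recover the orthogonal factors:

import numpy as np

# illustrative toy example, not the algorithm from the paper
rng = np.random.default_rng(0)
n = 5
V, _ = np.linalg.qr(rng.standard_normal((n, n)))     # orthonormal factors v_1..v_n
lam = rng.uniform(1.0, 2.0, n)                       # nonzero weights
T = np.einsum("k,ik,jk,lk->ijl", lam, V, V, V)       # T = sum_k lam_k v_k (3-fold outer product)

w = rng.standard_normal(n)
M = np.einsum("ijl,l->ij", T, w)                     # M = sum_k lam_k (w . v_k) v_k v_k^T
_, evecs = np.linalg.eigh(M)

# each eigenvector of M matches one of the v_k up to sign
print(np.allclose(np.sort(np.abs(evecs.T @ V).max(axis=1)), np.ones(n)))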