
A block-sparse Tensor Train Format for sample-efficient high-dimensional Polynomial Regression

Posted by Philipp Trunschke
Publication date: 2021
Research field: Informatics engineering
Paper language: English





Low-rank tensors are an established framework for high-dimensional least-squares problems. We propose to extend this framework by including the concept of block-sparsity. In the context of polynomial regression each sparsity pattern corresponds to some subspace of homogeneous multivariate polynomials. This allows us to adapt the ansatz space to align better with known sample complexity results. The resulting method is tested in numerical experiments and demonstrates improved computational resource utilization and sample efficiency.
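The block-sparsity idea can be illustrated in a few lines. The sketch below is our own illustration, not the authors' implementation: every TT rank index carries a (here randomly assigned, hypothetical) degree label, and core entries that would violate degree consistency are masked to zero, so the contracted model is a homogeneous polynomial of a prescribed total degree.

```python
import numpy as np

# Minimal sketch (not the authors' code): a tensor-train (TT) model for
# multivariate polynomial regression. Each core carries a "degree" label on
# its rank indices; block-sparsity means a core entry G[a, j, b] may be
# nonzero only if deg(b) == deg(a) + j, so the full contraction represents
# a polynomial of exactly the target total degree.

d, p, r = 6, 3, 4           # variables, local basis size (1, x, x^2), TT rank
target_degree = 2           # restrict the ansatz to homogeneous degree 2

rng = np.random.default_rng(0)

# Assign a degree label to every rank index (hypothetical choice for the sketch).
rank_degrees = [np.zeros(1, dtype=int)]            # left boundary: degree 0
for _ in range(d - 1):
    rank_degrees.append(rng.integers(0, target_degree + 1, size=r))
rank_degrees.append(np.array([target_degree]))     # right boundary: total degree

cores = []
for k in range(d):
    rl, rr = len(rank_degrees[k]), len(rank_degrees[k + 1])
    G = rng.standard_normal((rl, p, rr))
    # Block-sparsity mask: keep G[a, j, b] only when degrees are consistent.
    for a in range(rl):
        for j in range(p):
            for b in range(rr):
                if rank_degrees[k][a] + j != rank_degrees[k + 1][b]:
                    G[a, j, b] = 0.0
    cores.append(G)

def tt_eval(cores, x):
    """Evaluate the TT polynomial at a point x in R^d (monomial basis)."""
    v = np.ones(1)
    for k, G in enumerate(cores):
        basis = np.array([x[k] ** j for j in range(p)])   # (1, x_k, x_k^2)
        v = v @ np.einsum('ajb,j->ab', G, basis)
    return v[0]

x = rng.standard_normal(d)
# Homogeneity check: scaling x by 2 should scale the value by 2**target_degree.
print(tt_eval(cores, x), tt_eval(cores, 2 * x))   # second ≈ 4 times the first
```

In an actual regression one would fit the unmasked entries of the cores to data, e.g. by alternating least squares restricted to each sparsity pattern; the sketch only shows how the pattern enforces homogeneity.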



Read also

This paper is concerned with improving the empirical convergence speed of block-coordinate descent algorithms for approximate nonnegative tensor factorization (NTF). We propose an extrapolation strategy in between block updates, referred to as heuristic extrapolation with restarts (HER). HER significantly accelerates the empirical convergence speed of most existing block-coordinate algorithms for dense NTF, in particular in challenging computational scenarios, while requiring a negligible additional computational budget.
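The HER idea can be sketched compactly. The code below is our simplification, shown on two-block nonnegative matrix factorization rather than NTF; the update rule and the beta schedule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Sketch of HER-style extrapolation between block updates, on two-block
# nonnegative matrix factorization: extrapolate each block after its update,
# use the extrapolated ("ghost") variables in the next block's update, and
# restart (discard extrapolation, shrink beta) whenever the error increases.

rng = np.random.default_rng(1)
M = np.abs(rng.standard_normal((60, 40)))      # data to factorize, M ≈ W @ H
W = np.abs(rng.standard_normal((60, 5)))
H = np.abs(rng.standard_normal((5, 40)))
W_ex, H_ex = W.copy(), H.copy()                # extrapolated variables

def nnls_step(A, B, X):
    """One projected-gradient step for min ||A - B @ X||_F^2 s.t. X >= 0."""
    G = B.T @ (B @ X - A)
    step = 1.0 / (np.linalg.norm(B.T @ B, 2) + 1e-12)
    return np.maximum(X - step * G, 0.0)

beta, beta_max, err_prev = 0.5, 1.0, np.inf
for it in range(200):
    # Each block update sees the *extrapolated* partner block (the HER idea).
    W_new = nnls_step(M.T, H_ex.T, W.T).T
    W_ex = W_new + beta * (W_new - W)          # extrapolate in between blocks
    H_new = nnls_step(M, W_ex, H)
    H_ex = H_new + beta * (H_new - H)
    err = np.linalg.norm(M - W_new @ H_new)
    if err > err_prev:                         # restart: drop extrapolation,
        W_ex, H_ex, beta = W_new, H_new, beta * 0.7   # shrink beta
    else:
        beta = min(beta * 1.05, beta_max)      # grow beta while improving
    W, H, err_prev = W_new, H_new, err

print(f"relative error: {err / np.linalg.norm(M):.3f}")
```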
In this work we propose an efficient black-box solver for two-dimensional stationary diffusion equations, based on a new robust discretization scheme. The idea is to formulate the equation in a derivative-free form with a non-local stencil, which leads to a linear system with a dense matrix. This matrix and the right-hand side are represented in a low-rank parametric format -- the quantized tensor train (QTT) format -- and all operations are then performed with logarithmic complexity and memory consumption. Hence very fine grids can be used, and very accurate solutions with extremely high spatial resolution can be obtained. Numerical experiments show that this formulation gives accurate results and can be used with up to $2^{60}$ grid points without conditioning problems, while the total computational time is around several seconds.
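The quantization trick behind the QTT format can be demonstrated in isolation. The sketch below is our illustration, not the paper's solver: samples of a smooth function on a grid with 2**d points are reshaped into a 2 x 2 x ... x 2 tensor and compressed by TT-SVD, which is why enormous grids become representable.

```python
import numpy as np

# Quantized tensor train (QTT) compression of a 1D field via TT-SVD.
# The ranks stay small for smooth data, so storage is logarithmic in the
# grid size; this is the mechanism that makes 2**60-point grids feasible.

d = 20                                  # 2**20 ≈ 10^6 grid points
t = np.linspace(0.0, 1.0, 2 ** d)
v = np.exp(-t) * np.sin(8 * np.pi * t)  # a smooth 1D field

def tt_svd(tensor, eps=1e-10):
    """Compress an n_1 x ... x n_d tensor into TT cores by successive SVDs."""
    cores, ranks = [], [1]
    C = np.asarray(tensor)
    for n in tensor.shape[:-1]:
        C = C.reshape(ranks[-1] * n, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncate small singular values
        cores.append(U[:, :r].reshape(ranks[-1], n, r))
        ranks.append(r)
        C = s[:r, None] * Vt[:r]
    cores.append(C.reshape(ranks[-1], tensor.shape[-1], 1))
    return cores, ranks + [1]

cores, ranks = tt_svd(v.reshape((2,) * d))
print("QTT ranks:", ranks)              # stays small despite 2**20 entries
print("storage:", sum(c.size for c in cores), "vs", v.size)
```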
The combination of low-rank tensor techniques and methods based on the fast Fourier transform (FFT) has proven prominent in accelerating statistical operations such as Kriging, computing conditional covariances, and geostatistical optimal design. However, approximating a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust tensor train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. We show that the computational complexity of Kriging is thereby reduced to $\mathcal{O}(d r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank $r$ remains stable as $n$ and $d$ increase. The advantages of this approach over plain FFT-based methods are demonstrated in synthetic and real data examples.
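The FFT ingredient of this approach is easy to show in one dimension; the TT part, which is the paper's contribution, is omitted here. The sketch below (our illustration under simplifying assumptions) embeds a stationary Toeplitz covariance matrix in a circulant one, so covariance matrix-vector products cost O(n log n) instead of O(n^2).

```python
import numpy as np
from scipy.linalg import toeplitz       # only used for the dense reference check

# A stationary covariance on a regular 1D grid yields a symmetric Toeplitz
# matrix C. Embedding C in a circulant matrix lets us compute C @ x by FFT.

n = 2 ** 10
h = np.arange(n) / n
cov = np.exp(-(h / 0.1) ** 2)           # first row of C (Gaussian covariance)

def toeplitz_matvec_fft(first_row, x):
    """Multiply the symmetric Toeplitz matrix defined by first_row with x."""
    m = len(first_row)
    # Circulant embedding: [c_0, c_1, ..., c_{n-1}, c_{n-2}, ..., c_1]
    c = np.concatenate([first_row, first_row[-2:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c))).real
    return y[:m]

x = np.random.default_rng(2).standard_normal(n)
y_fft = toeplitz_matvec_fft(cov, x)
y_dense = toeplitz(cov) @ x             # O(n^2) reference
print(np.max(np.abs(y_fft - y_dense)))  # agreement up to rounding
```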
The tensor train (TT) approximation of electronic wave functions lies at the core of the QC-DMRG (Quantum Chemistry Density Matrix Renormalization Group) method, a recent state-of-the-art method for numerically solving the $N$-electron Schrödinger equation. It is well known that the accuracy of TT approximations is governed by the tail of the associated singular values, which in turn strongly depends on the ordering of the one-body basis. Here we find that the singular values $s_1 \ge s_2 \ge \dots \ge s_d$ of tensors representing ground states of noninteracting Hamiltonians possess a surprising inversion symmetry, $s_1 s_d = s_2 s_{d-1} = s_3 s_{d-2} = \dots$, thus reducing the tail behaviour to a single hidden invariant, which moreover depends explicitly on the ordering of the basis. For correlated wave functions, we find that the tail is upper bounded by a suitable superposition of the invariants. Optimizing the invariants or their superposition thus provides a new ordering scheme for QC-DMRG. Numerical tests on simple examples, i.e. linear combinations of a few Slater determinants, show that the new scheme reduces the tail of the singular values by several orders of magnitude over existing methods, including the widely used Fiedler order.
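The reduction to a single invariant can be spelled out explicitly (in notation we assume from the abstract). Writing the common value of the products as
\[
  s_1 s_d = s_2 s_{d-1} = \dots = s_k \, s_{d+1-k} =: c ,
\]
each tail value is tied to a head value via
\[
  s_{d+1-k} = \frac{c}{s_k}, \qquad k = 1, \dots, \lfloor d/2 \rfloor ,
\]
so any bound on the decay of the tail reduces to a statement about the single number $c$, which in turn depends on the chosen basis ordering.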
Recently, deep learning methods have been used to solve forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs), with good accuracy and performance for high-dimensional problems. In this paper, we solve fully coupled FBSDEs through deep learning and provide three algorithms. Several numerical results show remarkable performance, especially for high-dimensional cases.
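A minimal sketch of a deep-BSDE-style solver in this spirit follows; it is not the paper's three algorithms, and the equation, network, and hyperparameters are illustrative assumptions. The initial value $Y_0$ is a trainable scalar, $Z_t$ is a small network, and the loss penalizes the terminal mismatch.

```python
import torch

# Deep-BSDE-style sketch: solve dX = dW,  dY = Z dW (driver f = 0),
# Y_T = g(X_T) with g(x) = ||x||^2. The exact solution of the associated
# heat equation gives Y_0 = ||x_0||^2 + d*T, so we can check the result.

d, T, N, batch = 10, 1.0, 20, 256
dt = T / N
x0 = torch.zeros(d)

y0 = torch.nn.Parameter(torch.tensor(0.0))          # trainable Y_0 estimate
z_net = torch.nn.Sequential(                        # Z_t ≈ z_net(t, X_t)
    torch.nn.Linear(d + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, d))
opt = torch.optim.Adam([y0, *z_net.parameters()], lr=5e-2)

for step in range(1000):
    X = x0.expand(batch, d).clone()
    Y = y0.expand(batch)
    for n in range(N):                              # Euler-Maruyama forward
        t = torch.full((batch, 1), n * dt)
        dW = torch.randn(batch, d) * dt ** 0.5
        Z = z_net(torch.cat([t, X], dim=1))
        Y = Y + (Z * dW).sum(dim=1)                 # dY = Z dW since f = 0
        X = X + dW
    loss = ((Y - (X ** 2).sum(dim=1)) ** 2).mean()  # match terminal condition
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(y0), "expected:", float((x0 ** 2).sum() + d * T))  # ≈ 10.0
```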
