
Universal approximation of symmetric and anti-symmetric functions

Published by Linfeng Zhang
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy $\epsilon$. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds. Moreover, we also discuss neural network architectures that can be suitable for approximating symmetric and anti-symmetric functions.
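As a rough illustration of the kind of architectures the abstract alludes to (a common construction in this literature, not necessarily the paper's specific one), a symmetric network can pool per-particle features with a sum, while an anti-symmetric ansatz can use a Slater-style determinant; the feature map `phi` below is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-particle feature map phi: R^d -> R^m (one random layer).
d, m, n = 3, 8, 5                              # particle dim, feature dim, particle count
W, b = rng.normal(size=(m, d)), rng.normal(size=m)

def phi(x):
    return np.tanh(W @ x + b)

def symmetric_net(X):
    """Permutation-invariant ansatz: rho(sum_i phi(x_i)), DeepSets-style."""
    pooled = sum(phi(x) for x in X)            # summation erases the ordering
    return np.tanh(pooled).sum()               # rho: a simple readout

def antisymmetric_net(X):
    """Slater-determinant-style ansatz: det[phi(x_i)_j] flips sign under swaps."""
    Phi = np.stack([phi(x)[:len(X)] for x in X])   # n x n matrix of "orbitals"
    return np.linalg.det(Phi)

X = rng.normal(size=(n, d))
Xswap = X[[1, 0, 2, 3, 4]]                     # swap particles 0 and 1
print(symmetric_net(X) - symmetric_net(Xswap))         # ~0: invariant
print(antisymmetric_net(X) + antisymmetric_net(Xswap)) # ~0: sign flip
```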




Read also

Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense. Thus algorithms to solve systems with such a matrix can be very costly. A core idea to reduce computational complexity is to approximate the matrix by one with a low rank. The optimal and well understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination but they require pivoting. We will show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials of the eigenvalues. We discuss some new and old bounds and include several examples where an expected error norm can be computed exactly.
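To make the ingredients concrete (this sketches the building blocks only, not the paper's averaged-error formula itself), the snippet below forms the optimal rank-k approximation from the eigendecomposition and evaluates the kind of elementary-symmetric-polynomial ratio the abstract mentions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Random PSD matrix, standing in for a kernel or normal matrix.
n, k = 6, 2
B = rng.normal(size=(n, n))
A = B @ B.T

# Optimal rank-k approximation from the eigendecomposition.
w, V = np.linalg.eigh(A)                       # eigenvalues in ascending order
A_k = (V[:, -k:] * w[-k:]) @ V[:, -k:].T
print(np.linalg.norm(A - A_k, 'fro')**2, sum(w[:-k]**2))  # equal for PSD A

# Elementary symmetric polynomials e_j of the eigenvalues; the paper's
# volume-sampling error formulas are built from ratios of these.
def esp(vals, j):
    return sum(np.prod(c) for c in combinations(vals, j))

print(esp(w, k + 1) / esp(w, k))               # the kind of ratio that appears
```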
We study the problem of finding orthogonal low-rank approximations of symmetric tensors. In the case of matrices, the approximation is a truncated singular value decomposition which is then symmetric. Moreover, for rank-one approximations of tensors of any dimension, a classical result proven by Banach in 1938 shows that the optimal approximation can always be chosen to be symmetric. In contrast to these results, this article shows that the corresponding statement is no longer true for orthogonal approximations of higher rank. Specifically, for any of the four common notions of tensor orthogonality used in the literature, we show that optimal orthogonal approximations of rank greater than one cannot always be chosen to be symmetric.
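The matrix baseline recalled at the start of this abstract is easy to verify numerically (the higher-rank tensor counterexamples are the paper's contribution and are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# For a symmetric matrix, the best rank-one approximation (truncated SVD)
# is itself symmetric, mirroring the matrix case recalled in the abstract.
B = rng.normal(size=(5, 5))
A = B + B.T

U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])           # best rank-one approximation
print(np.linalg.norm(A1 - A1.T))               # ~0: the optimizer is symmetric
```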
Jiawang Nie, Ke Ye, Lihong Zhi (2020)
This paper discusses the problem of symmetric tensor decomposition on a given variety $X$: decomposing a symmetric tensor into the sum of tensor powers of vectors contained in $X$. We first study geometric and algebraic properties of such decomposable tensors, which are crucial to the practical computations of such decompositions. For a given tensor, we also develop a criterion for the existence of a symmetric decomposition on $X$. Secondly and most importantly, we propose a method for computing symmetric tensor decompositions on an arbitrary $X$. As a specific application, Vandermonde decompositions for nonsymmetric tensors can be computed by the proposed algorithm.
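For orientation, the object being decomposed looks as follows; the vectors v_i here are generic vectors standing in for points of the variety $X$, and the paper's actual existence criterion and algorithm are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# The target of the decomposition: T = sum_i lam_i * v_i^{(x)3}, with each
# v_i drawn from some variety X (generic vectors here, as a toy stand-in).
n, r = 4, 2
lam = rng.normal(size=r)
V = rng.normal(size=(r, n))

T = sum(l * np.einsum('i,j,k->ijk', v, v, v) for l, v in zip(lam, V))

# Symmetry check: T is invariant under any permutation of its three indices.
print(np.allclose(T, T.transpose(1, 0, 2)), np.allclose(T, T.transpose(2, 1, 0)))
```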
In this note, we show a sublinear nonergodic convergence rate for the algorithm developed in [Bai, et al. Generalized symmetric ADMM for separable convex optimization. Comput. Optim. Appl. 70, 129-170 (2018)], as well as its linear convergence under the assumptions that the sub-differential of each component objective function is piecewise linear and all the constraint sets are polyhedra. These convergence results are established for stepsize parameters of the dual variables belonging to a special isosceles-triangle region, which aims to strengthen our understanding of the convergence of the generalized symmetric ADMM.
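The update pattern the stepsize parameters refer to, with the dual variable updated both between and after the two primal blocks, can be sketched on a toy problem; the problem, the closed-form updates, and the stepsize values below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Toy symmetric-ADMM iteration for
#   min 0.5*||x - c||^2 + 0.5*||y - d||^2  s.t.  x - y = 0,
# whose solution is x = y = (c + d) / 2.
c, d = np.array([1.0, 3.0]), np.array([5.0, -1.0])
beta, tau, sigma = 1.0, 0.9, 0.9               # stepsize choices are assumptions
x = y = lam = np.zeros(2)

for _ in range(200):
    x = (c + lam + beta * y) / (1 + beta)      # x-block (closed form)
    lam = lam - tau * beta * (x - y)           # intermediate dual update
    y = (d - lam + beta * x) / (1 + beta)      # y-block (closed form)
    lam = lam - sigma * beta * (x - y)         # final dual update

print(x, y, (c + d) / 2)                       # all ~ [3., 1.]
```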
Tamara G. Kolda (2015)
We consider the problem of decomposing a real-valued symmetric tensor as the sum of outer products of real-valued, pairwise orthogonal vectors. Such decompositions do not generally exist, but we show that some symmetric tensor decomposition problems can be converted to orthogonal problems following the whitening procedure proposed by Anandkumar et al. (2012). If an orthogonal decomposition of an $m$-way $n$-dimensional symmetric tensor exists, we propose a novel method to compute it that reduces to an $n \times n$ symmetric matrix eigenproblem. We provide numerical results demonstrating the effectiveness of the method.
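Skipping the whitening step (the part adapted from Anandkumar et al.) and starting directly from a tensor with a known orthogonal decomposition, one simple route consistent with the abstract's reduction to an $n \times n$ symmetric matrix eigenproblem is to contract the tensor with a random vector and diagonalize:

```python
import numpy as np

rng = np.random.default_rng(4)

# Build a 3-way symmetric tensor with a known orthogonal decomposition.
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # orthonormal columns v_i
lam = np.array([3.0, -2.0, 1.5, 0.7])
T = sum(l * np.einsum('i,j,k->ijk', Q[:, i], Q[:, i], Q[:, i])
        for i, l in enumerate(lam))

# Contract with a random vector: M = sum_i lam_i (v_i . x) v_i v_i^T,
# a symmetric matrix whose eigenvectors are the v_i (for generic x).
x = rng.normal(size=n)
M = np.einsum('ijk,k->ij', T, x)
_, V = np.linalg.eigh(M)

lam_rec = np.einsum('ijk,ia,ja,ka->a', T, V, V, V)      # lam_i = T(v_i,v_i,v_i)
print(np.sort(np.abs(lam_rec)), np.sort(np.abs(lam)))   # match up to order/sign
```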