
Kernel approximation on algebraic varieties

Posted by Jason Altschuler
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Low-rank approximation of kernels is a fundamental mathematical problem with widespread algorithmic applications. Often the kernel is restricted to an algebraic variety, e.g., in problems involving sparse or low-rank data. We show that significantly better approximations are obtainable in this setting: the rank required to achieve a given error depends on the variety's dimension rather than the ambient dimension, which is typically much larger. This is true in both high-precision and high-dimensional regimes. Our results are presented for smooth isotropic kernels, the predominant class of kernels used in applications. Our main technical insight is to approximate smooth kernels by polynomial kernels, and leverage two key properties of polynomial kernels that hold when they are restricted to a variety. First, their ranks decrease exponentially in the variety's co-dimension. Second, their maximum values are governed by their values over a small set of points. Together, our results provide a general approach for exploiting (approximate) algebraic structure in datasets in order to efficiently solve large-scale data science problems.
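
As a rough numerical illustration of the claim above (this is not the paper's construction; the sizes n and d, the sparsity level s, and the rank tolerance are all illustrative assumptions), the following Python sketch compares the numerical rank of a Gaussian kernel matrix on generic points in R^d against points confined to the variety of s-sparse vectors:

import numpy as np

# Illustrative sizes (assumptions, not from the paper): n points, ambient
# dimension d, variety = vectors with at most s nonzero coordinates.
rng = np.random.default_rng(0)
n, d, s = 500, 50, 3

def gaussian_kernel(X):
    # k(x, y) = exp(-||x - y||^2 / 2), a smooth isotropic kernel.
    sq = np.sum(X**2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2)

def numerical_rank(K, tol=1e-6):
    # Number of singular values above tol relative to the largest one.
    svals = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(svals > tol * svals[0]))

# Generic points filling R^d.
X_generic = rng.normal(size=(n, d)) / np.sqrt(d)

# Points on the sparse variety: s nonzeros each (a union of s-dim subspaces).
X_sparse = np.zeros((n, d))
for i in range(n):
    idx = rng.choice(d, size=s, replace=False)
    X_sparse[i, idx] = rng.normal(size=s) / np.sqrt(s)

print("numerical rank, generic data:", numerical_rank(gaussian_kernel(X_generic)))
print("numerical rank, sparse data: ", numerical_rank(gaussian_kernel(X_sparse)))

On a typical run the sparse-data rank should come out markedly smaller, consistent with the abstract's claim that the required rank scales with the variety's dimension rather than the ambient one.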




Read also

Random Fourier features is one of the most popular techniques for scaling up kernel methods, such as kernel ridge regression. However, despite impressive empirical results, the statistical properties of random Fourier features are still not well understood. In this paper we take steps toward filling this gap. Specifically, we approach random Fourier features from a spectral matrix approximation point of view, give tight bounds on the number of Fourier features required to achieve a spectral approximation, and show how spectral matrix approximation bounds imply statistical guarantees for kernel ridge regression. Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions. At the same time, we show that the method is suboptimal, and sampling from a modified distribution in Fourier space, given by the leverage function of the kernel, yields provably better performance. We study this optimal sampling distribution for the Gaussian kernel, achieving a nearly complete characterization for the case of low-dimensional bounded datasets. Based on this characterization, we propose an efficient sampling scheme with guarantees superior to random Fourier features in this regime.
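
For reference, here is a minimal sketch of the baseline random Fourier features construction for the Gaussian kernel (the leverage-based sampling the abstract advocates is not reproduced here; the bandwidth, feature count, and data are illustrative assumptions):

import numpy as np

def rff_features(X, n_features=2000, sigma=1.0, seed=0):
    # z(x) = sqrt(2/D) * cos(W^T x + b), with columns of W ~ N(0, I/sigma^2)
    # and b ~ Uniform[0, 2*pi); then z(x)^T z(y) is an unbiased estimate of
    # the Gaussian kernel exp(-||x - y||^2 / (2 * sigma^2)).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Z = rff_features(X)  # illustrative defaults
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print("max entrywise error:", np.abs(Z @ Z.T - K_exact).max())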
Coisotropic deformations of algebraic varieties are defined as those for which an ideal of the deformed variety is a Poisson ideal. It is shown that coisotropic deformations of sets of intersection points of plane quadrics, cubics and space algebraic curves are governed, in particular, by the dKP, WDVV, dVN, d2DTL equations and other integrable hydrodynamical type systems. Particular attention is paid to the study of two- and three-dimensional deformations of elliptic curves. The problem of an appropriate choice of Poisson structure is discussed.
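
For orientation, the Poisson-ideal condition in the first sentence is the standard one (a textbook definition, not a formula taken from this paper): for a Poisson algebra A with bracket {., .}, an ideal I of A is a Poisson ideal when the bracket never leads out of I,

% Standard definition (not quoted from this paper): I is closed under
% bracketing with arbitrary elements of the Poisson algebra A.
\[
  \{A, I\} \subseteq I, \qquad \text{i.e.}\quad \{f, g\} \in I \ \text{for all } f \in A,\ g \in I.
\]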
While Generative Adversarial Networks (GANs) have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, both in theory and in practice. To address this issue, we study GAN dynamics in a simple yet rich parametric model that exhibits several of the common problematic convergence behaviors such as vanishing gradients, mode collapse, and diverging or oscillatory behavior. In spite of the non-convex nature of our model, we are able to perform a rigorous theoretical analysis of its convergence behavior. Our analysis reveals an interesting dichotomy: a GAN with an optimal discriminator provably converges, while first order approximations of the discriminator steps lead to unstable GAN dynamics and mode collapse. Our result suggests that using first order discriminator steps (the de facto standard in most existing GAN setups) might be one of the factors that makes GAN training challenging in practice.
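
The paper's parametric model is not specified in this abstract, so as a hedged stand-in, the following toy sketch reproduces the flavor of the instability described: in the bilinear min-max game V(g, d) = g * d, simultaneous first-order steps for both players spiral away from the equilibrium at the origin instead of converging:

# Toy stand-in, not the paper's model: generator parameter g minimizes
# V(g, d) = g * d while discriminator parameter d maximizes it. The unique
# equilibrium is (0, 0), but simultaneous first-order updates diverge.
g, d, lr = 1.0, 1.0, 0.1
for step in range(1, 201):
    grad_g, grad_d = d, g                      # dV/dg, dV/dd
    g, d = g - lr * grad_g, d + lr * grad_d    # descent for g, ascent for d
    if step % 50 == 0:
        print(f"step {step:3d}: distance from equilibrium = {(g*g + d*d) ** 0.5:.3f}")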
Amitabha Bagchi (2020)
These lecture notes endeavour to collect in one place the mathematical background required to understand the properties of kernels in general and the Random Fourier Features approximation of Rahimi and Recht (NIPS 2007) in particular. We briefly motivate the use of kernels in Machine Learning with the example of the support vector machine. We discuss positive definite and conditionally negative definite kernels in some detail. After a brief discussion of Hilbert spaces, including the Reproducing Kernel Hilbert Space construction, we present Mercer's theorem. We discuss the Random Fourier Features technique and then present, with proofs, scalar and matrix concentration results that help us estimate the error incurred by the technique. These notes are the transcription of 10 lectures given at IIT Delhi between January and April 2020.
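
For orientation, the result underpinning the Random Fourier Features technique covered in these notes is Bochner's theorem (standard material, stated here for convenience rather than quoted from the notes):

% Bochner's theorem: a continuous shift-invariant kernel
% k(x, y) = \kappa(x - y) on R^d is positive definite if and only if
% \kappa is the Fourier transform of a finite non-negative measure \mu.
\[
  \kappa(x - y) \;=\; \int_{\mathbb{R}^d} e^{\, i\, \omega^{\top} (x - y)} \,\mathrm{d}\mu(\omega).
\]

When \mu is normalized to a probability measure, sampling \omega from \mu together with a uniform phase yields the unbiased cosine features of Rahimi and Recht, and concentration results of the kind the notes present quantify how many samples are needed for a given error.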
Heinrich Massold (2016)
Because of its ineffectiveness, the usual arithmetic Hilbert-Samuel formula is not applicable in the context of Diophantine approximation. In order to overcome this difficulty, the present paper gives explicit estimates for arithmetic Hilbert functions of closed subvarieties in projective space.

