
Geometric Scattering on Manifolds

Published by Michael Perlmutter
Publication date: 2018
Paper language: English





The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of the success of convolutional neural networks (ConvNets) in image data analysis and other tasks. Inspired by recent interest in geometric deep learning, which aims to generalize ConvNets to manifold and graph-structured domains, we generalize the scattering transform to compact manifolds. Similar to the Euclidean scattering transform, our geometric scattering transform is based on a cascade of designed filters and pointwise nonlinearities, which enables rigorous analysis of the feature extraction provided by scattering layers. Our main focus here is on a theoretical understanding of this geometric scattering network, while setting aside implementation aspects, although we remark that the application of similar transforms to graph data analysis has recently been studied in related work. Our results establish conditions under which geometric scattering provides localized isometry-invariant descriptions of manifold signals that are also stable to families of diffeomorphisms formulated in intrinsic manifold terms. These results not only generalize the deformation stability and local roto-translation invariance of Euclidean scattering, but also demonstrate the importance of linking the filter structures used (e.g., in geometric deep learning) to the underlying manifold geometry, or the data geometry it represents.
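For concreteness, the cascade the abstract refers to can be written in the schematic form standard for Euclidean scattering (following Mallat's construction; the notation below is illustrative rather than the paper's own):

```latex
% Scattering coefficients along a path p = (j_1, \dots, j_m):
% each layer convolves with a wavelet filter \psi_{j_k} and applies the
% pointwise modulus, and a final low-pass filter \phi_J averages the output.
S[p]f \;=\; \Big|\, \big|\, |f \ast \psi_{j_1}| \ast \psi_{j_2} \,\big| \ast \cdots \ast \psi_{j_m} \Big| \ast \phi_J .
```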




Read also

The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of convolutional neural networks. Inspired by recent interest in geometric deep learning, which aims to generalize convolutional neural networks to manifold and graph-structured domains, we define a geometric scattering transform on manifolds. Similar to the Euclidean scattering transform, the geometric scattering transform is based on a cascade of wavelet filters and pointwise nonlinearities. It is invariant to local isometries and stable to certain types of diffeomorphisms. Empirical results demonstrate its utility on several geometric learning tasks. Our results generalize the deformation stability and local translation invariance of Euclidean scattering, and demonstrate the importance of linking the filter structures used to the underlying geometry of the data.
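As a rough illustration of how such a transform might be realized numerically, the sketch below builds spectral wavelet filters from the eigendecomposition of a graph Laplacian on a point-cloud sample of a manifold and cascades them with the pointwise modulus. The function names and the specific filter bank are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

def graph_laplacian(points, sigma=0.5):
    """Unnormalized graph Laplacian of a Gaussian-kernel graph
    built on a point-cloud sample of the manifold."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

def spectral_wavelets(L, scales):
    """Band-pass filters g_j(L) = exp(-2^(j-1) L) - exp(-2^j L),
    applied through the eigendecomposition of L."""
    lam, U = np.linalg.eigh(L)
    filters = []
    for j in scales:
        g = np.exp(-2**(j - 1) * lam) - np.exp(-2**j * lam)
        filters.append(U @ np.diag(g) @ U.T)
    return filters

def scattering(f, filters, depth=2):
    """Cascade of wavelet filters and pointwise modulus; returns
    globally averaged coefficients per path, up to `depth` layers."""
    layers, coeffs = [f], [f.mean()]
    for _ in range(depth):
        layers = [np.abs(Psi @ h) for h in layers for Psi in filters]
        coeffs.extend(h.mean() for h in layers)
    return np.array(coeffs)

# Toy usage: a signal on a noisy circle sampled in R^2.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], 1) + 0.01 * np.random.randn(200, 2)
feats = scattering(np.sin(3 * theta), spectral_wavelets(graph_laplacian(pts), scales=[1, 2, 3]))
```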
We are interested in learning generative models for complex geometries described via manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational costs. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). MF also produces a CNF via a solution to the change-of-variable formula; however, unlike other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator, easy to approximate and calculate on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly as the divergence of a NN rather than as a solution of an ODE facilitates learning high-fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces and achieve significant improvements in density estimation, sample quality, and training complexity over existing CNFs on challenging synthetic geometries and real-world benchmarks from the earth and climate sciences.
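A minimal sketch of the core parameterization, assuming a plain Euclidean setting: the learned density is the prior density minus the divergence of a small vector field, with the divergence estimated here by finite differences. The tiny hand-rolled network and all names are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.1, np.zeros(2)

def u(x):
    """A tiny vector field u_theta: R^2 -> R^2 (stand-in for the NN)."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def divergence(x, eps=1e-4):
    """Finite-difference estimate of div u(x) = sum_i d u_i / d x_i."""
    div = 0.0
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        div += (u(x + e)[i] - u(x - e)[i]) / (2 * eps)
    return div

def prior_density(x):
    """Source (prior) density nu: standard Gaussian on R^2."""
    return np.exp(-0.5 * x @ x) / (2 * np.pi)

def model_density(x):
    """Moser-Flow-style density mu(x) = nu(x) - div u(x).
    Training would penalize the negative part to keep mu >= 0."""
    return prior_density(x) - divergence(x)

print(model_density(np.array([0.3, -0.7])))
```

Because the divergence is a local differential operator, evaluating the model density needs no ODE solve, which is the source of the training-cost advantage the abstract describes.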
The scattering transform is a multilayered wavelet-based deep learning architecture that acts as a model of convolutional neural networks. Recently, several works have introduced generalizations of the scattering transform for non-Euclidean settings such as graphs. Our work builds upon these constructions by introducing windowed and non-windowed graph scattering transforms based upon a very general class of asymmetric wavelets. We show that these asymmetric graph scattering transforms have many of the same theoretical guarantees as their symmetric counterparts. This work helps bridge the gap between scattering and other graph neural networks by introducing a large family of networks with provable stability and invariance guarantees. This lays the groundwork for future deep learning architectures for graph-structured data that have learned filters and also provably have desirable theoretical properties.
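To make the asymmetric-wavelet idea concrete, here is a hedged sketch in which the wavelets are differences of powers of the lazy random-walk matrix P = (I + A D^{-1})/2, which is asymmetric in general; a windowed transform would replace the global averages with localized ones. The cascade itself mirrors the manifold sketch above; only the filter bank changes. Specific choices are illustrative assumptions.

```python
import numpy as np

def lazy_walk(A):
    """Lazy random-walk matrix P = (I + A D^{-1}) / 2; asymmetric in
    general, since A D^{-1} is one-sided degree normalization."""
    Dinv = np.diag(1.0 / A.sum(0))
    return 0.5 * (np.eye(len(A)) + A @ Dinv)

def diffusion_wavelets(P, J):
    """Asymmetric diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j)."""
    return [np.linalg.matrix_power(P, 2**(j - 1))
            - np.linalg.matrix_power(P, 2**j) for j in range(1, J + 1)]

def graph_scattering(x, wavelets, depth=2):
    """Non-windowed transform: global averages of |Psi_jm ... |Psi_j1 x||."""
    layers, out = [x], [x.mean()]
    for _ in range(depth):
        layers = [np.abs(Psi @ h) for h in layers for Psi in wavelets]
        out.extend(h.mean() for h in layers)
    return np.array(out)

# Toy usage on a 6-node cycle graph.
A = np.roll(np.eye(6), 1, axis=0) + np.roll(np.eye(6), -1, axis=0)
feats = graph_scattering(np.arange(6.0), diffusion_wavelets(lazy_walk(A), J=3))
```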
We study the propagator of the wave equation on a closed Riemannian manifold $M$. We propose a geometric approach to the construction of the propagator as a single oscillatory integral, global both in space and in time, with a distinguished complex-valued phase function. This enables us to provide a global invariant definition of the full symbol of the propagator - a scalar function on the cotangent bundle - and an algorithm for the explicit calculation of its homogeneous components. The central part of the paper is devoted to the detailed analysis of the subprincipal symbol; in particular, we derive its explicit small-time asymptotic expansion. We present a general geometric construction that allows one to visualise topological obstructions and describe their circumvention with the use of a complex-valued phase function. We illustrate the general framework with explicit examples in dimension two.
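Schematically, the object under study is the Fourier-integral-operator representation of the propagator, written globally with a single complex-valued phase; the notation here is a generic sketch, not the paper's precise construction:

```latex
% Propagator as a single global oscillatory integral, with a
% distinguished complex-valued phase \varphi and full symbol a,
% where a admits an expansion into homogeneous components a_{-k}.
\big(U(t)f\big)(x) \;=\; \frac{1}{(2\pi)^n}
  \int e^{\,i\varphi(t, x;\, y, \eta)}\, a(t;\, y, \eta)\, f(y)\, dy\, d\eta ,
\qquad a \;\sim\; \sum_{k \ge 0} a_{-k}(t;\, y, \eta).
```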
We consider the regression problem of estimating functions on $\mathbb{R}^D$ but supported on a $d$-dimensional manifold $\mathcal{M} \subset \mathbb{R}^D$ with $d \ll D$. Drawing ideas from multi-resolution analysis and nonlinear approximation, we construct low-dimensional coordinates on $\mathcal{M}$ at multiple scales, and perform multiscale regression by local polynomial fitting. We propose a data-driven wavelet thresholding scheme that automatically adapts to the unknown regularity of the function, allowing for efficient estimation of functions exhibiting nonuniform regularity at different locations and scales. We analyze the generalization error of our method by proving finite sample bounds in high probability on rich classes of priors. Our estimator attains optimal learning rates (up to logarithmic factors) as if the function were defined on a known Euclidean domain of dimension $d$, instead of an unknown manifold embedded in $\mathbb{R}^D$. The implemented algorithm has quasilinear complexity in the sample size, with constants linear in $D$ and exponential in $d$. Our work therefore establishes a new framework for regression on low-dimensional sets embedded in high dimensions, with fast implementation and strong theoretical guarantees.
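As a toy illustration of the multiscale mechanism (in one dimension, with piecewise-constant fits rather than the paper's local polynomials and manifold coordinates), one can fit dyadic partitions at successive scales and keep a refinement only where the between-scale difference exceeds a threshold. Everything below is an illustrative assumption.

```python
import numpy as np

def multiscale_regress(x, y, max_scale=5, thresh=0.05):
    """Dyadic piecewise-constant multiscale fit with hard thresholding:
    refinements (differences between consecutive scales) are kept only
    where they exceed `thresh`, adapting to local regularity."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    fit = np.full_like(y, y.mean())
    for j in range(1, max_scale + 1):
        bins = np.minimum((x * 2**j).astype(int), 2**j - 1)
        finer = np.array([y[bins == b].mean() for b in bins])
        detail = finer - fit              # wavelet-like detail coefficients
        keep = np.abs(detail) > thresh    # data-driven thresholding
        fit = fit + detail * keep
    return x, fit

# Toy usage: a function smooth on the left, rough on the right of [0, 1].
rng = np.random.default_rng(1)
xs = rng.uniform(size=400)
ys = np.sin(2 * np.pi * xs) + (xs > 0.5) * np.sign(np.sin(40 * xs)) * 0.3
ys += 0.05 * rng.normal(size=400)
grid, estimate = multiscale_regress(xs, ys)
```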

