
Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds

Published by Matthew Hirn
Publication date: 2019
Research language: English





The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of convolutional neural networks. Inspired by recent interest in geometric deep learning, which aims to generalize convolutional neural networks to manifold- and graph-structured domains, we define a geometric scattering transform on manifolds. Like the Euclidean scattering transform, the geometric scattering transform is based on a cascade of wavelet filters and pointwise nonlinearities. It is invariant to local isometries and stable to certain types of diffeomorphisms. Empirical results demonstrate its utility on several geometric learning tasks. Our results generalize the deformation stability and local translation invariance of Euclidean scattering, and demonstrate the importance of linking the filter structures used to the underlying geometry of the data.
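As a concrete illustration of such a cascade (a minimal sketch, not the authors' implementation), one can approximate the Laplace-Beltrami operator of a sampled manifold by a symmetric graph Laplacian and define wavelets as spectral multipliers $\psi_j(\lambda) = g(2^{-j}\lambda)$; the choice of $g$, the low-pass operator, and all names below are assumptions made for the sketch.

    import numpy as np

    def spectral_wavelets(L, num_scales, g=lambda lam: lam * np.exp(-lam)):
        # Eigendecompose a symmetric graph Laplacian approximating the
        # Laplace-Beltrami operator, then build dyadic spectral multipliers
        # psi_j(lambda) = g(2**-j * lambda) as dense filter matrices.
        lam, V = np.linalg.eigh(L)
        filters = [(V * g(2.0 ** (-j) * lam)) @ V.T for j in range(num_scales)]
        lowpass = (V * np.exp(-lam)) @ V.T  # averaging operator
        return filters, lowpass

    def scattering(f, filters, lowpass, depth=2):
        # Cascade of wavelet filters and pointwise modulus nonlinearities,
        # |psi_j2 |psi_j1 f||, with each layer averaged by the low-pass
        # operator to produce scattering coefficients of orders 0..depth.
        layers, coeffs = [f], [lowpass @ f]
        for _ in range(depth):
            next_layers = []
            for u in layers:
                for W in filters:
                    v = np.abs(W @ u)
                    next_layers.append(v)
                    coeffs.append(lowpass @ v)
            layers = next_layers
        return np.stack(coeffs)

The modulus after each wavelet is what makes the representation nonlinear, while the final averaging is what trades spatial resolution for the local invariance described above.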




Read also

The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of the success of convolutional neural networks (ConvNets) in image data analysis and other tasks. Inspired by recent interest in geometric deep learning, which aims to generalize ConvNets to manifold- and graph-structured domains, we generalize the scattering transform to compact manifolds. Similar to the Euclidean scattering transform, our geometric scattering transform is based on a cascade of designed filters and pointwise nonlinearities, which enables rigorous analysis of the feature extraction provided by scattering layers. Our main focus here is on the theoretical understanding of this geometric scattering network, while setting aside implementation aspects, although we remark that the application of similar transforms to graph data analysis has been studied recently in related work. Our results establish conditions under which geometric scattering provides localized isometry-invariant descriptions of manifold signals, which are also stable to families of diffeomorphisms formulated in intrinsic manifold terms. These results not only generalize the deformation stability and local roto-translation invariance of Euclidean scattering, but also demonstrate the importance of linking the filter structures used (e.g., in geometric deep learning) to the underlying manifold geometry, or the data geometry it represents.
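Schematically, writing $\Psi_j$ for the wavelet operators built from the spectrum of the Laplace-Beltrami operator and $A$ for a low-pass averaging operator (notation assumed here for illustration, not quoted from the paper), the order-$m$ scattering coefficients take the form

    $S[j_1, \dots, j_m] f = A \, \big| \Psi_{j_m} \cdots \big| \Psi_{j_2} | \Psi_{j_1} f | \big| \cdots \big|,$

and the invariance and stability statements concern how such coefficients change when the signal $f$ is pulled back along a local isometry or along a diffeomorphism close to one.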
We study the propagator of the wave equation on a closed Riemannian manifold $M$. We propose a geometric approach to the construction of the propagator as a single oscillatory integral, global both in space and in time, with a distinguished complex-valued phase function. This enables us to provide a global invariant definition of the full symbol of the propagator, a scalar function on the cotangent bundle, and an algorithm for the explicit calculation of its homogeneous components. The central part of the paper is devoted to the detailed analysis of the subprincipal symbol; in particular, we derive its explicit small-time asymptotic expansion. We present a general geometric construction that allows one to visualise topological obstructions and describe their circumvention with the use of a complex-valued phase function. We illustrate the general framework with explicit examples in dimension two.
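In schematic form (all notation here is assumed for illustration, not quoted from the paper), such a global construction seeks to write the solution of the wave equation with initial data $u_0$ as a single oscillatory integral

    $u(t, x) = \int_{T^*M} e^{i\varphi(t, x; y, \eta)} \, a(t; y, \eta) \, w(t, x; y, \eta) \, u_0(y) \, dy \, d\eta,$

valid for all $x \in M$ and all $t$, with one distinguished complex-valued phase function $\varphi$ and a weight factor $w$; the full symbol is then the scalar amplitude $a$ on the cotangent bundle, determined order by order through an asymptotic expansion $a \sim \sum_{k \ge 0} a_{-k}$ into components of decreasing homogeneity.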
The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.
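A minimal sketch of the pseudo-Riemannian ingredient, in flat Minkowski coordinates with signature $(-,+,\dots,+)$; the edge score combining the squared interval with a preferred time direction is a hypothetical stand-in for the paper's likelihood, not its actual form.

    import numpy as np

    def minkowski_sq_interval(u, v):
        # Squared interval with signature (-,+,...,+); negative values
        # mean the two points are timelike-separated.
        d = v - u
        return -d[0] ** 2 + np.dot(d[1:], d[1:])

    def directed_edge_logit(u, v, beta=1.0, gamma=1.0):
        # Hypothetical score for an edge u -> v: favor timelike
        # separation and a positive step along the time axis, the
        # preferred direction that breaks the u <-> v symmetry.
        return -beta * minkowski_sq_interval(u, v) + gamma * (v[0] - u[0])

    # v lies in the future light cone of u, so the edge u -> v scores
    # higher than the reversed edge v -> u.
    u = np.array([0.0, 0.0, 0.0])
    v = np.array([1.0, 0.3, 0.2])
    print(directed_edge_logit(u, v), directed_edge_logit(v, u))

The asymmetry of the score under swapping $u$ and $v$ is exactly what a Riemannian distance, being symmetric, cannot provide for directed edges.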
In this paper we develop the theory of parametric polynomial regression in Riemannian manifolds and Lie groups. We show an application of Riemannian polynomial regression to shape analysis in Kendall shape space. We present results showing the power of polynomial regression on the classic rat skull growth data of Bookstein, as well as on the analysis of the shape changes associated with aging of the corpus callosum from the OASIS Alzheimer's study.
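For intuition, the degree-1 (geodesic) case of such regression on the unit sphere reduces to the exponential and logarithm maps sketched below; higher-degree Riemannian polynomials replace the geodesic with curves whose higher covariant derivatives of velocity vanish. The function names are illustrative, not taken from the paper.

    import numpy as np

    def sphere_exp(p, v):
        # Exponential map on the unit sphere: follow the geodesic
        # leaving p with tangent velocity v for unit time.
        nv = np.linalg.norm(v)
        if nv < 1e-12:
            return p
        return np.cos(nv) * p + np.sin(nv) * (v / nv)

    def sphere_log(p, q):
        # Riemannian logarithm: the tangent vector at p pointing toward
        # q whose length is the geodesic distance from p to q.
        w = q - np.dot(p, q) * p
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            return np.zeros_like(p)
        return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * (w / nw)

    def geodesic_curve(p0, v0, ts):
        # A degree-1 Riemannian polynomial: gamma(t) = Exp_{p0}(t * v0).
        return np.array([sphere_exp(p0, t * v0) for t in ts])

Fitting then amounts to choosing $p_0$ and $v_0$ (and higher-order coefficients in the polynomial case) to minimize the sum of squared geodesic distances to the data points.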
Convolutional Neural Networks (CNNs) have been applied to data with underlying non-Euclidean structures and have achieved impressive successes. This brings the stability analysis of CNNs on non-Euclidean domains into notice, because CNNs have been proved stable on Euclidean domains. This paper focuses on the stability of CNNs on Riemannian manifolds. By taking the Laplace-Beltrami operator into consideration, we construct an $\alpha$-frequency difference threshold filter to help separate the spectrum of the operator, which is infinite-dimensional. We further construct a manifold neural network architecture with these filters. We prove that both the manifold filters and the neural networks are stable under absolute perturbations to the operators. The results also imply a trade-off between the stability and discriminability of manifold neural networks. Finally, we verify our conclusions with numerical experiments in a wireless ad hoc network scenario.
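On a discretized manifold, such spectral filters reduce to functions of a symmetric Laplacian applied in its eigenbasis; the sketch below, including the gap-based partition standing in for the $\alpha$-frequency difference threshold, is an assumption-laden illustration rather than the paper's construction.

    import numpy as np

    def manifold_filter(L, x, h):
        # Apply a spectral filter h(lambda) as a function of a symmetric
        # Laplacian L that discretizes the Laplace-Beltrami operator:
        # y = V h(Lambda) V^T x, where L = V Lambda V^T.
        lam, V = np.linalg.eigh(L)
        return V @ (h(lam) * (V.T @ x))

    def gap_partition(lam, alpha):
        # Split the (sorted) spectrum wherever consecutive eigenvalues
        # differ by more than alpha: a simple stand-in for a frequency
        # difference threshold taming an infinite-dimensional spectrum.
        groups, start = [], 0
        for i in range(1, len(lam)):
            if lam[i] - lam[i - 1] > alpha:
                groups.append((start, i))
                start = i
        groups.append((start, len(lam)))
        return groups

Grouping eigenvalues that are less than $\alpha$ apart lets a filter respond to each group as a unit, which is the kind of spectral coarsening that makes a stability-discriminability trade-off explicit.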
