
A General Theory of Equivariant CNNs on Homogeneous Spaces

Published by: Taco Cohen
Publication date: 2018
Research field: Informatics engineering
Paper language: English





We present a general theory of Group equivariant Convolutional Neural Networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also consider a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? Following Mackey, we show that such maps correspond one-to-one with convolutions using equivariant kernels, and characterize the space of such kernels.
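As a concrete illustration of the group-convolution picture (a sketch under our own naming, not code from the paper), the following NumPy snippet implements the simplest interesting case: the lifting layer of a p4 G-CNN, which correlates a planar scalar field with all four 90° rotations of a single filter, producing a feature map on the group p4 = Z^2 x C4.

```python
import numpy as np

def lift_correlation(f, psi):
    """Lifting layer of a p4 G-CNN (sketch): correlate a scalar field f on Z^2
    with all four 90-degree rotations of the filter psi. The output lives on
    the group p4 = Z^2 x C4, one spatial plane per rotation channel."""
    h, w = f.shape
    k = psi.shape[0]               # odd filter size assumed
    fp = np.pad(f, k // 2)         # zero padding for 'same' output size
    out = np.zeros((4, h, w))
    for r in range(4):
        psi_r = np.rot90(psi, r)   # rotated copy of the filter
        for i in range(h):
            for j in range(w):
                out[r, i, j] = np.sum(fp[i:i+k, j:j+k] * psi_r)
    return out

# Equivariance check: rotating the input cyclically shifts the rotation
# channels and rotates each plane, the property the theory formalizes.
f, psi = np.random.rand(8, 8), np.random.rand(3, 3)
a = lift_correlation(np.rot90(f), psi)
b = lift_correlation(f, psi)
assert np.allclose(a, np.rot90(np.roll(b, 1, axis=0), axes=(1, 2)))
```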


Read also

We prove the conjectures of Graham-Kumar and Griffeth-Ram concerning the alternation of signs in the structure constants for torus-equivariant K-theory of generalized flag varieties G/P. These results are immediate consequences of an equivariant homological Kleiman transversality principle for the Borel mixing spaces of homogeneous spaces, and their subvarieties, under a natural group action with finitely many orbits. The computation of the coefficients in the expansion of the equivariant K-class of a subvariety in terms of Schubert classes is reduced to an Euler characteristic using the homological transversality theorem for non-transitive group actions due to S. Sierra. A vanishing theorem, when the subvariety has rational singularities, shows that the Euler characteristic is a sum of at most one term--the top one--with a well-defined sign. The vanishing is proved by suitably modifying a geometric argument due to M. Brion in ordinary K-theory that brings Kawamata-Viehweg vanishing to bear.
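For readers outside the area, the non-equivariant prototype of this sign pattern is M. Brion's theorem in ordinary K-theory, which the abstract's last sentence builds on; stating it in symbols makes the "alternation of signs" concrete (this is standard background notation, not a formula quoted from the paper):

```latex
% Expanding a product of Schubert structure sheaves in K(G/P),
% the structure constants alternate in sign with \ell(w)-\ell(u)-\ell(v):
\[
  [\mathcal{O}_{X_u}] \cdot [\mathcal{O}_{X_v}]
    \;=\; \sum_{w} c_{u,v}^{w}\,[\mathcal{O}_{X_w}],
  \qquad
  (-1)^{\ell(w)-\ell(u)-\ell(v)}\, c_{u,v}^{w} \;\geq\; 0 .
\]
```

The conjectures proved in the paper assert the analogous alternation for the torus-equivariant constants, which are representation-theoretic expressions rather than integers.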
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
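A minimal sketch of the parallel-transport message passing step, with illustrative names and a toy anisotropic kernel (the paper's actual parameterization differs): each vertex holds a tangent-vector feature in its own gauge, and a neighbor's feature must be rotated by the edge's transport angle before it can be mixed in.

```python
import numpy as np

def rot(theta):
    """2x2 rotation: change of gauge / parallel transport in the tangent plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def transport_message_passing(feat, nbrs, transport, edge_dir, w_self, w_nbr):
    """feat:      (V, 2) tangent-vector feature per vertex, in its local gauge
    nbrs:      dict v -> list of neighbor vertex ids
    transport: dict (v, u) -> angle carrying u's gauge into v's gauge
    edge_dir:  dict (v, u) -> direction of edge (v, u) in v's gauge
    w_self, w_nbr: 2x2 weight matrices (toy stand-ins for a learned kernel)."""
    out = np.zeros_like(feat)
    for v, us in nbrs.items():
        acc = w_self @ feat[v]
        for u in us:
            # parallel transport: express u's feature in v's gauge first,
            # so the aggregation does not depend on arbitrary gauge choices
            f_u = rot(transport[(v, u)]) @ feat[u]
            # anisotropy: conjugate the kernel by the edge direction, so the
            # response depends on where the neighbor sits around v
            th = edge_dir[(v, u)]
            acc += rot(th) @ w_nbr @ rot(-th) @ f_u
        out[v] = acc / (1 + len(us))
    return out
```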
In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.
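The steerability trick fits in a few lines. Below is a hedged NumPy sketch (names and the Gaussian envelope are our own choices): the atomic filters are circular harmonics, and rotating the synthesized filter by theta amounts to multiplying each coefficient by a phase, so orientation-dependent responses need no filter interpolation.

```python
import numpy as np

def atomic_filters(size, m_list, sigma=1.5):
    """Circular-harmonic atoms psi_m(r, phi) = exp(-r^2 / 2 sigma^2) e^{i m phi}.
    Rotating psi_m by theta multiplies it by exp(-i m theta): a steerable basis."""
    c = (size - 1) / 2
    y, x = np.mgrid[0:size, 0:size] - c
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    env = np.exp(-r**2 / (2 * sigma**2))
    return {m: env * np.exp(1j * m * phi) for m in m_list}

def steer(coeffs, atoms, theta):
    """Synthesize the filter rotated by theta analytically from one weight set."""
    return sum(coeffs[m] * np.exp(-1j * m * theta) * atoms[m] for m in coeffs)

# one set of learned coefficients yields arbitrarily many exact orientations
atoms = atomic_filters(9, m_list=[0, 1, 2])
w = {0: 0.5, 1: 1.0 + 0.2j, 2: -0.3}
bank = [steer(w, atoms, t) for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
```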
Deep learning's success has been widely recognized in a variety of machine learning tasks, including image classification, audio recognition, and natural language processing. As an extension of deep learning beyond these domains, graph neural networks (GNNs) are designed to handle the non-Euclidean graph structure that is intractable to previous deep learning techniques. Existing GNNs are presented using various techniques, making direct comparison and cross-reference more complex. Although existing studies categorize GNNs into spatial-based and spectral-based techniques, there hasn't been a thorough examination of their relationship. To close this gap, this study presents a single framework that systematically incorporates most GNNs. We organize existing GNNs into spatial and spectral domains, as well as expose the connections within each domain. A review of spectral graph theory and approximation theory builds a strong relationship across the spatial and spectral domains in further investigation.
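One standard point of contact between the two domains is the graph-convolution layer of Kipf and Welling: it is derived as a first-order polynomial of the graph Laplacian (spectral view) yet computes degree-normalized neighborhood averaging (spatial view). A minimal NumPy sketch:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: spectrally, a first-order Laplacian filter;
    spatially, normalized aggregation over each node's neighborhood.
    A: (N, N) adjacency, X: (N, F) features, W: (F, F') weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2
    return np.maximum(A_norm @ X @ W, 0.0)         # propagate, transform, ReLU
```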
We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.
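The flavor of the kernel constraint is visible already in the simplest case: for scalar (type-0) input and output fields, an SE(3)-equivariant kernel must satisfy k(Rx) = k(x) for all rotations R, i.e. it can only depend on the radius. A short illustrative sketch of that special case (higher field types require the full steerable basis derived in the paper):

```python
import numpy as np

def isotropic_kernel(size, radial_profile):
    """Scalar-to-scalar SE(3) kernel constraint: k may depend on |x| only."""
    c = (size - 1) / 2
    z, y, x = np.mgrid[0:size, 0:size, 0:size] - c
    r = np.sqrt(x**2 + y**2 + z**2)
    return radial_profile(r)

# a learnable radial profile would replace this fixed Gaussian in practice
kernel = isotropic_kernel(5, lambda r: np.exp(-r**2))
```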
