
Neural Networks, Hypersurfaces, and Radon Transforms

Posted by Soheil Kolouri
Publication date: 2019
Research language: English





Connections between integration along hypersurfaces, Radon transforms, and neural networks are exploited to highlight an integral-geometric interpretation of neural networks. By analyzing the properties of neural networks as operators on probability distributions of observed data, we show that the distribution of outputs for any node in a neural network can be interpreted as a nonlinear projection along hypersurfaces defined by level surfaces over the input data space. We use these descriptions to provide new interpretations of phenomena such as nonlinearity, pooling, activation functions, and adversarial examples in neural-network-based learning problems.
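
As a toy illustration of this viewpoint (a minimal sketch, not from the paper: the data X, direction w, and activation below are hypothetical placeholders), the distribution of a single node's output σ(w·x + b) can be estimated by projecting samples of the data onto w; for a linear pre-activation, the resulting histogram approximates one slice of the Radon transform of the data density, i.e., integration over the level hyperplanes {x : w·x + b = t}.

```python
import numpy as np

def node_output_distribution(X, w, b=0.0, act=np.tanh, bins=60):
    """Empirical distribution of a single node's output act(w.x + b) over
    samples X drawn from the data distribution. For a linear pre-activation,
    the histogram of w.x + b approximates the Radon projection of the data
    density onto the direction w (integration over the hyperplanes
    {x : w.x + b = t}); the activation then reparametrizes that projection."""
    outputs = act(X @ w + b)
    hist, edges = np.histogram(outputs, bins=bins, density=True)
    return hist, edges

# toy data: samples from a 2-D Gaussian, projected along a fixed direction w
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))
w = np.array([1.0, 2.0]) / np.sqrt(5.0)
hist, edges = node_output_distribution(X, w)
```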


Read also

The scattering transform is a multilayered wavelet-based deep learning architecture that acts as a model of convolutional neural networks. Recently, several works have introduced generalizations of the scattering transform for non-Euclidean settings such as graphs. Our work builds upon these constructions by introducing windowed and non-windowed graph scattering transforms based upon a very general class of asymmetric wavelets. We show that these asymmetric graph scattering transforms have many of the same theoretical guarantees as their symmetric counterparts. This work helps bridge the gap between scattering and other graph neural networks by introducing a large family of networks with provable stability and invariance guarantees. This lays the groundwork for future deep learning architectures for graph-structured data that have learned filters and also provably have desirable theoretical properties.
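
For orientation, the sketch below is not the paper's construction: it uses standard symmetric lazy random-walk diffusion wavelets, which the asymmetric wavelets of the abstract generalize, together with a toy adjacency matrix A and node signal x, to compute non-windowed second-order graph scattering coefficients.

```python
import numpy as np

def diffusion_wavelets(A, J):
    """Graph wavelets Psi_0 = I - P and Psi_j = P^(2^(j-1)) - P^(2^j) built
    from the lazy random-walk matrix P = (I + D^-1 A) / 2; a standard
    symmetric construction used here only as a point of reference."""
    n = len(A)
    P = 0.5 * (np.eye(n) + A / A.sum(axis=1)[:, None])
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    return [np.eye(n) - powers[0]] + [powers[j - 1] - powers[j]
                                      for j in range(1, J + 1)]

def scattering_coefficients(A, x, J=3):
    """Non-windowed second-order graph scattering of a node signal x:
    averages over the nodes of x, |Psi_j x|, and |Psi_k |Psi_j x||."""
    Psi = diffusion_wavelets(A, J)
    first = [np.abs(W @ x) for W in Psi]
    coeffs = [x.mean()] + [u.mean() for u in first]
    coeffs += [np.abs(W @ u).mean() for u in first for W in Psi]
    return np.array(coeffs)

# toy example: a 5-node cycle graph with a random node signal
A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
x = np.random.default_rng(0).normal(size=5)
print(scattering_coefficients(A, x))
```
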
Neural Tangents is a library designed to enable research into infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite width as usual or in their infinite-width limit. Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel. Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space. The entire library runs out-of-the-box on CPU, GPU, or TPU. All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. Neural Tangents is available at www.github.com/google/neural-tangents. We also provide an accompanying interactive Colab notebook.
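
A minimal usage sketch following the library's documented stax-style API (exact function signatures may vary across versions; the data arrays here are toy placeholders): the architecture specification yields a closed-form infinite-width kernel, which is then used for exact inference under gradient-descent training.

```python
import numpy as np
import neural_tangents as nt
from neural_tangents import stax

# a 3-layer fully connected architecture; kernel_fn gives its infinite-width
# NNGP and NTK kernels in closed form
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# toy regression data
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(20, 8)), rng.normal(size=(20, 1))
x_test = rng.normal(size=(5, 8))

ntk_train_train = kernel_fn(x_train, x_train, 'ntk')   # exact NTK matrix

# mean prediction of an ensemble of infinitely wide networks trained to
# convergence by gradient descent on the MSE loss
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_mean = predict_fn(x_test=x_test, get='ntk')
```
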
We study the different horospherical Radon transforms that arise by regarding a homogeneous tree T as a simplicial complex whose simplices are vertices V, edges E or flags F (flags are oriented edges). The ends (infinite geodesic rays starting at a reference vertex) provide a boundary $\Omega$ for the tree. Then the horospheres form a trivial principal fiber bundle with base $\Omega$ and fiber $\mathbb{Z}$. There are three such fiber bundles, consisting of horospheres of vertices, edges or flags, but they are isomorphic: however, no isomorphism between these fiber bundles maps special sections to special sections (a special section consists of the set of horospheres through a given vertex, edge or flag). The groups of automorphisms of the fiber bundles contain a subgroup $A$ of parallel shifts, analogous to the Cartan subgroup of a semisimple group. The normalized eigenfunctions of the Laplace operator on T are boundary integrals of complex powers of the Poisson kernel, that is, characters of $A$, and are matrix coefficients of representations induced from $A$ in the sense of Mackey, the so-called spherical representations. The vertex-horospherical Radon transform consists of summation over V in each vertex-horosphere, and similarly for edges or flags. We prove inversion formulas for all these Radon transforms, and give applications to harmonic analysis and the Plancherel measure on T. We show via integral geometry that the spherical representations for vertices and edges are equivalent. Also, we define the Radon back-projections and find the inversion operator of each Radon transform by composing it with its back-projection. This gives rise to a convolution operator on T, whose symbol is obtained via the spherical Fourier transform, and its reciprocal is the symbol of the Radon inversion formula.
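
As a computational companion (a minimal sketch under one common convention for the horocycle index; the path encoding, the choice of the all-zero ray as boundary point, and the helper names are illustrative, not from the paper), the vertex-horospherical Radon transform of a finitely supported function can be computed by summing it over horospheres of a truncated homogeneous tree.

```python
from collections import defaultdict

def tree_vertices(q, R):
    """Vertices of the homogeneous tree of degree q+1 within distance R of a
    reference vertex, encoded as paths (tuples of child indices): the root
    has q+1 forward children, every other vertex has q."""
    verts, frontier = [()], [()]
    for _ in range(R):
        nxt = [p + (c,) for p in frontier
               for c in range(q + 1 if len(p) == 0 else q)]
        verts += nxt
        frontier = nxt
    return verts

def horocycle_index(p):
    """Horocycle number of a vertex with respect to the boundary point given
    by the all-zero ray: if the geodesic from the vertex to the boundary
    point leaves the ray at depth m, then h = (len(p) - m) - m."""
    m = 0
    while m < len(p) and p[m] == 0:
        m += 1
    return len(p) - 2 * m

def vertex_radon(f, q=2, R=4):
    """Vertex-horospherical Radon transform of a finitely supported function
    f (dict mapping paths to values): sum f over each horosphere."""
    Rf = defaultdict(float)
    for v in tree_vertices(q, R):
        Rf[horocycle_index(v)] += f.get(v, 0.0)
    return dict(Rf)

# toy example: f supported on the reference vertex and its three neighbours
f = {(): 1.0, (0,): 0.5, (1,): 0.25, (2,): 0.25}
print(vertex_radon(f))   # {0: 1.0, -1: 0.5, 1: 0.5}
```
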
Recurrent neural networks (RNNs) are powerful architectures for modeling sequential data, due to their capability to learn short- and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or image recognition involve multi-dimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), along with a quaternion long short-term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performance than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce the number of free parameters needed to reach better results by a factor of up to 3.3x compared to real-valued RNNs and LSTMs, leading to a more compact representation of the relevant information.
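
The core operation behind such layers is the Hamilton product of quaternions; the toy sketch below (shapes and helper names are illustrative, not the paper's implementation) shows how a quaternion dense map shares parameters across the four components of each feature, which is where the reduction in free parameters comes from.

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of quaternion arrays whose last axis holds the four
    components (r, i, j, k); broadcasts over all leading axes."""
    r1, i1, j1, k1 = np.moveaxis(q, -1, 0)
    r2, i2, j2, k2 = np.moveaxis(p, -1, 0)
    return np.stack([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,
    ], axis=-1)

def quaternion_dense(x, W):
    """Quaternion-valued dense map: x has shape (n_in, 4), W has shape
    (n_out, n_in, 4). Each output is a sum of Hamilton products, so the
    layer uses 4*n_out*n_in real parameters where an unconstrained real
    layer on the same flattened features would use 16*n_out*n_in."""
    return hamilton_product(W, x[None, :, :]).sum(axis=1)

# toy example: 3 quaternion input features mapped to 2 quaternion outputs
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
W = 0.1 * rng.normal(size=(2, 3, 4))
print(quaternion_dense(x, W).shape)   # (2, 4)
```
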
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results which demonstrate improved robustness with respect to various input perturbations, while maintaining state-of-the-art performance.
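
As a toy illustration of this viewpoint (the drift, noise scale, and shapes below are hypothetical choices, not the paper's model), a vanilla RNN with additive Gaussian noise in the hidden state can be read as an Euler-Maruyama discretization of an input-driven SDE.

```python
import numpy as np

def noisy_rnn(xs, Wh, Wx, b, sigma=0.1, dt=1.0, seed=0):
    """Vanilla RNN with Gaussian noise injected into the hidden state, read
    as an Euler-Maruyama discretization of the SDE
        dh = (tanh(Wh h + Wx x_t + b) - h) dt + sigma dW_t.
    The drift is one illustrative choice; general drifts and noise-injection
    schemes fit the same pattern."""
    rng = np.random.default_rng(seed)
    h, hs = np.zeros(Wh.shape[0]), []
    for x in xs:
        drift = np.tanh(Wh @ h + Wx @ x + b) - h
        h = h + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=h.shape)
        hs.append(h)
    return np.array(hs)

# toy example: a length-20 sequence of 4-D inputs, 8-D hidden state
rng = np.random.default_rng(1)
Wh, Wx, b = 0.3 * rng.normal(size=(8, 8)), 0.3 * rng.normal(size=(8, 4)), np.zeros(8)
states = noisy_rnn(rng.normal(size=(20, 4)), Wh, Wx, b)   # shape (20, 8)
```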
