
Approximation Bounds for Random Neural Networks and Reservoir Systems

Published by: Juan-Pablo Ortega
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
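To make the setting concrete, the following minimal NumPy sketch mirrors the architecture the paper analyzes: a recurrent reservoir whose input and internal weights are drawn at random and then left untouched, with only the linear readout trained (here by ridge regression). All sizes, weight distributions, the spectral-radius rescaling, and the toy prediction task are illustrative assumptions, not choices taken from the paper.

```python
# Minimal echo state network sketch: random fixed reservoir, trained readout.
# Dimensions, distributions, and the task below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 200                          # input / reservoir sizes (assumed)
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # rescale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)       # random, untrained recurrence
        states.append(x)
    return np.array(states)

# Toy task (illustrative): one-step-ahead prediction of a scalar time series.
T = 1000
u = np.sin(0.1 * np.arange(T + 1))[:, None]
X = run_reservoir(u[:-1])                     # reservoir states, shape (T, n_res)
y = u[1:, 0]                                  # next-step targets

# Ridge-regression readout: the only trained weights in the whole system.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Because only the readout is optimized, training reduces to a single linear solve; this is what makes the random-weights approach so cheap in practice, and the paper's contribution is to bound the approximation error of exactly this setup in terms of the number of neurons and the hyperparameters.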




Read also

We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate as the complexity budget increases. Using results from classical approximation theory, we show that this class can be endowed with a (quasi-)norm that makes it a linear function space, called an approximation space. We establish that allowing the networks to have certain types of skip connections does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as the activation function) and of depth in shaping the resulting spaces. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, provided these networks are sufficiently deep.
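For readers unfamiliar with the construction, the quasi-norm alluded to above is typically defined along the following lines (a standard device from classical approximation theory; the notation here is illustrative, not quoted from the paper):

```latex
% Sigma_n: functions realizable by networks of complexity at most n.
% E(f, Sigma_n)_X: error of best approximation of f from Sigma_n in X.
\[
  E(f,\Sigma_n)_X = \inf_{g\in\Sigma_n}\|f-g\|_X ,
  \qquad
  \|f\|_{A^\alpha_q(X)}
  = \Bigl(\sum_{n\ge 1}\bigl[n^{\alpha}\,E(f,\Sigma_n)_X\bigr]^{q}\,\frac{1}{n}\Bigr)^{1/q},
\]
% with the usual modification (a supremum over n) when q is infinite; the
% approximation space A^alpha_q(X) collects all f in X with finite quasi-norm.
```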
Several recent works have shown separation results between deep neural networks and hypothesis classes with inferior approximation capacity, such as shallow networks or kernel classes. On the other hand, the fact that deep networks can efficiently express a target function does not mean that this target function can be learned efficiently by deep neural networks. In this work we study the intricate connection between learnability and approximation capacity. We show that learnability of a target function with deep networks depends on the ability of simpler classes to approximate the target. Specifically, we show that a necessary condition for a function to be learnable by gradient descent on deep neural networks is that the function can be approximated, at least in a weak sense, by shallow neural networks. We also show that a class of functions can be learned by an efficient statistical query algorithm if and only if it can be approximated in a weak sense by some kernel class. We give several examples of functions which demonstrate depth separation, and conclude that they cannot be efficiently learned, even by a hypothesis class that can efficiently approximate them.
We prove non-asymptotic stretched-exponential tail bounds on the height of a randomly sampled node in a random combinatorial tree, which we use to prove bounds on the heights and widths of random trees from a variety of models. Our results allow us to prove a conjecture and settle an open problem of Janson (https://doi.org/10.1214/11-PS188), and nearly prove another conjecture and settle another open problem from the same work (up to a polylogarithmic factor). The key tool for our work is an equivalence in law between the degrees along the path to a random node in a random tree with given degree statistics, and a random truncation of a size-biased ordering of the degrees of such a tree. We also exploit a Poissonization trick introduced by Camarri and Pitman (https://doi.org/10.1214/EJP.v5-58) in the context of inhomogeneous continuum random trees, which we adapt to the setting of random trees with fixed degrees. Finally, we propose and justify a change to the conventions of branching-process nomenclature: the name Galton-Watson trees should be permanently retired by the community and replaced with the name Bienaymé trees.
In this paper we investigate the use of machine learning for interpreting measured sensor values in sensor modules. In particular, we analyze the potential of artificial neural networks (ANNs) on low-cost micro-controllers with a few kilobytes of memory to semantically enrich data captured by sensors. The focus is on classifying temporal data series with a high level of reliability. We analyze the design and implementation of ANNs, considering both Feed-Forward Neural Networks (FFNNs) and Recurrent Neural Networks (RNNs). We validate the developed ANNs in a case study of optical hand-gesture recognition on an 8-bit micro-controller. The best reliability was found for an FFNN with two layers and 1493 parameters, requiring an execution time of 36 ms. We propose a workflow to develop ANNs for embedded devices.
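To give a sense of scale, the sketch below implements inference for a two-layer FFNN in the parameter regime the study reports (about 1.5k weights). The input window length, hidden width, number of gesture classes, and random stand-in weights are assumptions for illustration, not the authors' published architecture.

```python
# Forward pass of a tiny two-layer FFNN classifier in pure NumPy, to show how
# little state inference in this size class needs. Shapes are assumptions;
# random weights stand in for a trained model.
import numpy as np

n_in, n_hidden, n_classes = 32, 40, 5            # assumed shapes
rng = np.random.default_rng(0)                   # stand-in for trained weights
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_classes, n_hidden)), np.zeros(n_classes)

def predict(x):
    """Classify one window of n_in sensor samples."""
    h = np.maximum(W1 @ x + b1, 0.0)             # ReLU hidden layer
    z = W2 @ h + b2                              # class scores (logits)
    return int(np.argmax(z))

n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)                                  # 1525 here, same order as the reported 1493
print(predict(rng.normal(size=n_in)))            # predicted gesture class index
```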
Recognizing patients' emotions using deep learning techniques has recently attracted significant attention due to technological advancements. Automatically identifying emotions can help build smart healthcare centers that detect depression and stress among patients so that medication can be started early. Using advanced technology to identify emotions is one of the most exciting topics, as it defines the relationship between humans and machines. Machines have learned to predict emotions through a variety of methods. In this survey, we present recent research on using neural networks to recognize emotions. We focus on emotion recognition from speech, facial expressions, and audio-visual input, and show the different techniques for deploying these algorithms in the real world. These three emotion-recognition techniques can be used as a surveillance system in healthcare centers to monitor patients. We conclude the survey with a presentation of the challenges and related future work to provide insight into the applications of emotion recognition.