
From complex to simple: hierarchical free-energy landscape renormalized in deep neural networks

Published by: Hajime Yoshino
Publication date: 2019
Research field: Physics
Paper language: English
Author: Hajime Yoshino





We develop a statistical mechanical approach based on the replica method to study the design space of deep and wide neural networks constrained to meet a large number of training data. Specifically, we analyze the configuration space of the synaptic weights and neurons in the hidden layers of a simple feed-forward perceptron network for two scenarios: a setting with random inputs/outputs and a teacher-student setting. As the strength of the constraints increases, i.e., as the number of training data grows, successive second-order glass transitions (random inputs/outputs) or second-order crystalline transitions (teacher-student setting) take place layer by layer, starting next to the input/output boundaries and moving deeper into the bulk, with the thickness of the solid phase growing logarithmically with the data size. This implies that the typical storage capacity of the network grows exponentially with the depth. In a deep enough network, the central part remains in the liquid phase. We argue that in systems of finite width $N$, a weak bias field can remain in the center and play the role of a symmetry-breaking field that connects the opposite sides of the system. The successive glass transitions bring about a hierarchical free-energy landscape with ultrametricity, which evolves in space: it is most complex close to the boundaries but becomes renormalized into a progressively simpler one in deeper layers. These observations provide clues to understanding why deep neural networks operate efficiently. Finally, we present numerical simulations of learning which reveal spatially heterogeneous glassy dynamics truncated by a finite-width $N$ effect.
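As a hedged illustration of the depth-capacity scaling stated above (the symbols $\ell$, $c$, $M$, $L$, $M_c$ are illustrative notation, not taken from the paper): if the thickness $\ell$ of the solid phase grows logarithmically with the number of training data $M$, and the solid fronts grow inward from both boundaries of a network of depth $L$, the whole network solidifies when $2\ell \approx L$, so the capacity is exponential in the depth:

```latex
% Illustrative scaling sketch; \ell, c, M, L, M_c are assumed notation.
\ell(M) \simeq c \ln M
\quad\Longrightarrow\quad
2\,\ell(M_c) \approx L
\quad\Longrightarrow\quad
M_c \sim e^{L/(2c)}
```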




Read also

Recent advances in deep learning and neural networks have led to an increased interest in the application of generative models in statistical and condensed matter physics. In particular, restricted Boltzmann machines (RBMs) and variational autoencoders (VAEs), as specific classes of neural networks, have been successfully applied in the context of physical feature extraction and representation learning. Despite these successes, however, there is only limited understanding of their representational properties and limitations. To better understand the representational characteristics of RBMs and VAEs, we study their ability to capture physical features of the Ising model at different temperatures. This approach allows us to quantitatively assess learned representations by comparing sample features with the corresponding theoretical predictions. Our results suggest that the considered RBMs and convolutional VAEs are able to capture the temperature dependence of magnetization, energy, and spin-spin correlations. The samples generated by RBMs are more evenly distributed across temperatures than those generated by VAEs. We also find that convolutional layers in VAEs are important to model spin correlations, whereas RBMs achieve similar or even better performance without convolutional filters.
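The comparison described above can be made concrete. Below is a minimal, hedged Python sketch (not the paper's code) of the feature side of such a check: per-sample magnetization and energy for 2D Ising configurations, with Onsager's exact spontaneous magnetization as the theoretical reference. The sample array, its shape, and the placeholder batch are assumptions for illustration.

```python
# Hedged sketch (not the paper's code): compare per-sample Ising
# observables with Onsager's exact magnetization, as a stand-in for
# the "sample features vs. theoretical predictions" comparison.
import numpy as np

def magnetization(spins):
    """Mean magnetization per site of a batch of +/-1 configurations."""
    return spins.mean(axis=(1, 2))

def energy(spins, J=1.0):
    """Energy per site of the 2D Ising model with periodic boundaries."""
    right = np.roll(spins, -1, axis=2)
    down = np.roll(spins, -1, axis=1)
    return -J * (spins * (right + down)).mean(axis=(1, 2))

def onsager_magnetization(T, J=1.0):
    """Exact spontaneous magnetization |m|(T) below Tc (Onsager/Yang)."""
    Tc = 2.0 * J / np.log(1.0 + np.sqrt(2.0))
    if T >= Tc:
        return 0.0
    return (1.0 - np.sinh(2.0 * J / T) ** (-4)) ** 0.125

# Hypothetical usage: a placeholder batch of shape (n_samples, L, L);
# a trained RBM/VAE would supply real samples here.
samples = np.random.choice([-1, 1], size=(100, 16, 16))
print(magnetization(samples).mean(), onsager_magnetization(1.5))
```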
A recent 3-XORSAT challenge required minimizing a very complex and rough energy function, typical of glassy models with a random first-order transition and a golf-course-like energy landscape. We present the ideas behind the quasi-greedy algorithm and its very efficient implementation on GPUs, which allowed us to rank first in the competition. We suggest a better protocol for comparing algorithmic performances, and we also provide analytical predictions about the exponential growth of the time to find a solution in terms of free-energy barriers.
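The quasi-greedy idea can be illustrated with a WalkSAT-style local search. The sketch below is a hedged, minimal Python stand-in, not the authors' GPU implementation; the parameters (p_random, the step budget) and the random-instance generator are illustrative assumptions.

```python
# Hedged sketch of a quasi-greedy local search for 3-XORSAT
# (illustrative only; not the authors' GPU implementation).
import random

def unsat(clauses, x):
    """Indices of clauses (i, j, k, b) violated by assignment x."""
    return [c for c in clauses if (x[c[0]] ^ x[c[1]] ^ x[c[2]]) != c[3]]

def quasi_greedy(clauses, n, steps=100_000, p_random=0.1, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        bad = unsat(clauses, x)
        if not bad:
            return x  # solution found
        # pick a violated clause; flip a random variable in it with
        # probability p_random, otherwise flip the variable whose flip
        # leaves the fewest violated clauses (the greedy move)
        i, j, k, _ = rng.choice(bad)
        if rng.random() < p_random:
            v = rng.choice((i, j, k))
        else:
            def cost(v):
                x[v] ^= 1
                c = len(unsat(clauses, x))
                x[v] ^= 1
                return c
            v = min((i, j, k), key=cost)
        x[v] ^= 1
    return None  # no solution within the step budget

# Hypothetical usage: an underconstrained random instance.
n, m = 50, 40
rng = random.Random(1)
clauses = [(rng.randrange(n), rng.randrange(n), rng.randrange(n),
            rng.randint(0, 1)) for _ in range(m)]
print(quasi_greedy(clauses, n) is not None)
```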
In this chapter we discuss how the results developed within the theory of fractals and Self-Organized Criticality (SOC) can be fruitfully exploited as ingredients of adaptive network models. In order to keep the presentation self-contained, we first review the basic ideas behind fractal theory and SOC. We then briefly review some results in the field of complex networks and some of the models that have been proposed. Finally, we present a self-organized model recently proposed by Garlaschelli et al. [Nat. Phys. 3, 813 (2007)] that couples the fitness network model defined by Caldarelli et al. [Phys. Rev. Lett. 89, 258702 (2002)] with the evolution model proposed by Bak and Sneppen [Phys. Rev. Lett. 71, 4083 (1993)] as a prototype of SOC. Remarkably, we show that the results obtained for the two models separately change dramatically when they are coupled together. This indicates that self-organized networks may represent an entirely novel class of complex systems, whose properties cannot be straightforwardly understood in terms of what we have learnt so far.
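As a concrete anchor for the SOC ingredient, here is a minimal, hedged Python sketch of the Bak-Sneppen model cited above [Phys. Rev. Lett. 71, 4083 (1993)]; the coupled fitness-network model itself is not reproduced, and the system size, step count, and threshold check are illustrative assumptions.

```python
# Hedged sketch of the Bak-Sneppen evolution model on a ring:
# repeatedly replace the least-fit species and its two nearest
# neighbours with fresh uniform random fitnesses.
import random

def bak_sneppen(n=200, steps=10_000, seed=0):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)  # least-fit species
        for j in (i - 1, i, (i + 1) % n):           # wraps via Python indexing
            fitness[j] = rng.random()
    return fitness

# Most fitnesses self-organize above the critical threshold ~0.667.
f = bak_sneppen()
print(sum(x > 0.667 for x in f) / len(f))
```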
Ginestra Bianconi (2008)
We derive the spectral properties of the adjacency matrix of complex networks, and of their Laplacian, by the replica method combined with a dynamical population algorithm. By assuming the order parameter to be a product of Gaussian distributions, the present theory provides a solution for the nonlinear integral equations for the spectral density in the random matrix theory of sparse random matrices, taking a step beyond the effective medium approximation (EMA). We also extend these results to weighted networks with weight-degree correlations.
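For orientation, here is a hedged numerical baseline one might compare such replica/population-dynamics results against: the empirical spectrum of a sparse Erdős–Rényi adjacency matrix by direct diagonalization. This is a minimal sketch; the replica equations themselves are not implemented, and the size, mean degree, and binning are assumptions.

```python
# Hedged sketch: empirical spectral density of a sparse Erdos-Renyi
# adjacency matrix via direct diagonalization -- a numerical baseline,
# not the replica/population-dynamics solution.
import numpy as np

def adjacency_spectrum(n=1000, mean_degree=4.0, seed=0):
    rng = np.random.default_rng(seed)
    p = mean_degree / n
    upper = rng.random((n, n)) < p
    a = np.triu(upper, k=1)            # independent edges above the diagonal
    a = (a | a.T).astype(float)        # symmetric 0/1 adjacency matrix
    return np.linalg.eigvalsh(a)

eigs = adjacency_spectrum()
hist, edges = np.histogram(eigs, bins=60, density=True)  # density estimate
```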
We carefully investigate the two fundamental assumptions in the Stillinger-Weber analysis of the inherent structures (ISs) in the energy landscape and conclude that they cannot be validated. This explains some of the conflicting results between their conclusions and some recent rigorous and exact results. Our analysis shows that basin free energies, and not ISs, are useful for understanding glasses.
