We present an analysis of neural-network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient at classifying phases and locating phase transitions of various basic model systems. To rationalize how the classification emerges and to identify any underlying physical quantities, it is feasible to examine the weight matrices and convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusion scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully connected and convolutional neural networks for the two-dimensional Ising model with extended domain-wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusion scheme and of convolutional neural networks trained on bare spin configurations with the case of samples preprocessed with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
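As a concrete illustration of the shallow-network setup described above, the following Python sketch trains a single-hidden-layer classifier to separate low- and high-temperature two-dimensional Ising configurations and then reads off the learned weight matrix. The lattice size, hidden width, temperatures, Metropolis sampler, and training schedule are illustrative assumptions, not the exact setup of the study.

# Minimal sketch: a single-hidden-layer ("shallow") network classifying
# 2D Ising configurations as low- or high-temperature.  Lattice size,
# hidden width, temperatures, and the simple Metropolis sampler below
# are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

L, N_SAMPLES = 8, 100          # linear lattice size, configurations per phase
rng = np.random.default_rng(0)

def metropolis_config(T, n_sweeps=50):
    """Generate one L x L Ising configuration at temperature T (J = k_B = 1)."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps * L * L):
        i, j = rng.integers(L, size=2)
        dE = 2.0 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return s

# Label 0: low temperature (ordered), label 1: high temperature (disordered).
configs = [metropolis_config(1.5) for _ in range(N_SAMPLES)] + \
          [metropolis_config(3.5) for _ in range(N_SAMPLES)]
X = torch.tensor(np.array(configs, dtype=np.float32)).reshape(-1, L * L)
y = torch.tensor([0] * N_SAMPLES + [1] * N_SAMPLES)

# Fully connected network with a single hidden layer of 8 units.
model = nn.Sequential(nn.Linear(L * L, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Inspect the hidden-layer weight matrix (shape 8 x L*L); its rows can be
# examined for magnetization-like structure, in the spirit of the analysis.
W = model[0].weight.detach().numpy()
print("hidden-layer weight matrix:", W.shape)

Because each hidden unit in this setup sees the full lattice, the rows of the weight matrix map directly back onto the spin configuration, which is what makes such an inspection interpretable.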
We study a phase transition in parameter learning of Hidden Markov Models (HMMs). We do this by generating sequences of observed symbols from given discrete HMMs with uniformly distributed transition probabilities and a noise level encoded in the out
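The sequence-generation step mentioned above can be sketched as follows, under stated assumptions: transition-matrix rows are drawn uniformly at random and normalized, and the noise level enters the emission (output) matrix as a uniform admixture to an identity-like signal. Since the abstract is truncated here, the precise noise parametrization of the study is not reproduced.

# Minimal sketch: sample an observation sequence from a discrete HMM with
# uniformly drawn transition probabilities and a noise level in the output
# matrix.  The exact way the noise enters is an assumption.
import numpy as np

def random_hmm(n_states, n_symbols, noise, rng):
    A = rng.random((n_states, n_states))   # transition matrix, rows normalized
    A /= A.sum(axis=1, keepdims=True)
    signal = np.eye(n_states, n_symbols)   # state i prefers symbol i
    B = (1.0 - noise) * signal + noise / n_symbols
    B /= B.sum(axis=1, keepdims=True)      # emission ("output") matrix
    return A, B

def sample_sequence(A, B, length, rng):
    state = rng.integers(A.shape[0])
    observations = np.empty(length, dtype=int)
    for t in range(length):
        observations[t] = rng.choice(B.shape[1], p=B[state])
        state = rng.choice(A.shape[0], p=A[state])
    return observations

rng = np.random.default_rng(1)
A, B = random_hmm(n_states=3, n_symbols=3, noise=0.2, rng=rng)
print(sample_sequence(A, B, length=20, rng=rng))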
Preferential attachment is a central paradigm in the theory of complex networks. In this contribution we consider various generalizations of preferential attachment, including, for example, node removal and edge rewiring. We demonstrate that generalized
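As one example of such a generalization, the following sketch grows a graph by standard linear preferential attachment and occasionally removes a node; the parameters and the removal rule are illustrative assumptions, not the specific models analyzed in the contribution.

# Minimal sketch: linear preferential attachment with optional random node
# removal as one example of a generalization.  Parameters and the removal
# rule are illustrative assumptions.
import random
import networkx as nx

def grow_pa_graph(n_final, m=2, p_remove=0.0, seed=0):
    random.seed(seed)
    g = nx.complete_graph(m + 1)                 # small seed graph
    while g.number_of_nodes() < n_final:
        new = max(g.nodes) + 1
        # attach to m existing nodes with probability proportional to degree
        nodes, degrees = zip(*g.degree())
        targets = set()
        while len(targets) < m:
            targets.add(random.choices(nodes, weights=degrees, k=1)[0])
        g.add_edges_from((new, t) for t in targets)
        # generalization: remove a uniformly chosen old node with prob p_remove
        if random.random() < p_remove and g.number_of_nodes() > m + 2:
            g.remove_node(random.choice([n for n in g.nodes if n != new]))
    return g

g = grow_pa_graph(n_final=500, m=2, p_remove=0.1)
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")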
Machine learning (ML) and artificial intelligence (AI) have the remarkable ability to classify, recognize, and characterize complex patterns and trends in large data sets. Here, we adopt a subclass of machine learning methods, viz., deep learning and
Recent progress in the development of efficient computational algorithms to price financial derivatives is summarized. A first algorithm is based on a path integral approach to option pricing, while a second algorithm makes use of a neural network pa
This Letter presents a neural estimator for entropy production, or NEEP, that estimates entropy production (EP) from trajectories of relevant variables without detailed information on the system dynamics. For steady state, we rigorously prove that th