This paper introduces neuroevolution for solving differential equations. The solution is obtained by optimizing a deep neural network whose loss function is defined by the residual terms of the differential equations. Recent studies have focused on learning such physics-informed neural networks through stochastic gradient descent (SGD) variants, yet these face difficulty in obtaining an accurate solution due to optimization challenges. In the context of solving differential equations, the problem is to find globally optimal parameters of the network rather than to achieve out-of-sample generalization. SGD, which searches along a single gradient direction, is prone to becoming trapped in local optima, so it may not be the best approach here. In contrast, neuroevolution carries out a parallel exploration of diverse solutions with the goal of circumventing local optima, and could therefore find more accurate solutions through better-optimized neural networks. However, neuroevolution can be slow, raising tractability issues in practice. With that in mind, a novel and computationally efficient transfer neuroevolution algorithm is proposed in this paper. Our method can exploit relevant experiential priors when solving a new problem, with adaptation that protects against the risk of negative transfer. The algorithm is applied to a variety of differential equations to empirically demonstrate that transfer neuroevolution can indeed achieve better accuracy and faster convergence than SGD. The experimental outcomes thus establish transfer neuroevolution as a noteworthy approach for solving differential equations, one that had not been studied before. Our work expands the set of available algorithms for optimizing physics-informed neural networks.
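To make the setup concrete, below is a minimal sketch, not the paper's transfer neuroevolution algorithm itself, of the two ingredients the abstract describes: a physics-informed loss built from the residual of a differential equation, and a population-based, gradient-free search over the network parameters. The toy ODE u'(x) = -u(x) with u(0) = 1 (exact solution exp(-x)), the network architecture, the collocation grid, and the plain-Gaussian evolution strategy are all illustrative assumptions, not details taken from the paper.

```python
# Sketch: physics-informed loss + evolution-strategy search (assumed setup).
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def init_params(key, sizes=(1, 16, 16, 1)):
    """Random MLP parameters as a pytree of (weight, bias) pairs."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params


def net(params, x):
    """Scalar network output u(x): tanh hidden layers, linear output."""
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]


xs = jnp.linspace(0.0, 2.0, 64)  # collocation points (assumed grid)


def pinn_loss(params):
    """Residual of u'(x) + u(x) = 0 plus the boundary term for u(0) = 1."""
    u = lambda x: net(params, x)
    du = jax.vmap(jax.grad(u))(xs)      # u'(x) via autodiff
    residual = du + jax.vmap(u)(xs)
    bc = u(0.0) - 1.0
    return jnp.mean(residual ** 2) + bc ** 2


key = jax.random.PRNGKey(0)
theta, unravel = ravel_pytree(init_params(key))  # flat parameter vector
loss_flat = lambda v: pinn_loss(unravel(v))

# A basic Gaussian evolution strategy: the paper's transfer neuroevolution is
# more elaborate; this only illustrates population-based, gradient-free search
# over the PINN loss landscape. Hyperparameters are illustrative.
sigma, lr, pop = 0.05, 0.05, 64
for step in range(500):
    key, sub = jax.random.split(key)
    eps = jax.random.normal(sub, (pop, theta.size))
    losses = jax.vmap(loss_flat)(theta + sigma * eps)
    centered = losses - losses.mean()   # baseline for variance reduction
    grad_est = (centered[:, None] * eps).mean(axis=0) / sigma
    theta = theta - lr * grad_est

print("final loss:", loss_flat(theta))
```

An SGD baseline would instead update with jax.grad(pinn_loss); the point of the sketch is only that the same residual loss admits population-based search, which is the ingredient the paper's transfer neuroevolution builds on.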