Classical gradient systems have a linear relation between rates and driving forces. In generalized gradient systems we allow for arbitrary relations derived from general non-quadratic dissipation potentials. This paper describes two natural origins for these structures. A first microscopic origin of generalized gradient structures is given by the theory of large-deviation principles. While Markovian diffusion processes lead to classical gradient structures, Poissonian jump processes give rise to cosh-type dissipation potentials. A second origin arises via a new form of convergence, which we call EDP-convergence. Even when starting with classical gradient systems, where the dissipation potential is a quadratic functional of the rate, we may obtain a generalized gradient system in the evolutionary $\Gamma$-limit. As examples we treat (i) the limit of a diffusion equation with a thin layer of low diffusivity, which leads to a membrane model, and (ii) the limit of diffusion over a high barrier, which gives a reaction-diffusion system.
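To fix ideas, here is a minimal sketch of the structures involved; the symbols $E$, $K$, and $C$ are generic placeholders, not notation taken from the paper. A gradient system is driven by an energy $E$ and a dual dissipation potential $\Psi^*$ via
$$ \dot u = \partial_\xi \Psi^*\bigl(u, -\mathrm{D}E(u)\bigr). $$
The classical, quadratic choice $\Psi^*(u,\xi) = \tfrac12 \langle \xi, K(u)\,\xi\rangle$ recovers the linear rate-force relation $\dot u = -K(u)\,\mathrm{D}E(u)$, whereas a cosh-type choice of the form $\Psi^*(u,\xi) = C(u)\,(\cosh\xi - 1)$ produces the exponential rate-force relation characteristic of jump processes.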
We have created a functional framework for a class of non-metric gradient systems. The state space is a space of nonnegative measures, and the class of systems includes the forward Kolmogorov equations for the laws of Markov jump processes on Polish spaces. …
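As a hedged illustration of the equations such a framework covers (the jump kernel $\kappa$ and the notation below are my own, not the paper's): for a Markov jump process on a Polish space $V$ with jump kernel $\kappa(x,\mathrm{d}y)$, the forward Kolmogorov equation for the law $\rho_t$ is the linear evolution
$$ \partial_t \rho_t(\mathrm{d}x) = \int_V \kappa(y,\mathrm{d}x)\,\rho_t(\mathrm{d}y) - \rho_t(\mathrm{d}x)\int_V \kappa(x,\mathrm{d}y), $$
which indeed lives on a space of nonnegative measures, the state space described above.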
This article is mostly based on a talk I gave at the March 2021 (virtual) meeting of the American Physical Society on the occasion of receiving the Dannie Heineman Prize for Mathematical Physics from the American Institute of Physics and the American Physical Society. …
In this paper we present a variational technique that handles coarse-graining and passing to a limit in a unified manner. The technique is based on a duality structure, which is present in many gradient flows and other variational evolutions. …
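A plausible rendering of such a duality structure, in the generic gradient-flow notation sketched above (an illustration, not the paper's own formulation): a curve $u$ solves the evolution exactly when it satisfies the energy-dissipation inequality
$$ E(u(T)) + \int_0^T \Bigl[ \Psi\bigl(u(t),\dot u(t)\bigr) + \Psi^*\bigl(u(t), -\mathrm{D}E(u(t))\bigr) \Bigr]\,\mathrm{d}t \le E(u(0)), $$
where $\Psi$ and $\Psi^*$ form a convex dual pair. Since the whole evolution is then encoded in a single inequality between functionals, coarse-graining and limit passages can be carried out by proving lower bounds for each of the three terms, which is one way such a duality structure can be exploited.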
In this paper we introduce a general abstract formulation of a variational thermomechanical model, by means of a unified derivation via a generalization of the principle of virtual powers for all the variables of the system, including the thermal one. …
We apply the generalized conditional gradient algorithm to potential mean field games and we show its well-posedness. It turns out that this method can be interpreted as a learning method called fictitious play. …
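A schematic form of the iteration (the step size $\delta_k$, the best response $\bar m_k$, and the notation are assumptions of mine, not taken from the paper): given the current profile $m_k$, one computes a best response $\bar m_k$ by solving the problem obtained from linearizing the potential at $m_k$, and then averages,
$$ m_{k+1} = (1-\delta_k)\,m_k + \delta_k\,\bar m_k, \qquad \delta_k = \tfrac{1}{k+1}. $$
With this choice of $\delta_k$, the iterate $m_{k+1}$ is the running average of the best responses computed so far, which is precisely the fictitious-play update rule.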