In this paper we present a variational technique that handles coarse-graining and passing to a limit in a unified manner. The technique is based on a duality structure, which is present in many gradient flows and other variational evolutions, and which often arises from a large-deviations principle. It has three main features: (A) a natural interaction between the duality structure and the coarse-graining, (B) application to systems with non-dissipative effects, and (C) application to coarse-graining of approximate solutions that solve the equation only up to an error. As examples, we use this technique to solve three limit problems: the overdamped limit of the Vlasov-Fokker-Planck equation and the small-noise limit of randomly perturbed Hamiltonian systems with one and with many degrees of freedom.
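For orientation, a minimal sketch of the first of these limit problems in one standard scaling; the friction coefficient γ and inverse temperature β are assumed notation here, and any interaction term is omitted. The kinetic (Vlasov-)Fokker-Planck equation for a density ρ(t, q, p),

\[
\partial_t \rho = -\, p \cdot \nabla_q \rho + \nabla_q V \cdot \nabla_p \rho + \gamma\, \mathrm{div}_p\!\big( p\, \rho + \beta^{-1} \nabla_p \rho \big),
\]

formally converges in the overdamped limit γ → ∞ (after rescaling time by γ) to the Fokker-Planck, or Smoluchowski, equation for the spatial marginal σ(t, q):

\[
\partial_t \sigma = \mathrm{div}_q\big( \sigma\, \nabla_q V \big) + \beta^{-1} \Delta_q \sigma .
\]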
We have created a functional framework for a class of non-metric gradient systems. The state space is a space of nonnegative measures, and the class of systems includes the forward Kolmogorov equations for the laws of Markov jump processes on Polish spaces.
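As a sketch of the equations in question (the jump kernel κ and state space X are assumed notation, not taken from the abstract): for a Markov jump process on X with jump kernel κ(x, dy), the law ρ_t evolves by the forward Kolmogorov (master) equation

\[
\partial_t \rho_t(dx) = \int_{X} \rho_t(dy)\, \kappa(y, dx) \;-\; \rho_t(dx) \int_{X} \kappa(x, dy),
\]

which is the adjoint of the generator (Qf)(x) = ∫_X (f(y) - f(x)) κ(x, dy).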
This paper presents a variational approach to doubly-nonlinear (gradient) flows (P) of nonconvex energies with nonpotential perturbations (i.e., perturbation terms without any potential structure). An elliptic-in-time regularization of the original problem is introduced.
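A schematic form of such a flow, with the dissipation potential ψ, energy φ, and nonpotential perturbation f all assumed notation rather than taken from the paper:

\[
\partial \psi(u'(t)) + \partial \phi(u(t)) \ni f(t, u(t)), \qquad t \in (0, T),
\]

where ∂ denotes the (possibly multivalued) subdifferential. An elliptic-in-time regularization turns this first-order-in-time inclusion into a second-order one, for instance by adding a term -ε u''(t) and then letting ε → 0.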
We propose and illustrate an approach to coarse-graining the dynamics of evolving networks (networks whose connectivity changes dynamically). The approach is based on the equation-free framework: short bursts of detailed network evolution simulations are used in place of explicit coarse-grained evolution equations.
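To make the lift-run-restrict loop concrete, here is a minimal Python sketch of coarse projective integration in the equation-free style; the toy microscopic model and all function names are hypothetical stand-ins for the detailed network simulations, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def lift(z, n_copies=1000):
        # Lift: create a microscopic ensemble consistent with coarse observable z.
        return rng.normal(loc=z, scale=0.1, size=n_copies)

    def restrict(x):
        # Restrict: map the microscopic state back to the coarse observable.
        return x.mean()

    def micro_step(x, dt=0.01):
        # Toy fine-scale dynamics (stand-in for one step of detailed
        # network evolution): noisy relaxation toward x = -1.
        return x - dt * (x + 1.0) + np.sqrt(dt) * 0.05 * rng.normal(size=x.shape)

    def coarse_projective_step(z, burst_steps=20, dt=0.01, big_dt=0.5):
        # One equation-free step: lift, run a short burst, restrict,
        # estimate the coarse time derivative, and extrapolate.
        x = lift(z)
        z0 = restrict(x)
        for _ in range(burst_steps):
            x = micro_step(x, dt)
        z1 = restrict(x)
        dzdt = (z1 - z0) / (burst_steps * dt)
        return z1 + big_dt * dzdt  # projective (large) step

    z = 1.0
    for _ in range(10):
        z = coarse_projective_step(z)
    print(f"coarse observable after 10 projective steps: {z:.3f}")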
We propose a variational form of the BDF2 method as an alternative to the commonly used minimizing movement scheme for the time-discrete approximation of gradient flows in abstract metric spaces. Assuming uniform semi-convexity --- but no smoothness --- of the augmented energy functional, we prove well-posedness of the scheme and convergence of the discrete approximations to a solution of the gradient flow.
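For comparison, in assumed notation (metric d, energy E, time step τ; not taken verbatim from the paper): the minimizing movement scheme defines

\[
u_n \in \operatorname{argmin}_u \Big\{ \tfrac{1}{2\tau}\, d(u, u_{n-1})^2 + E(u) \Big\},
\]

while a BDF2-type variational step also penalizes the second-to-last iterate,

\[
u_n \in \operatorname{argmin}_u \Big\{ \tfrac{1}{\tau}\, d(u, u_{n-1})^2 - \tfrac{1}{4\tau}\, d(u, u_{n-2})^2 + E(u) \Big\}.
\]

In the Euclidean case d(u, v) = |u - v|, the optimality condition of the second functional is exactly the BDF2 formula (3 u_n - 4 u_{n-1} + u_{n-2})/(2τ) = -∇E(u_n).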
This work is concerned with model reduction of stochastic differential equations and builds on the idea of replacing the drift and noise coefficients of preselected relevant (e.g., slow) variables by their conditional expectations. We extend recent results in this direction.
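Schematically, in assumed notation (not taken from the abstract): for an SDE dX_t = b(X_t) dt + σ(X_t) dW_t with generator L and a relevant variable ξ(X_t), the reduced model evolves Z_t ≈ ξ(X_t) by

\[
dZ_t = \hat b(Z_t)\, dt + \hat\sigma(Z_t)\, dW_t, \qquad
\hat b(z) = \mathbb{E}\big[ (\mathcal{L}\xi)(X) \,\big|\, \xi(X) = z \big], \qquad
\hat\sigma(z)^2 = \mathbb{E}\big[ |\sigma^{\mathsf{T}} \nabla \xi|^2(X) \,\big|\, \xi(X) = z \big],
\]

with the conditional expectations taken with respect to the stationary distribution of X.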