In this paper we present a novel adaptive deep density approximation strategy based on KRnet (ADDA-KR) for solving the steady-state Fokker-Planck equation. This equation typically involves high-dimensional spatial variables posed on unbounded domains, which limits the applicability of traditional grid-based numerical methods. Built on the Knothe-Rosenblatt rearrangement, our flow-based generative model, KRnet, provides a family of probability density functions that serve as effective solution candidates for the Fokker-Planck equation and depend more weakly on dimensionality than traditional computational approaches. To obtain effective stochastic collocation points for training KRnet, we develop an adaptive sampling procedure in which new samples are generated by the current KRnet at each iteration. In addition, we give a detailed discussion of KRnet and show that it can efficiently estimate general high-dimensional density functions. We present a general mathematical framework for ADDA-KR, validate its accuracy, and demonstrate its efficiency with numerical experiments.
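To make the adaptive idea concrete, here is a minimal one-dimensional sketch (my own illustration, not the authors' code): a parametric density is trained to minimise the squared steady-state Fokker-Planck residual at collocation points drawn from the current density itself, and the sampling and training steps alternate. A zero-mean Gaussian with a trainable variance stands in for KRnet, and the drift is the Ornstein-Uhlenbeck drift $b(x) = -x$ with unit diffusion, whose exact steady state is $N(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def density(x, log_sigma):
    """Candidate density p_theta(x): a Gaussian stand-in for KRnet."""
    sigma = np.exp(log_sigma)
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fp_residual(x, log_sigma, h=1e-3):
    """Residual of the steady-state equation 0 = d/dx[x p(x)] + p''(x),
    evaluated with central finite differences."""
    p = lambda y: density(y, log_sigma)
    d1 = ((x + h) * p(x + h) - (x - h) * p(x - h)) / (2 * h)   # d/dx (x p)
    d2 = (p(x + h) - 2 * p(x) + p(x - h)) / h ** 2             # p''
    return d1 + d2

log_sigma = np.log(1.5)        # deliberately wrong initial guess
lr, eps = 2.0, 1e-4
for it in range(30):
    # adaptive step: collocation points are drawn from the *current* model
    x = np.exp(log_sigma) * rng.standard_normal(2000)
    loss = lambda ls: np.mean(fp_residual(x, ls) ** 2)
    grad = (loss(log_sigma + eps) - loss(log_sigma - eps)) / (2 * eps)
    log_sigma -= lr * grad
    if it % 5 == 0 or it == 29:
        print(f"iter {it:2d}  sigma = {np.exp(log_sigma):.4f}  residual loss = {loss(log_sigma):.3e}")
# sigma approaches 1, the exact steady-state standard deviation.
```

In the actual method, the Gaussian ansatz is replaced by the KRnet flow, the scalar finite-difference gradient by automatic differentiation over the network parameters, and the one-dimensional OU drift by the high-dimensional drift of interest.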
We obtain exact results for fractional equations of Fokker-Planck type using the evolution operator method. We employ exact forms of one-sided Lévy stable distributions to generate a set of self-reproducing solutions. Explicit cases are reported and studied for various fractional orders of derivatives, different initial conditions, and for differe
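For orientation (this is the standard operational identity behind such evolution-operator treatments, stated here under my own conventions rather than necessarily the paper's), the one-sided Lévy stable density $g_\alpha$ with $0<\alpha<1$ satisfies
$$\int_0^\infty e^{-p\tau}\, g_\alpha(\tau)\, d\tau = e^{-p^{\alpha}},$$
which formally yields the subordination formula
$$e^{-t\hat{L}^{\alpha}} = \int_0^\infty t^{-1/\alpha}\, g_\alpha\!\left(\tau\, t^{-1/\alpha}\right) e^{-\tau \hat{L}}\, d\tau,$$
so the solution of a fractional evolution equation $\partial_t F = -\hat{L}^{\alpha} F$ is obtained by integrating the corresponding non-fractional evolution $e^{-\tau\hat{L}}F(0)$ against an exactly known one-sided stable density.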
In this paper, we develop an adaptive finite element method for the nonlinear steady-state Poisson-Nernst-Planck equations, where spatial adaptivity for geometric singularities and boundary-layer effects is the main consideration. As a key contribution, the steady-state Poisson-Nernst-Planck equations are studied systematically and a rigorous analysis of a residual-based a posteriori error estimate for the nonlinear system is presented. With the help of the Schauder fixed point theorem, we show existence and uniqueness of the solution of the linearized system derived by taking $G$-derivatives of the nonlinear system, followed by a proof of the relationship between the solution error and the a posteriori error estimator $\eta$. Numerical experiments are given to validate the efficiency of the a posteriori error estimator and demonstrate the expected rate of convergence. In further tests, adaptive mesh refinements for geometric singularities and boundary-layer effects are successfully observed.
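For reference, a commonly used nondimensional form of the steady-state Poisson-Nernst-Planck system (the paper's scaling, coefficients and boundary conditions may differ) couples a Poisson equation for the electrostatic potential $\phi$ with drift-diffusion equations for the ionic concentrations $c_i$:
$$-\nabla\cdot\left(\epsilon\nabla\phi\right) = \sum_{i=1}^{n} q_i c_i + f, \qquad \nabla\cdot\left(D_i\left(\nabla c_i + q_i c_i \nabla\phi\right)\right) = 0, \quad i=1,\dots,n,$$
where $q_i$ and $D_i$ are the valence and diffusion coefficient of the $i$-th ionic species, $\epsilon$ is the dielectric coefficient and $f$ a permanent charge density; the system is nonlinear because $\phi$ and the $c_i$ are coupled through the source term and the drift term $q_i c_i\nabla\phi$.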
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and it is often necessary to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean-field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. Our method shows promising performance, yielding more accurate and less fluctuating statistics than direct stochastic simulations with a comparable number of particles. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
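As an illustration of the underlying deterministic particle flow (a sketch of mine, with a plain Gaussian-kernel score estimate standing in for the paper's estimator, which is not specified here): the Fokker-Planck equation $\partial_t p = -\nabla\cdot(b\,p) + D\,\Delta p$ can be rewritten as $\partial_t p = -\nabla\cdot\big[(b - D\,\nabla\log p)\,p\big]$, so particles transported by the velocity field $b - D\,\nabla\log p$ reproduce the density evolution.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_density(x, bandwidth=0.25):
    """Gaussian-KDE estimate of grad log p at the particle locations
    (a simple stand-in for the paper's gradient-log-density estimator)."""
    diff = x[:, None] - x[None, :]               # pairwise differences x_i - x_j
    w = np.exp(-0.5 * (diff / bandwidth) ** 2)   # Gaussian kernel weights
    # grad log p_hat(x_i) = sum_j w_ij (x_j - x_i) / (h^2 sum_j w_ij)
    return (w * (-diff)).sum(axis=1) / (bandwidth ** 2 * w.sum(axis=1))

# deterministic particle flow dx/dt = b(x) - D grad log p(x)
# for the 1-D OU drift b(x) = -x with D = 1; the stationary law is N(0, 1)
n, dt, D = 1000, 0.02, 1.0
x = rng.normal(3.0, 0.2, size=n)                 # far-from-equilibrium start
for step in range(400):
    x += dt * (-x - D * grad_log_density(x))
# the variance lands near 1 - bandwidth**2 because of kernel smoothing
print(f"mean {x.mean():+.3f}  var {x.var():.3f}  (target 0 and 1)")
```

The bandwidth-induced bias visible in the final variance is exactly the kind of error a better gradient-log-density estimator aims to reduce.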
We propose a new semi-discretization scheme to approximate nonlinear Fokker-Planck equations by exploiting the gradient flow structures with respect to the 2-Wasserstein metric. We discretize the underlying state by a finite graph and define a discrete 2-Wasserstein metric. Based on this metric, we introduce a dynamical system that is the gradient flow of the discrete free energy. We prove that the new scheme maintains dissipativity of the free energy and converges to a discrete Gibbs measure at an exponential (dissipation) rate. We illustrate these properties with several numerical examples.
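A minimal sketch of this type of scheme (my own illustration; the exact edge weights and upwind convention in the paper may differ): on a path graph with nodes $x_i$, potential $V_i$ and probability vector $\rho$, the gradient flow of the discrete free energy $F(\rho)=\sum_i V_i\rho_i+\rho_i\log\rho_i$ moves mass along each edge downhill in the entropic potential $\psi_i=V_i+\log\rho_i$, with the mass weight taken from the upwind node. This conserves mass, dissipates $F$, and is stationary exactly at the discrete Gibbs measure $\rho_i\propto e^{-V_i}$.

```python
import numpy as np

# Path graph discretising [-3, 3]; potential V(x) = x^2 / 2
x = np.linspace(-3.0, 3.0, 41)
V = 0.5 * x ** 2
rho = np.ones_like(x) / x.size                       # uniform initial probability vector

def free_energy(rho):
    """Discrete free energy F(rho) = sum_i V_i rho_i + rho_i log rho_i."""
    return np.sum(V * rho + rho * np.log(rho))

def rhs(rho):
    """Upwind gradient-flow ODE on the path graph."""
    psi = V + np.log(rho)                            # entropic potential at each node
    d = np.diff(psi)                                 # psi[i+1] - psi[i] on edge (i, i+1)
    flux = d * np.where(d > 0, rho[1:], rho[:-1])    # mass flows downhill in psi, upwind weight
    drho = np.zeros_like(rho)
    drho[:-1] += flux                                # signed contribution to the left node of each edge
    drho[1:] -= flux                                 # equal and opposite contribution to the right node
    return drho

dt = 0.02
for step in range(20001):
    if step % 2500 == 0:
        print(f"t = {step * dt:6.1f}   F = {free_energy(rho):+.6f}")   # F decreases monotonically
    rho += dt * rhs(rho)

gibbs = np.exp(-V) / np.exp(-V).sum()
# approaches 0 as the flow converges to the discrete Gibbs measure
print("max |rho - Gibbs| =", np.abs(rho - gibbs).max())
```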
Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for them to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity and to achieve local discrimination by penalizing class-distribution overlap. We demonstrate the effectiveness of this idea on several tasks. Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40%. In terms of computational performance, it alleviates training inefficiencies of the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25% relative gains over the softmax classifier and 25-50% over triplet loss on these tasks.
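To give a flavour of what penalizing class-distribution overlap can look like (a simplified sketch of mine, assuming a single isotropic Gaussian per class; the published method is richer, e.g. it may model within-class structure and sample neighbourhoods adaptively), the toy loss below asks each embedding to be likely under its own class distribution and unlikely under all others, with a margin alpha:

```python
import numpy as np

def overlap_loss(embeddings, labels, alpha=1.0):
    """Toy distribution-overlap penalty: one isotropic Gaussian per class,
    shared variance, hinged log-likelihood ratio against the other classes."""
    classes = np.unique(labels)
    means = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    # crude shared per-dimension variance estimate across classes
    var = np.mean([embeddings[labels == c].var(axis=0).mean() for c in classes]) + 1e-8
    # squared distance of every embedding to every class mean -> Gaussian log-scores
    d2 = ((embeddings[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    logits = -d2 / (2 * var)
    idx = np.searchsorted(classes, labels)           # column index of each sample's own class
    own = logits[np.arange(len(labels)), idx]
    mask = np.ones_like(logits, dtype=bool)
    mask[np.arange(len(labels)), idx] = False        # exclude the own class from the denominator
    other = np.logaddexp.reduce(np.where(mask, logits, -np.inf), axis=1)
    return np.mean(np.maximum(0.0, other - own + alpha))

# illustration on random embeddings (in training, this would be minimised
# with respect to the network that produces the embeddings)
rng = np.random.default_rng(0)
print(overlap_loss(rng.normal(size=(64, 16)), rng.integers(0, 4, size=64)))
```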