
A statistical approximation to solve ordinary differential equations

Published by: Mariano Febbo (mfebbo)
Publication date: 2008
Research field: Physics
Paper language: English





We propose a physical analogy between finding the solution of an ordinary differential equation (ODE) and an $N$-particle problem in statistical mechanics. It uses the fact that solving an ODE is equivalent to obtaining the minimum of a functional. We link these two notions by proposing this functional to be the interaction potential energy, or thermodynamic potential, of an equivalent particle problem; solving that statistical-mechanics problem then amounts to solving the ODE. If only one solution exists, our method provides the unique solution of the ODE. When we treat an eigenvalue equation, where infinitely many solutions exist, we obtain the absolute minimum of the corresponding functional, i.e., the fundamental mode. As a result, it is possible to establish a general relationship between statistical mechanics and ODEs which allows not only solving them from a physical perspective but also obtaining all relevant thermodynamic equilibrium variables of the particle system related to the differential equation.
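To make the variational idea concrete, here is a minimal Python sketch (not the authors' implementation): for the eigenvalue problem $-y'' = \lambda y$ on $[0, \pi]$ with $y(0) = y(\pi) = 0$, minimizing a discretized Rayleigh quotient, which plays the role of the functional above, recovers the absolute minimum, i.e., the fundamental mode $\sin x$ with $\lambda = 1$. The grid size and optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

# Discretize -y'' = lambda * y on [0, pi] with y(0) = y(pi) = 0.
n = 50                      # number of interior grid points
x = np.linspace(0.0, np.pi, n + 2)
h = x[1] - x[0]

def rayleigh_quotient(y):
    """Discrete Rayleigh quotient: int(y'^2) / int(y^2).

    Its minimum over nonzero y is the smallest eigenvalue, and the
    minimizer is the fundamental mode."""
    full = np.concatenate(([0.0], y, [0.0]))   # enforce boundary conditions
    dy = np.diff(full) / h
    return (np.sum(dy**2) * h) / (np.sum(full**2) * h)

y0 = np.random.rand(n)                 # arbitrary nonzero starting guess
res = minimize(rayleigh_quotient, y0)

lam = res.fun                          # approx. 1 (exact smallest eigenvalue)
mode = res.x / np.max(np.abs(res.x))   # approx. sin(x) up to sign
print(f"estimated fundamental eigenvalue: {lam:.4f} (exact: 1)")
```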


Read also

Fei Wei, Huazhong Yang (2009)
The Waveform Relaxation method (WR) is an elegant algorithm for solving Ordinary Differential Equations (ODEs). However, because of its poor convergence, it has rarely been used. In this paper, we propose a new distributed algorithm, named the Waveform Transmission Method (WTM), which virtually inserts waveform transmission lines into the dynamical system to achieve distributed computing of extremely large ODE systems. WTM has better convergence than the traditional WR algorithms.
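For contrast with WTM (whose transmission-line insertion is not reproduced here), the following Python sketch shows plain Gauss-Jacobi waveform relaxation on a toy two-variable system; the system, window, and tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Gauss-Jacobi waveform relaxation for the weakly coupled system
#   x' = -x + 0.5*y,   y' = -y + 0.5*x,   x(0) = y(0) = 1.
t = np.linspace(0.0, 5.0, 200)
x_wave = np.ones_like(t)    # initial guess: constant waveforms
y_wave = np.ones_like(t)

for sweep in range(20):
    # Each subsystem is integrated over the whole time window, treating
    # the other variable's previous waveform as a known forcing term.
    x_prev = interp1d(t, x_wave, fill_value="extrapolate")
    y_prev = interp1d(t, y_wave, fill_value="extrapolate")
    x_new = solve_ivp(lambda s, x: -x + 0.5 * y_prev(s),
                      (t[0], t[-1]), [1.0], t_eval=t).y[0]
    y_new = solve_ivp(lambda s, y: -y + 0.5 * x_prev(s),
                      (t[0], t[-1]), [1.0], t_eval=t).y[0]
    change = max(np.max(np.abs(x_new - x_wave)),
                 np.max(np.abs(y_new - y_wave)))
    x_wave, y_wave = x_new, y_new
    if change < 1e-8:       # waveforms have stopped changing
        print(f"converged after {sweep + 1} sweeps")
        break
```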
We discuss the great importance of using mathematical software to solve problems in today's society. In particular, we show how to use the Mathematica software to solve ordinary differential equations exactly and numerically, and how to represent these solutions graphically. We treat the particular case of a charged particle subject to an oscillating electric field in the xy plane and a constant magnetic field. We show how to construct the equations of motion, defined by the position and velocity vectors and the electric and magnetic fields, how to solve these Lorentz-force equations, and how to graphically represent the possible trajectories. We end by showing how to build a video simulation of the particle trajectory for the oscillating electric field case.
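The paper works in Mathematica; as a rough Python equivalent, this sketch integrates the same kind of Lorentz-force equations with scipy. The field amplitudes, frequency, charge, and mass are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Charged particle in an oscillating electric field in the xy plane and a
# constant magnetic field along z:  m v' = q (E(t) + v x B).
q, m = 1.0, 1.0
E0, omega = 1.0, 2.0           # E(t) = E0 * (cos wt, sin wt, 0)
B = np.array([0.0, 0.0, 1.0])  # constant magnetic field

def lorentz(t, state):
    r, v = state[:3], state[3:]
    E = E0 * np.array([np.cos(omega * t), np.sin(omega * t), 0.0])
    a = (q / m) * (E + np.cross(v, B))   # Newton's law with the Lorentz force
    return np.concatenate((v, a))

sol = solve_ivp(lorentz, (0.0, 20.0), [0, 0, 0, 0.1, 0, 0],
                dense_output=True, rtol=1e-8)
x, y = sol.y[0], sol.y[1]      # trajectory in the xy plane, ready to plot
```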
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
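A minimal sketch of the core construction, not the paper's implementation: the hidden state's derivative is a small network, and "depth" is an ODE integration (fixed-step RK4 here; the paper uses a black-box adaptive solver plus adjoint backpropagation, both omitted).

```python
import numpy as np

# Forward pass of a toy neural ODE: dh/dt is a one-hidden-layer MLP.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.5, np.zeros(2)

def f(t, h):
    """Hidden-state derivative parameterized by a tiny network."""
    return W2 @ np.tanh(W1 @ h + b1) + b2

def odeint_rk4(f, h0, t0, t1, steps=100):
    """Fixed-step Runge-Kutta 4 integration of h' = f(t, h)."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, h)
        k2 = f(t + dt / 2, h + dt / 2 * k1)
        k3 = f(t + dt / 2, h + dt / 2 * k2)
        k4 = f(t + dt, h + dt * k3)
        h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return h

h1 = odeint_rk4(f, np.array([1.0, 0.0]), 0.0, 1.0)  # the model's "output"
```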
Differential equations parameterized by neural networks become expensive to solve numerically as training progresses. We propose a remedy that encourages learned dynamics to be easier to solve. Specifically, we introduce a differentiable surrogate for the time cost of standard numerical solvers, using higher-order derivatives of solution trajectories. These derivatives are efficient to compute with Taylor-mode automatic differentiation. Optimizing this additional objective trades model performance against the time cost of solving the learned dynamics. We demonstrate our approach by training substantially faster, while nearly as accurate, models in supervised classification, density estimation, and time-series modelling tasks.
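A rough sketch of the regularizer's shape, with substituted machinery: the paper computes higher-order trajectory derivatives via Taylor-mode automatic differentiation, whereas this toy version forms $y'' = J_f(y)\,f(y)$ with a finite-difference Jacobian. It is only meant to show which quantity gets penalized, not how the paper computes it.

```python
import numpy as np

def second_derivative(f, y, eps=1e-6):
    """y'' = J_f(y) @ f(y) for an autonomous system y' = f(y),
    with the Jacobian approximated by central finite differences."""
    fy = f(y)
    n = y.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(y + e) - f(y - e)) / (2 * eps)
    return J @ fy

def smoothness_penalty(f, trajectory):
    """Mean squared ||y''|| along a sampled solution trajectory; adding
    this to the training loss trades accuracy against solver cost."""
    return np.mean([np.sum(second_derivative(f, y) ** 2) for y in trajectory])

# Illustration on a pendulum vector field (not learned dynamics):
f = lambda y: np.array([y[1], -np.sin(y[0])])
traj = [np.array([th, 0.0]) for th in np.linspace(0.0, 1.0, 5)]
print(smoothness_penalty(f, traj))
```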
Suyong Kim, Weiqi Ji, Sili Deng (2021)
Neural Ordinary Differential Equations (ODEs) are a promising approach to learning dynamic models from time-series data in science and engineering applications. This work aims at learning neural ODEs for stiff systems, which usually arise from chemical kinetic modeling in chemical and biological systems. We first show the challenges of learning neural ODEs on the classical stiff ODE system of Robertson's problem and propose techniques to mitigate the challenges associated with scale separations in stiff systems. We then present successful demonstrations on the stiff systems of Robertson's problem and an air pollution problem. The demonstrations show that the use of deep networks with rectified activations, proper scaling of the network outputs as well as loss functions, and stabilized gradient calculations are the key techniques enabling the learning of stiff neural ODEs. This success opens up possibilities of using neural ODEs in applications with widely varying time-scales, like chemical dynamics in energy conversion, environmental engineering, and the life sciences.
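For reference, here is the classical (non-neural) treatment of Robertson's problem in Python with an implicit solver; it illustrates the stiffness and scale separation the paper has to cope with. The rate constants are the standard ones for this benchmark; the neural-ODE training pipeline itself is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's stiff kinetics problem: rate constants span nine orders of
# magnitude, so explicit solvers need tiny steps; BDF (implicit) copes.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
             0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2**2,
             3e7 * y2**2]

sol = solve_ivp(robertson, (0.0, 1e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)

# Scale separation: y2 stays around 1e-5 while y1 and y3 are O(1); rescaling
# such outputs is one of the key tricks the paper uses for stiff neural ODEs.
print(sol.y[:, -1])
```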