We consider a generalization of Einstein's model of Brownian motion in which the key parameter, the time interval of free jumps, degenerates. This phenomenon manifests in two scenarios: a) flow of a highly dispersed fluid, such as a rarefied gas, and b) flow of a fluid far from the source of the flow, where the velocity of the flow is incomparably smaller than the gradient of the pressure. First, we show that both types of flow can be modeled using the Einstein paradigm. We investigate the question: what features does the particle flow exhibit if the time interval of the free jump is inversely proportional to the density of the fluid and its gradient? We show that in this scenario the flow exhibits a localization property, namely: if at some moment of time $t_{0}$ the gradient of the pressure or the pressure itself vanishes in a region, then for some time $T$ there is no flow in that region during the interval $[t_{0}, t_0+T]$. This is directly linked to Barenblatt's finite speed of propagation property for the degenerate equation. Our method of proof is very different from Barenblatt's and is based on the Tedeev--Vespri technique.
We employ a generalization of Einstein's random walk paradigm for diffusion to derive a class of multidimensional degenerate nonlinear parabolic equations in non-divergence form. Specifically, in these equations the diffusion coefficient can depend on both the dependent variable and its gradient, and it vanishes when either of them does. It is known that solutions of such degenerate equations can exhibit finite speed of propagation (the so-called localization property of solutions). We give a proof of this property using a De Giorgi--Ladyzhenskaya iteration procedure for non-divergence-form equations. A mapping theorem is then established to a divergence-form version of the governing equation for the case of one spatial dimension. Numerical results via a finite-difference scheme are used to illustrate the main mathematical results for this special case. For completeness, we also provide an explicit construction of the one-dimensional self-similar solution with finite speed of propagation, in the sense of Kompaneets--Zeldovich--Barenblatt. We thus show how the finite speed of propagation depends quantitatively on the model's parameters.
In this paper, we prove the Girsanov formula for $G$-Brownian motion without the non-degeneracy condition. The proof is based on a perturbation method in the nonlinear setting, constructing a product space of the $G$-expectation space and a linear space that contains a standard Brownian motion. Estimates for the exponential martingale of $G$-Brownian motion are important for our arguments.
Diffusive transport in many complex systems features a crossover between anomalous diffusion at short times and normal diffusion at long times. This behavior can be mathematically modeled by cutting off (tempering) beyond a mesoscopic correlation time the power-law correlations between the increments of fractional Brownian motion. Here, we investigate such tempered fractional Brownian motion confined to a finite interval by reflecting walls. Specifically, we explore how the tempering of the long-time correlations affects the strong accumulation and depletion of particles near reflecting boundaries recently discovered for untempered fractional Brownian motion. We find that exponential tempering introduces a characteristic size for the accumulation and depletion zones but does not affect the functional form of the probability density close to the wall. In contrast, power-law tempering leads to more complex behavior that differs between the superdiffusive and subdiffusive cases.
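The crossover from anomalous to normal diffusion under tempering can be made concrete with a small numerical sketch (the notation and parameters here are assumptions for illustration, not taken from the paper). Using the standard autocovariance of fractional Gaussian noise (the increments of fBm) and an exponentially tempered version of it, the mean-squared displacement of the summed increments scales as $n^{2H}$ without tempering but grows linearly once the lag exceeds the tempering time:

```python
import math

# Autocovariance of fractional Gaussian noise (fBm increments):
#   gamma(k) = (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}) / 2,
# and an exponentially tempered version gamma(k) * exp(-k / tau).
def fgn_cov(k, H):
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H)
                  + abs(k - 1) ** (2 * H))

def msd(n, cov):
    # Variance of the n-step sum of stationary increments with
    # autocovariance cov(k): n*cov(0) + 2*sum_{k<n} (n-k)*cov(k).
    return n * cov(0) + 2 * sum((n - k) * cov(k) for k in range(1, n))

H, tau = 0.75, 10.0  # superdiffusive case, mesoscopic tempering time
plain = lambda k: fgn_cov(k, H)
tempered = lambda k: fgn_cov(k, H) * math.exp(-k / tau)

# Doubling the time window: ~2^{2H} growth without tempering (anomalous),
# ~2 with exponential tempering (normal diffusion at long times).
ratio_plain = msd(200, plain) / msd(100, plain)
ratio_tempered = msd(200, tempered) / msd(100, tempered)
```

The untempered ratio reproduces the exact fBm scaling $\mathrm{MSD}(n) = n^{2H}$, while the tempered one approaches the normal-diffusion value 2 once $n \gg \tau$.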
The Transformer is the state-of-the-art model for many language and vision tasks. In this paper, we give a deep analysis of its multi-head self-attention (MHSA) module and find that: 1) each token is a random variable in a high-dimensional feature space; 2) after layer normalization, these variables are mapped to points on a hypersphere; 3) the update of these tokens is a Brownian motion. Brownian motion has special properties: its second-order term should not be ignored. We therefore present a new second-order optimizer (an iterative K-FAC algorithm) for the MHSA module. In short: all tokens are mapped to a high-dimensional hypersphere. The scaled dot-product attention $\mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d}}\right)$ is just the Markov transition matrix for the random walk on the sphere, and the deep learning process learns a proper kernel function that determines the positions of these tokens. The training process in the MHSA module corresponds to a Brownian motion worthy of further study.
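The claim that scaled dot-product attention yields a Markov transition matrix can be checked directly: each row of $\mathrm{softmax}(\mathbf{Q}\mathbf{K}^T/\sqrt{d})$ is nonnegative and sums to one, i.e. the matrix is row-stochastic. A toy sketch (hypothetical dimensions and random inputs, not the paper's implementation):

```python
import math
import random

# Toy scaled dot-product attention: softmax(Q K^T / sqrt(d)).
# The result is a row-stochastic matrix, i.e. a Markov transition
# matrix over the n tokens.
def attention_matrix(Q, K):
    d = len(Q[0])
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
              for q in Q]
    out = []
    for row in scores:
        m = max(row)                       # shift for numerical stability
        e = [math.exp(s - m) for s in row]
        z = sum(e)
        out.append([v / z for v in e])
    return out

random.seed(0)
n, d = 4, 8  # hypothetical token count and head dimension
Q = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
K = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
A = attention_matrix(Q, K)
# Each row of A is a probability distribution over the tokens.
```

Viewed this way, one attention step redistributes each token's "probability mass" over the other tokens, which is what licenses the random-walk interpretation above.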
The theory of quantum Brownian motion describes the properties of a large class of open quantum systems. Nonetheless, its description in terms of a Born--Markov master equation, widely used in the literature, is known to violate the positivity of the density operator at very low temperatures. We study an extension of existing models, leading to an equation in Lindblad form, which is free of this problem. We study the dynamics of the model, including the detailed properties of its stationary solution, for both constant and position-dependent coupling of the Brownian particle to the bath, focusing in particular on the correlations and the squeezing of the probability distribution induced by the environment.