We quantize a multidimensional SDE (in the Stratonovich sense) by solving the related system of ODEs in which the $d$-dimensional Brownian motion has been replaced by the components of functional stationary quantizers. We make a connection with rough path theory to show that the solutions of the quantized ODEs converge toward the solution of the SDE. On our way to this result we provide convergence rates of optimal quantizers toward the Brownian motion for the $\frac{1}{q}$-Hölder distance, $q>2$, in $L^p(P)$.
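For concreteness, a minimal sketch of this construction (the coefficient notation $b$, $\sigma$ and the quantizer $\widehat{W}^{(N)}$ are assumed here for illustration, not taken from the paper): the Stratonovich SDE
$$ dX_t = b(X_t)\,dt + \sigma(X_t)\circ dW_t $$
is replaced, pathwise, by the system of ODEs
$$ d\widetilde{x}_t = b(\widetilde{x}_t)\,dt + \sigma(\widetilde{x}_t)\,d\widehat{W}^{(N)}_t, $$
where $\widehat{W}^{(N)}$ is an $N$-level stationary functional quantizer of $W$; since the quantizer paths are of finite variation (e.g., finite Karhunen-Loève expansions), each quantized equation is a genuine ODE solvable path by path.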
Generative adversarial networks (GANs) have enjoyed tremendous empirical success, and research interest in the theoretical understanding of the GANs training process is growing rapidly, especially regarding its evolution and convergence analysis. This paper establishes approximations, with precise error bound analysis, for the training of GANs under stochastic gradient algorithms (SGAs). The approximations are in the form of coupled stochastic differential equations (SDEs). The analysis of the SDEs and the associated invariant measures yields conditions for the convergence of GANs training. Further analysis of the invariant measure for the coupled SDEs gives rise to fluctuation-dissipation relations (FDRs) for GANs, revealing the trade-off of the loss landscape between the generator and the discriminator and providing guidance for learning rate scheduling.
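As an illustration of the kind of object involved, a plausible schematic of coupled SDEs for the generator parameters $\theta$ and discriminator parameters $\omega$ (the notation $g$, $\Sigma$, $\eta$ is assumed here; the precise form in the paper may differ):
$$ d\theta_t = -\,g_\theta(\theta_t,\omega_t)\,dt + \sqrt{\eta}\;\Sigma_\theta(\theta_t,\omega_t)^{1/2}\,dW^{\theta}_t, \qquad d\omega_t = g_\omega(\theta_t,\omega_t)\,dt + \sqrt{\eta}\;\Sigma_\omega(\theta_t,\omega_t)^{1/2}\,dW^{\omega}_t, $$
where $g_\theta$, $g_\omega$ are the expected minimax-loss gradients, $\eta$ is the learning rate, and $\Sigma_\theta$, $\Sigma_\omega$ are the minibatch gradient-noise covariances; an FDR then ties the stationary fluctuations of $(\theta_t,\omega_t)$ to this noise, analogously to fluctuation-dissipation relations in statistical physics.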
In this paper, the strong solutions $(X, L)$ of multidimensional stochastic differential equations with reflecting boundary and possibly anticipating initial random variables are established. The key is to obtain a substitution formula for Stratonovich integrals via a uniform convergence of the corresponding Riemann sums and to prove the continuity of functionals of $(X, L)$.
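In a standard formulation of such reflection problems (a sketch, with the domain $D$, inward normal $n$, and coefficients $b$, $\sigma$ assumed here for illustration), the pair $(X, L)$ solves a Skorokhod-type system:
$$ X_t = X_0 + \int_0^t b(X_s)\,ds + \int_0^t \sigma(X_s)\circ dW_s + L_t, \qquad X_t \in \overline{D}, $$
$$ L_t = \int_0^t n(X_s)\,d|L|_s, \qquad |L|_t = \int_0^t \mathbf{1}_{\{X_s \in \partial D\}}\,d|L|_s, $$
so that $L$ acts only when $X$ touches the boundary. When $X_0$ is anticipating, the integrands are no longer adapted, which is precisely why a substitution formula for the Stratonovich integral, obtained through uniform convergence of Riemann sums, is needed.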
Adversarial training has gained great popularity as one of the most effective defenses for deep neural networks against adversarial perturbations of data points. Consequently, research interest has grown in understanding the convergence and robustness of adversarial training. This paper considers the min-max game of adversarial training by alternating stochastic gradient descent, and approximates the training process with a continuous-time stochastic differential equation (SDE). In particular, the error bound and convergence analysis are established. This SDE framework allows direct comparison between adversarial training and stochastic gradient descent, and confirms analytically the robustness of adversarial training from a (new) gradient-flow viewpoint. This analysis is then corroborated via numerical studies. To demonstrate the versatility of this SDE framework for algorithm design and parameter tuning, a stochastic control problem is formulated for learning rate adjustment, and the advantage of an adaptive learning rate over a fixed learning rate, in terms of training loss, is demonstrated through numerical experiments.
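For reference, the min-max problem being alternated over, together with a schematic of its SDE approximation (notation $f_\theta$, $\ell$, $\varepsilon$, $\eta$, $\Sigma$ assumed here for illustration):
$$ \min_\theta \; \max_{\|\delta\|\le \varepsilon} \; \mathbb{E}_{(x,y)}\big[\ell\big(f_\theta(x+\delta),\,y\big)\big], $$
with ascent steps in the perturbation $\delta$ alternating with descent steps in the weights $\theta$; the resulting iterates are then approximated by an SDE of the form $d\theta_t = -\nabla_\theta \widetilde{L}(\theta_t)\,dt + \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,dW_t$, where $\widetilde{L}$ is the adversarially perturbed loss, $\eta$ the learning rate, and $\Sigma$ the gradient-noise covariance, making the comparison with plain stochastic gradient descent (the same equation with the unperturbed loss) direct.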
In this paper we state and prove a central limit theorem for the finite-dimensional laws of the quadratic variations process of certain fractional Brownian sheets. The main tool of this article is a method developed by Nourdin and Nualart based on the Malliavin calculus.
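Schematically (with a rectangular-increment convention and Hurst indices $(H_1, H_2)$ assumed here for illustration), the quadratic variations in question are built from double increments
$$ \Delta_{i,j}B = B_{\frac{i+1}{n},\frac{j+1}{n}} - B_{\frac{i}{n},\frac{j+1}{n}} - B_{\frac{i+1}{n},\frac{j}{n}} + B_{\frac{i}{n},\frac{j}{n}}, \qquad V_n(s,t) = \sum_{\frac{i}{n}\le s}\;\sum_{\frac{j}{n}\le t} \Big( n^{2H_1+2H_2}\,(\Delta_{i,j}B)^2 - \mathbb{E}\big[n^{2H_1+2H_2}\,(\Delta_{i,j}B)^2\big] \Big), $$
and the claim is that the finite-dimensional laws of $c_n V_n$ converge to those of a Gaussian process for suitable normalizing constants $c_n$ depending on $(H_1, H_2)$.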
In this paper, an optimal switching problem is proposed for one-dimensional reflected backward stochastic differential equations (RBSDEs, for short) in which the generators, the terminal values, and the barriers are all switched with positive costs. The value process is characterized by a system of multi-dimensional RBSDEs with oblique reflection, whose existence and uniqueness are by no means trivial and are therefore carefully examined. Existence is shown by both the Picard iteration method and the penalization method, under somewhat different conditions. Uniqueness is proved by representing the solution either as the value process of our optimal switching problem for one-dimensional RBSDEs, or as the equilibrium value process of a stochastic differential game of switching and stopping. Finally, the switched RBSDE is interpreted as a real option.
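A sketch of the obliquely reflected system characterizing the value process (superscript $i$ indexes the switching regimes; the costs $k_{ij} > 0$ and the schematic form below are assumptions for illustration, and the switched lower barriers of the underlying one-dimensional RBSDEs are omitted):
$$ Y^i_t = \xi_i + \int_t^T g_i(s, Y^i_s, Z^i_s)\,ds + K^i_T - K^i_t - \int_t^T Z^i_s\,dW_s, $$
$$ Y^i_t \ge \max_{j \ne i}\big(Y^j_t - k_{ij}\big), \qquad \int_0^T \Big(Y^i_t - \max_{j \ne i}\big(Y^j_t - k_{ij}\big)\Big)\,dK^i_t = 0, $$
with $K^i$ increasing; in the penalization approach the reflection term is replaced by $m \int_t^T \big(Y^{i,m}_s - \max_{j \ne i}(Y^{j,m}_s - k_{ij})\big)^{-}\,ds$ and one passes to the limit $m \to \infty$.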