We introduce a novel approach to obtaining mathematically rigorous results on the global dynamics of ordinary differential equations. Motivated by models of regulatory networks, we construct a state transition graph from a piecewise affine ordinary differential equation. We use efficient graph algorithms to compute an associated Morse graph that codifies the recurrent and gradient-like dynamics. We prove that for 2-dimensional systems, the Morse graph defines a Morse decomposition for the dynamics of any smooth differential equation that is sufficiently close to the original piecewise affine ordinary differential equation.
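The Morse graph here is, concretely, the condensation of the state transition graph: recurrent dynamics is captured by the nontrivial strongly connected components, and reachability between them gives the gradient-like order. Below is a minimal sketch of that graph computation, using a hypothetical toy state transition graph rather than one derived from an actual piecewise affine system:

```python
import networkx as nx

# Hypothetical state transition graph: one node per rectangular cell of
# phase space, edges following the direction of the flow across cell faces.
stg = nx.DiGraph([
    (0, 1), (1, 2), (2, 1),   # cells 1 and 2 form a cycle: a recurrent set
    (2, 3), (3, 4), (4, 4),   # cell 4 maps to itself: an equilibrium cell
])

# Morse sets are the nontrivial SCCs (a cycle, or a cell with a self-edge).
sccs = list(nx.strongly_connected_components(stg))
def nontrivial(c):
    return len(c) > 1 or any(stg.has_edge(v, v) for v in c)

# The Morse graph is the condensation restricted to the nontrivial SCCs,
# with an edge whenever one Morse set can reach another.
cond = nx.condensation(stg, scc=sccs)
keep = [n for n, data in cond.nodes(data=True) if nontrivial(data["members"])]
morse_graph = nx.transitive_closure_dag(cond).subgraph(keep)

for n in morse_graph.nodes():
    print(n, sorted(cond.nodes[n]["members"]))   # the two Morse sets
print(list(morse_graph.edges()))                 # cycle component -> equilibrium
```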
In this paper we investigate equilibria of continuous differential equation models of network dynamics. The motivation comes from gene regulatory networks, where each directed edge represents either down- or up-regulation and is modeled by a sigmoidal nonlinear function. We show that the existence and stability of equilibria of a sigmoidal system are determined by a combinatorial analysis of the limiting switching system with piecewise constant nonlinearities. In addition, we describe a local decomposition of a switching system into a product of simpler cyclic feedback systems, where the cycles in each decomposition correspond to a particular subset of network loops.
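For Glass-type switching systems this combinatorial analysis has a concrete form: on each rectangular domain the dynamics is $\dot{x} = \Lambda - \gamma x$ with $\Lambda$ constant, trajectories head toward the focal point $\Lambda/\gamma$, and a stable regular equilibrium exists exactly when a domain contains its own focal point. A minimal sketch for a hypothetical two-gene toggle switch with one threshold per gene; the regulation logic and parameter values are illustrative, not taken from the paper:

```python
import itertools
import numpy as np

theta = np.array([1.0, 1.0])    # expression thresholds
gamma = np.array([1.0, 1.0])    # decay rates

def production(state):
    """Piecewise constant production Lambda on the domain labeled by
    state[i] = 1 if x_i > theta_i, else 0. Mutual repression (toggle switch)."""
    lam0 = 0.5 if state[1] == 1 else 2.0   # gene 1 represses gene 0
    lam1 = 0.5 if state[0] == 1 else 2.0   # gene 0 represses gene 1
    return np.array([lam0, lam1])

# A regular equilibrium exists iff a domain's focal point Lambda/gamma
# lies on the same side of every threshold as the domain itself.
for state in itertools.product([0, 1], repeat=2):
    focal = production(state) / gamma
    if all((focal[i] > theta[i]) == (state[i] == 1) for i in range(2)):
        print(f"domain {state}: stable equilibrium at {focal}")
# Prints the two stable states of the toggle switch: (0, 1) and (1, 0).
```

The single positive feedback loop of this toggle switch is exactly the kind of cyclic feedback system appearing in the local decompositions described above.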
We show that every compact, unitary two-dimensional CFT with an abelian conserved current has vanishing twist gap for charged primary fields with respect to the $\mathfrak{u}(1)\times$Virasoro algebra. This means that either the chiral algebra is enhanced by a charged primary field with zero twist, or there is an infinite family of charged primary fields that accumulate to zero twist.
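For reference, since the abstract uses the notion without definition: the twist of a primary with weights $(h,\bar h)$, dimension $\Delta = h+\bar h$ and spin $J = h-\bar h$, is
\[
\tau = \Delta - |J| = 2\min(h,\bar h),
\]
so a vanishing twist gap means there are charged primaries with $\min(h,\bar h)$ arbitrarily close to (or exactly) zero.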
In this paper, we study the global dynamics of a two-strain flu model with a single-strain vaccine and a general incidence rate. Four equilibrium points are obtained, and the global dynamics of the model are completely determined via suitable Lyapunov functions. We illustrate our results with numerical simulations.
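The specific Lyapunov functions are not given in the abstract; for epidemic models with general incidence the standard candidates are built from Volterra-type terms, e.g. for a compartment with endemic value $x^\ast > 0$,
\[
V(x) = x - x^\ast - x^\ast \ln\frac{x}{x^\ast} \ge 0,
\]
with equality only at $x = x^\ast$; a weighted sum of such terms with $\dot V \le 0$ along trajectories yields global stability via LaSalle's invariance principle.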
We study the problem of preservation of canard connections for time discretized fast-slow systems with canard fold points. In order to ensure such preservation, certain favorable structure preserving properties of the discretization scheme are required. Conventional schemes do not possess such properties. We perform a detailed analysis for an unconventional discretization scheme due to Kahan. The analysis uses the blow-up method to deal with the loss of normal hyperbolicity at the canard point. We show that the structure preserving properties of the Kahan discretization imply a similar result as in continuous time, guaranteeing the occurrence of canard connections between attracting and repelling slow manifolds upon variation of a bifurcation parameter. The proof is based on a non-canonical Melnikov computation along an invariant separating curve, which organizes the dynamics of the map similarly to the ODE problem.
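The abstract does not reproduce the scheme, so here is a minimal sketch under an assumed setting: take the planar fold/canard normal form $\varepsilon\dot{x} = -y + x^2$, $\dot{y} = x - \lambda$. Kahan's rule replaces the quadratic term $x^2$ by the symmetric product $x_n x_{n+1}$ and averages the linear terms, so each step is linearly implicit and reduces to a $2\times 2$ solve:

```python
import numpy as np

def kahan_step(x, y, h, eps, lam):
    """One Kahan step for eps*x' = -y + x**2, y' = x - lam.
    Quadratic term: x**2 -> x_n * x_{n+1}; linear terms: midpoint averages."""
    A = np.array([[eps - h * x, h / 2.0],
                  [-h / 2.0,    1.0]])
    b = np.array([eps * x - (h / 2.0) * y,
                  y + (h / 2.0) * x - h * lam])
    return np.linalg.solve(A, b)   # (x_{n+1}, y_{n+1})

# Follow an orbit starting on the attracting slow manifold y = x**2, x < 0.
eps, lam, h = 0.01, 0.0, 1e-3
z = np.array([-0.5, 0.25])
for _ in range(2000):
    z = kahan_step(z[0], z[1], h, eps, lam)
print(z)
```

Because each step is linearly implicit, the resulting map is birational, one of the structural features that conventional explicit schemes lack.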
Deep learning models evolve through training to learn the manifold on which the data lies and thereby satisfy an objective. It is well known that different training runs reach different final states that produce inconsistent predictions on the same test points. This calls for techniques that empirically quantify differences between training trajectories and highlight problematic regions. While much attention is paid to what models learn, the question of how a model learns is less studied beyond theoretical landscape characterizations and local geometric approximations near optima. Here, we present a toolkit for the Dynamical Organization Of Deep Learning Loss Landscapes, or DOODL3. DOODL3 formulates the training of neural networks as a dynamical system, analyzes the learning process, and presents an interpretable global view of trajectories in the loss landscape. Our approach uses coarse topological summaries to capture the fine-grained geometry of the landscape and to guard against unstable or drawn-out training. Overall, our analysis provides an empirical framework that extracts the global dynamics of a model and uses that information to guide the training of neural networks.
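The abstract stays at a high level, so the following is only one illustrative reading, not the authors' construction: coarse-grain a recorded loss trajectory into discrete states and inspect the resulting transition graph, where recurrent components flag oscillatory or stalled phases of training:

```python
import networkx as nx
import numpy as np

def coarse_transition_graph(losses, n_bins=20):
    """Coarse-grain a loss trajectory into bins and record transitions.
    (Hypothetical illustration, not the DOODL3 implementation.)"""
    edges = np.linspace(losses.min(), losses.max(), n_bins)
    bins = np.digitize(losses, edges)
    g = nx.DiGraph()
    g.add_edges_from(zip(bins[:-1], bins[1:]))
    return g

# Synthetic loss curve: noisy descent that stalls in a plateau.
rng = np.random.default_rng(0)
t = np.arange(500)
losses = np.exp(-t / 100) + 0.05 * rng.standard_normal(500) + 0.1

g = coarse_transition_graph(losses)
recurrent = [c for c in nx.strongly_connected_components(g)
             if len(c) > 1 or any(g.has_edge(v, v) for v in c)]
print("recurrent (stalled/oscillatory) loss bins:", recurrent)
```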