We survey an area of recent development relating dynamics to theoretical computer science. We discuss the theoretical limits of the simulation and computation of interesting quantities in dynamical systems. We focus on central objects of the theory of dynamics, such as invariant measures and invariant sets, showing that even though they can be computed with arbitrary precision in many interesting cases, there exist cases in which they cannot. We also explain how it is possible to compute the speed of convergence of ergodic averages (when the system is known exactly) and how this entails the computation of arbitrarily good approximations of points of the space having typical statistical behaviour (a sort of constructive version of the pointwise ergodic theorem).
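To make the notion of computing ergodic averages concrete, here is a minimal Python sketch (an illustration only, not the survey's algorithm): it computes a Birkhoff time average along an orbit of an irrational circle rotation, a uniquely ergodic system whose time averages converge to the space average for every starting point.

```python
import math

def birkhoff_average(T, f, x0, n):
    """Time average (1/n) * sum_{k<n} f(T^k(x0)) along the orbit of x0."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

alpha = (math.sqrt(5) - 1) / 2           # golden rotation number (irrational)
T = lambda x: (x + alpha) % 1.0          # rotation of the circle [0, 1)
f = lambda x: math.cos(2 * math.pi * x)  # observable with space average 0
avg = birkhoff_average(T, f, 0.1, 100_000)  # close to 0 for large n
```

For this particular rotation the convergence speed can even be bounded explicitly, which is the kind of quantitative information the abstract refers to.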
Toric dynamical systems are known as complex balancing mass action systems in the mathematical chemistry literature, where many of their remarkable properties have been established. They include as special cases all deficiency zero systems and all detailed balancing systems. One feature is that the steady state locus of a toric dynamical system is a toric variety, which has a unique point within each invariant polyhedron. We develop the basic theory of toric dynamical systems in the context of computational algebraic geometry and show that the associated moduli space is also a toric variety. It is conjectured that the complex balancing state is a global attractor. We prove this for detailed balancing systems whose invariant polyhedron is two-dimensional and bounded.
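As a toy illustration of a complex balancing steady state attracting trajectories inside an invariant polyhedron (a minimal sketch, not one of the systems treated in the paper), consider the reversible reaction A <-> B under mass-action kinetics; the conservation law a + b = const cuts out the invariant polyhedron, and the trajectory converges to the unique steady state inside it:

```python
# Reversible reaction A <-> B with mass-action rates k1 (forward), k2 (backward):
#   da/dt = -k1*a + k2*b,   db/dt = +k1*a - k2*b
# The conservation law a + b = const defines the invariant polyhedron.
k1, k2 = 2.0, 1.0
a, b = 1.0, 0.0          # start at a vertex of the polyhedron a + b = 1
dt = 0.001
for _ in range(20_000):  # forward Euler up to time t = 20
    r = -k1 * a + k2 * b
    a += dt * r
    b -= dt * r
# Unique steady state in this polyhedron: k1*a = k2*b with a + b = 1,
# i.e. a = 1/3, b = 2/3.
```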
We study two coupled discrete-time equations with different (asynchronous) periodic time scales. The coupling is of sample-and-hold type, i.e., the state of each equation is sampled at its own update times and held until it is read as an input at the other equation's next update time. We construct an interpolating two-dimensional complex-valued system on the union of the two time scales and an extrapolating four-dimensional system on the intersection of the two time scales. We discuss stability through several results, examples, and counterexamples in various frameworks, showing that asynchronicity can have a significant impact on the dynamical properties.
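A minimal sketch of such a sample-and-hold coupling (a toy model with hypothetical scalar linear updates, not the construction of the paper): each equation updates on its own periodic time scale and reads the most recently held sample of the other.

```python
def sample_and_hold(a, b, c, d, p, q, steps, x0=1.0, y0=1.0):
    """x updates every p steps via x <- a*x + b*held_y;
    y updates every q steps via y <- c*y + d*held_x.
    Each state is sampled at its own update times and held in between."""
    x, y = x0, y0
    held_x, held_y = x0, y0
    for t in range(1, steps + 1):
        if t % p == 0:               # update time of the first equation
            x = a * x + b * held_y
            held_x = x               # sample and hold the new state
        if t % q == 0:               # update time of the second equation
            y = c * y + d * held_x   # at common times, reads the fresh sample
            held_y = y
    return x, y

# Both subsystems are contracting here, so the coupled state decays to 0
# despite the asynchronous (p = 2 vs q = 3) update times.
x, y = sample_and_hold(a=0.5, b=0.1, c=0.5, d=0.1, p=2, q=3, steps=200)
```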
We give an algorithm, based on the $\phi$-expansion of Parry, to compute the topological entropy of a class of shift spaces. The idea is to solve an inverse problem for the dynamical systems $\beta x+\alpha \bmod 1$. The first part is an exposition of the $\phi$-expansion applied to piecewise monotone dynamical systems. We formulate necessary and sufficient conditions for the validity of the $\phi$-expansion, which are different from those in Parry's paper.
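For concreteness, here is a small Python sketch (illustrative, not the paper's algorithm) of the digit sequence produced by iterating $T(x)=\beta x+\alpha \bmod 1$; for the golden mean $\beta$ and $\alpha=0$ the resulting sequences avoid the word 11, and the topological entropy of the associated shift is $\log\beta$:

```python
import math

def expansion_digits(x, beta, alpha, n):
    """First n digits d_k = floor(beta*x_k + alpha) of the expansion of x
    under T(x) = beta*x + alpha mod 1."""
    digits = []
    for _ in range(n):
        y = beta * x + alpha
        d = math.floor(y)
        digits.append(d)
        x = y - d
    return digits

beta = (1 + math.sqrt(5)) / 2                # golden mean; entropy log(beta)
digits = expansion_digits(0.3, beta, 0.0, 50)  # digits in {0,1}, no "11"
```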
We propose a new framework for the study of continuous-time dynamical systems on networks. We view such dynamical systems as collections of interacting control systems. We show that a class of maps between graphs called graph fibrations gives rise to maps between dynamical systems on networks. This allows us to produce conjugacies between dynamical systems out of combinatorial data. In particular, we show that surjective graph fibrations lead to synchrony subspaces in networks. Injective graph fibrations, on the other hand, give rise to surjective maps from large dynamical systems to smaller ones. One can view these surjections as a kind of fast/slow variable decomposition, or as abstractions in the computer-science sense of the word.
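A minimal sketch of the synchrony phenomenon (a toy three-node discrete-time network with hypothetical update maps, not an example from the paper): collapsing the two identically driven nodes is a graph fibration, and the corresponding polydiagonal {x2 = x3} is invariant under the dynamics.

```python
def step(state, f, g):
    """One step of a 3-node network: node 1 drives nodes 2 and 3 identically."""
    x1, x2, x3 = state
    return (f(x1), g(x2, x1), g(x3, x1))

f = lambda x: 0.5 * x + 0.1     # internal dynamics of the driving node
g = lambda x, u: 0.5 * x + u    # identical response maps for nodes 2 and 3

s = (1.0, 0.3, 0.3)             # start on the synchrony subspace x2 = x3
for _ in range(10):
    s = step(s, f, g)
# nodes 2 and 3 stay exactly synchronized along the whole orbit
```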
Neural Networks (NNs) have been identified as a potentially powerful tool in the study of complex dynamical systems. A good example is the NN differential equation (DE) solver, which provides closed-form, differentiable, functional approximations for the evolution of a wide variety of dynamical systems. A major disadvantage of such NN solvers can be the amount of computational resources needed to achieve accuracy comparable to that of existing numerical solvers. We present new strategies for existing dynamical system NN DE solvers that make efficient use of the \textit{learnt} information to speed up their training process, while still pursuing a completely unsupervised approach. We establish a fundamental connection between NN theory and dynamical systems theory via Koopman Operator Theory (KOT), by showing that the usual training processes for Neural Nets are fertile ground for identifying multiple Koopman operators of interest. We end by illuminating certain applications that KOT might have for NNs in general.
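One standard way to identify a finite-dimensional Koopman operator approximation from trajectory data is dynamic mode decomposition (DMD); the sketch below (an illustration of the general idea, not the paper's method) recovers the matrix of a linear system from its snapshots by a least-squares fit:

```python
import numpy as np

# Snapshots of a linear system x_{k+1} = A @ x_k; for linear dynamics the
# Koopman operator restricted to linear observables is A itself, and DMD
# recovers it as the least-squares map between shifted snapshot matrices.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
X = np.zeros((2, 51))
X[:, 0] = [1.0, 1.0]
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])  # DMD estimate of A
```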