This paper presents a system identification technique for systems whose output is asymptotically periodic under constant inputs. The model used for system identification is a discrete-time Lure model consisting of asymptotically stable linear dynamics, a time delay, a washout filter, and a static nonlinear feedback mapping. For all sufficiently large scalings of the loop transfer function, these components cause divergence at small signal amplitudes and decay at large signal amplitudes, thus producing an asymptotically oscillatory output. A bias-generation mechanism provides a nonzero offset in the oscillation. The contribution of the paper is a least-squares technique that estimates the coefficients of the linear model as well as the parameterization of the continuous, piecewise-linear feedback mapping.
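Once the piecewise-linear feedback is expanded in a fixed hinge basis, the estimation problem is linear in all unknowns and reduces to ordinary least squares. The following sketch illustrates this; the model orders, delay, knot locations, and saturation nonlinearity are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

# Simulate a simple Lure-type recursion (illustrative, not the paper's model):
#   y[k] = a1*y[k-1] + a2*y[k-2] + g*sat(y[k-3]),  sat = clip to [-1, 1]
a1, a2, g = 0.9, -0.81, 0.5
y = [2.0, -1.5, 1.2]
for k in range(300):
    y.append(a1 * y[-1] + a2 * y[-2] + g * np.clip(y[-3], -1.0, 1.0))
y = np.array(y)

# Hinge basis for a continuous piecewise-linear map with knots at +/-1:
#   sat(v) = v - relu(v - 1) + relu(-v - 1)
relu = lambda v: np.maximum(v, 0.0)
v = y[:-3]                      # delayed signal entering the nonlinearity
Phi = np.column_stack([y[2:-1], y[1:-2], v, relu(v - 1.0), relu(-v - 1.0)])
target = y[3:]

# Joint least-squares estimate of the linear coefficients and the
# piecewise-linear feedback parameterization
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
# theta recovers [a1, a2, g, -g, g], since sat lies exactly in the hinge basis
```

The key design choice is that the knots of the piecewise-linear map are fixed in advance, so the slopes and the linear-model coefficients enter the regression jointly and linearly.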
Many nonlinear dynamical systems can be written as Lure systems, which are described by a linear time-invariant system interconnected with a diagonal static sector-bounded nonlinearity. Sufficient conditions are derived for the global asymptotic stability analysis of discrete-time Lure systems in which the nonlinearities have restricted slope and/or are odd, which is the usual case in real applications. A Lure-Postnikov-type Lyapunov function is proposed that is used to derive sufficient analysis conditions in terms of linear matrix inequalities (LMIs). The derived stability criteria are provably less conservative than criteria published in the literature, with numerical examples indicating that conservatism can be reduced by orders of magnitude.
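The LMI conditions described here are checked with a semidefinite-programming solver in practice. As a dependency-free sketch of the underlying Lyapunov machinery (the matrix A is an arbitrary example, not from the paper), Schur stability of the linear block can be certified by solving the discrete-time Lyapunov equation, which is the equality form of the simplest such LMI:

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve P - A.T @ P @ A = Q by vectorization:
    (I - kron(A.T, A.T)) vec(P) = vec(Q), with column-major vec."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A.T, A.T)
    vecP = np.linalg.solve(M, Q.flatten(order='F'))
    return vecP.reshape((n, n), order='F')

A = np.array([[0.5, 0.2],
              [-0.1, 0.7]])    # example Schur-stable matrix (assumption)
P = discrete_lyapunov(A, np.eye(2))

# A is Schur stable iff the solution P for Q > 0 is positive definite
stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```

The Lure-Postnikov conditions in the paper augment this quadratic certificate with integral terms of the nonlinearity, which is what reduces conservatism relative to a plain quadratic Lyapunov function.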
In this paper, we discuss several concepts of the modern theory of discrete integrable systems, including:
- time discretization based on the notion of Bäcklund transformation;
- symplectic realizations of multi-Hamiltonian structures;
- interrelations between discrete 1D systems and lattice 2D systems;
- multi-dimensional consistency as integrability of discrete systems;
- interrelations between integrable systems of quad-equations and integrable systems of Laplace type;
- pluri-Lagrangian structure as integrability of discrete variational systems.
All these concepts are illustrated by the discrete-time Toda lattices and their relativistic analogs.
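As a concrete numerical illustration of time discretization (a sketch added here, not taken from the text): one form of the discrete-time Toda flow coincides with the progressive qd algorithm, and a qd step is a similarity transformation of a tridiagonal Lax matrix, so its spectrum is conserved:

```python
import numpy as np

def qd_step(q, e):
    """One step of the discrete-time Toda flow (progressive qd rules):
    q'_k = q_k + e_k - e'_{k-1},  e'_k = e_k * q_{k+1} / q'_k,
    with boundary values e_0 = e'_0 = e_n = 0."""
    n = len(q)
    q_new = np.empty(n)
    e_new = np.empty(n - 1)
    prev_e = 0.0
    for k in range(n):
        q_new[k] = q[k] + (e[k] if k < n - 1 else 0.0) - prev_e
        if k < n - 1:
            e_new[k] = e[k] * q[k + 1] / q_new[k]
            prev_e = e_new[k]
    return q_new, e_new

def lax_matrix(q, e):
    """Tridiagonal Lax matrix T = L R with diagonal q_k + e_{k-1},
    superdiagonal 1, and subdiagonal e_k * q_k."""
    n = len(q)
    T = np.diag(np.concatenate(([q[0]], q[1:] + e)))
    for k in range(n - 1):
        T[k, k + 1] = 1.0
        T[k + 1, k] = e[k] * q[k]
    return T
```

Since T = LR and the updated variables factor T' = RL = R T R^{-1}, the eigenvalues of the Lax matrix are invariants of the discrete flow, which can be checked numerically.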
The visibility of the two-photon interference in the Franson interferometer serves as a measure of the energy-time entanglement of the photons. We propose to control the visibility of the interference in the second-order coherence function by implementing a coherent time-delayed feedback mechanism. Simulating the non-Markovian dynamics within the matrix product state framework, we find that the visibility for two photons emitted from a three-level system (3LS) in ladder configuration can be enhanced significantly for a wide range of parameters by slowing down the decay of the upper level of the 3LS.
Trajectory optimization considers the problem of deciding how to control a dynamical system to move along a trajectory which minimizes some cost function. Differential Dynamic Programming (DDP) is an optimal control method which utilizes a second-order approximation of the problem to find the control. It is fast enough to allow real-time control and has been shown to work well for trajectory optimization in robotic systems. Here we extend classic DDP to systems with multiple time-delays in the state. Being able to find optimal trajectories for time-delayed systems with DDP opens up the possibility to use richer models for system identification and control, including recurrent neural networks with multiple timesteps in the state. We demonstrate the algorithm on a two-tank continuous stirred tank reactor. We also demonstrate the algorithm on a recurrent neural network trained to model an inverted pendulum with position information only.
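A time-delayed system can always be rewritten in Markov form by stacking delayed states, which is the baseline construction that a delay-aware DDP formulation competes with. A minimal sketch (function names and dynamics are illustrative, not from the paper):

```python
import numpy as np

def augment(f, tau):
    """Lift x[k+1] = f(x[k], x[k-tau], u[k]) into a Markov system
    z[k+1] = F(z[k], u[k]) with z[k] = (x[k], x[k-1], ..., x[k-tau])."""
    def F(z, u):
        x_next = f(z[0], z[tau], u)
        return np.concatenate([[x_next], z[:-1]])  # shift history down
    return F

# Scalar example with delay tau = 2 (dynamics chosen for illustration)
f = lambda x, x_del, u: 0.8 * x - 0.3 * x_del + u
F = augment(f, 2)

# Roll out the delayed recursion directly and via the augmented state
us = [0.1, -0.2, 0.05, 0.0, 0.3]
xs = [0.2, 0.5, 1.0]            # x[0], x[1], x[2], oldest first
z = np.array([1.0, 0.5, 0.2])   # z = (x[2], x[1], x[0]), newest first
for u in us:
    xs.append(f(xs[-1], xs[-3], u))
    z = F(z, u)
# z[0] now equals the directly simulated xs[-1]
```

Standard DDP can be run on F, but the augmented state grows with the delay length; handling the delays inside the DDP recursions directly avoids inflating the state in this way.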
This paper introduces an algorithm for discovering implicit and delayed causal relations between events observed by a robot at arbitrary times, with the objective of improving data-efficiency and interpretability of model-based reinforcement learning (RL) techniques. The proposed algorithm initially predicts observations with the Markov assumption, and incrementally introduces new hidden variables to explain and reduce the stochasticity of the observations. The hidden variables are memory units that keep track of pertinent past events. Such events are systematically identified by their information gains. The learned transition and reward models are then used for planning. Experiments on simulated and real robotic tasks show that this method significantly improves over current RL techniques.
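The information gain used above to identify pertinent past events can be illustrated as the entropy reduction in the observation distribution when conditioning on a candidate event. This small sketch (toy counts, not data from the paper) computes it from a joint count table:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(counts):
    """IG = H(O) - H(O | E) for counts[e, o] of (event, observation) pairs."""
    p = counts / counts.sum()
    p_e = p.sum(axis=1)                       # marginal over events
    h_o = entropy(p.sum(axis=0))              # H(O)
    h_o_given_e = sum(p_e[i] * entropy(p[i] / p_e[i])
                      for i in range(len(p_e)) if p_e[i] > 0)
    return h_o - h_o_given_e

# A past event that perfectly predicts a binary observation yields 1 bit;
# an irrelevant event yields 0 bits:
perfect = np.array([[5.0, 0.0], [0.0, 5.0]])
irrelevant = np.array([[25.0, 25.0], [25.0, 25.0]])
```

Ranking candidate past events by this quantity is one natural way to decide which memory units to introduce: a hidden variable is only worth adding if conditioning on it measurably reduces the stochasticity of the predicted observations.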