A weak form of the Circle Criterion for Lur'e systems is stated. The result allows one to prove global boundedness of all system solutions. Moreover, such a result can be employed to enlarge the set of nonlinearities for which the standard Circle Criterion can guarantee absolute stability.
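For orientation, and as a reminder of the standard textbook setting rather than the exact class treated above (the abstract does not spell out the system equations), a Lur'e system is a linear time-invariant plant in feedback with a memoryless nonlinearity,
\[
\dot{x} = A x + B u, \qquad y = C x, \qquad u = -\varphi(t, y),
\]
and the classical Circle Criterion guarantees absolute stability when $\varphi$ satisfies the sector condition
\[
\bigl(\varphi(t,y) - k_1 y\bigr)\bigl(\varphi(t,y) - k_2 y\bigr) \le 0 \quad \text{for all } t \text{ and } y,
\]
together with a frequency-domain condition on the transfer function $G(s) = C(sI - A)^{-1}B$ relative to the disk determined by $k_1$ and $k_2$.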
A recent model of Ariel et al. [1] for explaining the observation of Lévy walks in swarming bacteria suggests that self-propelled, elongated particles in a periodic array of regular vortices perform a super-diffusion that is consistent with Lévy walks. The equations of motion, which are reversible in time but not volume preserving, demonstrate a new route to Lévy walking in chaotic systems. Here, the dynamics of the model is studied both analytically and numerically. It is shown that the apparent super-diffusion is due to sticking of trajectories to elliptic islands, regions of quasi-periodic orbits reminiscent of those seen in conservative systems. However, for certain parameter values, these islands coexist with asymptotically stable periodic trajectories, causing dissipative behavior on very long time scales.
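The abstract does not reproduce the equations of motion. Purely as an illustrative sketch of the kind of dynamics described, one can integrate a self-propelled, orientable particle advected by a periodic array of counter-rotating vortices, with a Jeffery-type response of its orientation to the local flow; the velocity field, swimming speed v0, shape parameter beta, and integration scheme below are assumptions for illustration and are not the model of Ariel et al. [1].

    import numpy as np

    # Illustrative sketch only: a generic self-propelled, elongated particle
    # advected by a periodic lattice of counter-rotating vortices. The flow
    # field, v0, beta, and dt are assumptions, not the equations of [1].
    v0, beta, dt, steps = 0.3, 0.9, 1e-2, 200_000
    rng = np.random.default_rng(0)
    x, y = 2 * np.pi * rng.random(2)       # position (not wrapped, to track displacement)
    theta = 2 * np.pi * rng.random()       # swimming direction

    traj = np.empty((steps, 2))
    for n in range(steps):
        ux, uy = np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)   # vortex-lattice flow
        vort = 2.0 * np.sin(x) * np.sin(y)                        # local vorticity
        # Jeffery-like orientation dynamics for an elongated particle
        dtheta = 0.5 * vort - beta * np.cos(x) * np.cos(y) * np.sin(2.0 * theta)
        x += dt * (v0 * np.cos(theta) + ux)
        y += dt * (v0 * np.sin(theta) + uy)
        theta += dt * dtheta
        traj[n] = x, y

    # Crude diagnostic: growth of squared displacement with lag time
    lags = np.unique(np.logspace(1, 4, 20).astype(int))
    for l in lags:
        msd = np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
        print(f"lag={l:6d}  msd={msd:10.3f}")

The printed mean squared displacement versus lag time is only a crude diagnostic of whether the motion looks diffusive, ballistic, or intermediate over the simulated window; the sketch exposes the type of computation, not the paper's results.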
We study chaotic orbits of conservative low-dimensional maps and present numerical results showing that the probability density functions (pdfs) of the sum of $N$ iterates in the large $N$ limit exhibit nontrivial, time-evolving statistics. In some cases, where the chaotic layers are thin and the (positive) maximal Lyapunov exponent is small, long-lasting quasi-stationary states (QSS) are found, whose pdfs appear to converge to $q$-Gaussians associated with nonextensive statistical mechanics. More generally, however, as $N$ increases, the pdfs describe a sequence of QSS that pass from a $q$-Gaussian to an exponential shape and ultimately tend to a true Gaussian, as orbits diffuse to larger chaotic domains and the phase-space dynamics becomes more uniformly ergodic.
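As a minimal numerical sketch of the computation described (with the Chirikov standard map standing in for a representative conservative low-dimensional map; the parameter $K$, the observable, and the ensemble size are illustrative assumptions), one can sum $N$ iterates of a centered observable along many orbits and histogram the standardized sums:

    import numpy as np

    # Illustrative sketch: pdf of centered, standardized sums of N iterates of a
    # chaotic observable for an area-preserving map. The map (Chirikov standard
    # map), K, the observable x - pi, and the ensemble size are assumptions.
    K = 1.0                      # smaller K -> thinner chaotic layers, smaller Lyapunov exponent
    N = 2**15                    # number of iterates summed per orbit
    M = 2000                     # number of initial conditions
    rng = np.random.default_rng(1)

    x = 2 * np.pi * rng.random(M)    # angle variable
    p = 2 * np.pi * rng.random(M)    # momentum variable
    s = np.zeros(M)                  # running sum of the observable

    for _ in range(N):
        p = (p + K * np.sin(x)) % (2 * np.pi)
        x = (x + p) % (2 * np.pi)
        s += x - np.pi               # centered observable

    # Standardize the sums and histogram them to inspect the pdf shape
    z = (s - s.mean()) / s.std()
    hist, edges = np.histogram(z, bins=60, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h in zip(centers, hist):
        print(f"{c:+.2f}  {h:.4f}")

Rerunning with different values of $K$ and $N$ changes the shape of the resulting histogram, which is the dependence the abstract describes; the sketch only illustrates the procedure.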
We address the problem of robust state estimation for a class of discrete-time nonlinear systems with positive-slope nonlinearities when the sensors are corrupted by (potentially unbounded) attack signals and bounded measurement noise. We propose an observer-based estimator, built from a bank of circle-criterion observers, which provides a robust estimate of the system state in spite of sensor attacks and measurement noise. We first consider the attack-free case with measurement noise and provide a design method for a robust circle-criterion observer. Then, we consider the case when a sufficiently small subset of sensors is under attack and all sensors are affected by measurement noise; we use the robust circle-criterion observer as the main ingredient of an estimator that provides robust state estimation in this setting. Finally, we propose an algorithm for isolating the attacked sensors in the case of bounded measurement noise and test it through simulations.
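A schematic sketch of the bank-and-selection idea is given below. Plain Luenberger-style observers stand in for the circle-criterion observers of the abstract (whose LMI-based design is not reproduced here); the system matrices, gains, and the particular selection rule are illustrative assumptions, not the estimator of the paper.

    import numpy as np
    from itertools import combinations

    # Toy linear plant with three sensors, at most q = 1 of which is attacked.
    A = np.array([[0.9, 0.1], [-0.2, 0.8]])
    B = np.array([0.0, 0.1])
    C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    p, q = C.shape[0], 1
    u = 1.0                                               # constant known input

    def run_observer(idx, ys):
        """Observer driven only by the sensors in idx; returns the final estimate."""
        Ci = C[list(idx), :]
        xh = np.zeros(2)
        for y in ys:
            xh = A @ xh + B * u + 0.5 * Ci.T @ (y[list(idx)] - Ci @ xh)
        return xh

    # Simulate the plant with bounded noise on all sensors and an attack on sensor 0
    rng = np.random.default_rng(2)
    x, ys = np.array([1.0, -1.0]), []
    for t in range(200):
        x = A @ x + B * u
        y = C @ x + 0.01 * rng.standard_normal(p)
        y[0] += 5.0                                       # attack signal on sensor 0
        ys.append(y)

    # Bank: one observer per subset of p - q sensors; select the subset whose
    # estimate agrees best with the observers built from its own sub-subsets.
    best, best_score = None, np.inf
    for idx in combinations(range(p), p - q):
        xh = run_observer(idx, ys)
        score = max(np.linalg.norm(xh - run_observer(sub, ys))
                    for sub in combinations(idx, len(idx) - q))
        if score < best_score:
            best, best_score = xh, score

    print("selected estimate:", best)
    print("true state:       ", x)

The point of the sketch is only the combinatorial bank-and-selection layer; in the paper, each observer in the bank is a circle-criterion observer designed to handle the positive-slope nonlinearity and the bounded measurement noise.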
We establish a characterization of dualizing modules among semidualizing modules. Let $R$ be a finite dimensional commutative Noetherian ring with identity and $C$ a semidualizing $R$-module. We show that $C$ is a dualizing $R$-module if and only if $\mathrm{Tor}_i^R(E,E')$ is $C$-injective for all $C$-injective $R$-modules $E$ and $E'$ and all $i \geq 0$.
We study bandits and reinforcement learning (RL) subject to a conservative constraint, under which the agent is asked to perform at least as well as a given baseline policy. This setting is particularly relevant in real-world domains such as digital marketing, healthcare, production, and finance. For multi-armed bandits, linear bandits, and tabular RL, specialized algorithms and theoretical analyses have been proposed in previous work. In this paper, we present a unified framework for conservative bandits and RL, whose core technique is to calculate the necessary and sufficient budget obtained from running the baseline policy. For lower bounds, our framework gives a black-box reduction that turns a certain lower bound in the nonconservative setting into a new lower bound in the conservative setting. We strengthen the existing lower bound for conservative multi-armed bandits and obtain new lower bounds for conservative linear bandits, tabular RL, and low-rank MDPs. For upper bounds, our framework turns a certain nonconservative upper-confidence-bound (UCB) algorithm into a conservative algorithm with a simple analysis. For multi-armed bandits, linear bandits, and tabular RL, our new upper bounds tighten or match existing ones with significantly simpler analyses. We also obtain a new upper bound for conservative low-rank MDPs.
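As a minimal sketch of the budget idea in its simplest instantiation for Bernoulli multi-armed bandits (in the spirit of conservative UCB; the constants, the assumption that the baseline mean is known, and the extra-pessimistic budget check below are illustrative simplifications, not the paper's algorithm), the agent plays the UCB-recommended arm only if lower confidence bounds certify that the conservative constraint would still hold, and otherwise falls back to the baseline arm to accumulate budget:

    import numpy as np

    rng = np.random.default_rng(3)
    means = np.array([0.45, 0.5, 0.7])   # arm 0 is the baseline policy
    baseline, alpha, T = 0, 0.2, 20_000
    mu0 = means[baseline]                # baseline mean assumed known (a simplification)

    counts = np.zeros(len(means))
    sums = np.zeros(len(means))
    cum_reward, cum_baseline = 0.0, 0.0

    for t in range(1, T + 1):
        with np.errstate(divide="ignore", invalid="ignore"):
            bonus = np.sqrt(2 * np.log(t) / counts)
            mean_hat = sums / counts
        ucb = np.where(counts > 0, mean_hat + bonus, np.inf)
        lcb = np.where(counts > 0, np.maximum(mean_hat - bonus, 0.0), 0.0)

        candidate = int(np.argmax(ucb))
        # Extra-pessimistic check of the conservative constraint:
        # LCB estimate of past reward + worst case of this round >= (1 - alpha) * t * mu0
        pessimistic = float(np.dot(counts, lcb)) + lcb[candidate]
        arm = candidate if pessimistic >= (1 - alpha) * t * mu0 else baseline

        r = float(rng.random() < means[arm])     # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        cum_reward += r
        cum_baseline += mu0

    print("cumulative reward:           ", cum_reward)
    print("(1 - alpha) * baseline value:", (1 - alpha) * cum_baseline)

Early on, the pessimistic check fails and the baseline arm is played; as baseline plays accumulate, the slack grows linearly while the confidence deficit grows only sublinearly, so exploration eventually becomes admissible. This is the budget mechanism the framework formalizes and generalizes.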