The problem of quickest detection of a change in the mean of a sequence of independent observations is studied. The pre-change distribution is assumed to be stationary, while the post-change distributions are allowed to be non-stationary. The case where the pre-change distribution is known is studied first, followed by the extension in which only the mean and variance of the pre-change distribution are known. No knowledge of the post-change distributions is assumed other than that their means are above some pre-specified threshold larger than the pre-change mean. For the case where the pre-change distribution is known, a test is derived that asymptotically minimizes the worst-case detection delay over all possible post-change distributions, as the false alarm rate goes to zero. Towards deriving this asymptotically optimal test, some new results are provided for the general problem of asymptotic minimax robust quickest change detection in non-stationary settings. The limiting form of the optimal test as the gap between the pre- and post-change means goes to zero, called the Mean-Change Test (MCT), is then studied. It is shown that the MCT can be designed with knowledge of only the mean and variance of the pre-change distribution. The performance of the MCT is also characterized when the mean gap is moderate, under the additional assumption that the distributions of the observations have bounded support. The analysis is validated through numerical results for detecting a change in the mean of a beta distribution. The use of the MCT in monitoring pandemics is also demonstrated.
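Since the MCT needs only the pre-change mean and variance, one plausible reading is a CuSum recursion driven by the Gaussian log-likelihood ratio for a mean shift of at least ε. The sketch below takes that form; the function name `mct_stopping_time` and the exact increment are illustrative assumptions, not the paper's stated construction. The Beta(2, 8) moments in the toy run supply the only pre-change knowledge the recursion uses.

```python
import numpy as np

def mct_stopping_time(x, mu0, var0, eps, threshold):
    """CuSum-style mean-change sketch (assumed form, not the paper's exact MCT).

    Increment is the Gaussian log-likelihood ratio for a mean shift of eps:
    llr(x) = (eps / var0) * (x - mu0 - eps / 2).
    Stops at the first time the running statistic crosses `threshold`.
    """
    w = 0.0
    for n, xn in enumerate(x, start=1):
        llr = (eps / var0) * (xn - mu0 - eps / 2.0)
        w = max(0.0, w + llr)            # CuSum recursion
        if w >= threshold:
            return n                      # alarm time
    return None                           # no alarm in the sample

# Toy run: beta-distributed data whose mean rises at time 500.
rng = np.random.default_rng(0)
pre = rng.beta(2, 8, size=500)            # mean 0.2
post = rng.beta(4, 8, size=500)           # mean ~0.33
x = np.concatenate([pre, post])
mu0 = 0.2
var0 = 2 * 8 / ((2 + 8) ** 2 * (2 + 8 + 1))  # Beta(2,8) variance
print(mct_stopping_time(x, mu0, var0, eps=0.1, threshold=8.0))
```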
Change detection (CD) in time series data is a critical problem, as it reveals changes in the underlying generative processes driving the time series. Despite having received significant attention, one important unexplored aspect is how to efficiently utilize additional correlated information to improve the detection and understanding of changepoints. We propose hierarchical quickest change detection (HQCD), a framework that formalizes the process of incorporating additional correlated sources for early changepoint detection. The core ideas behind HQCD are rooted in the theory of quickest detection, and HQCD can be regarded as a novel generalization of that theory to a hierarchical setting. The sources are classified into targets and surrogates, and HQCD leverages this structure to systematically assimilate observed data and update changepoint statistics across layers. Decisions on actual changepoints are made by minimizing the detection delay while maintaining reliability bounds. In addition, HQCD uncovers interesting relations between changes at targets and changes across surrogates. We validate the reliability and performance of HQCD against several state-of-the-art methods on both a synthetic dataset (known changepoints) and several real-life examples (unknown changepoints). Our experiments indicate that HQCD gains significant robustness without loss of detection delay. Our real-life experiments also showcase the usefulness of the hierarchical setting by connecting surrogate sources (such as Twitter chatter) to target sources (such as employment-related protests that ultimately lead to major uprisings).
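The abstract describes the target/surrogate mechanism only at a high level, so the sketch below is one plausible reading, assuming each surrogate stream runs its own CuSum and the aggregated surrogate evidence boosts the target's statistic before thresholding. All names (`hqcd_sketch`, `coupling`) and the additive coupling rule are hypothetical, not the paper's construction.

```python
import numpy as np

def cusum_step(w, x, llr):
    """One CuSum update with a user-supplied log-likelihood-ratio function."""
    return max(0.0, w + llr(x))

def hqcd_sketch(target, surrogates, llr, threshold, coupling=0.1):
    """Hypothetical two-layer HQCD sketch (assumed structure, not the paper's).

    Each surrogate stream keeps its own CuSum statistic; their running sum,
    scaled by `coupling`, is added to the target's statistic before
    thresholding, so consistent surrogate evidence shortens the delay.
    """
    w_t = 0.0
    w_s = np.zeros(len(surrogates))
    for n in range(len(target)):
        for k, s in enumerate(surrogates):
            w_s[k] = cusum_step(w_s[k], s[n], llr)
        w_t = cusum_step(w_t, target[n], llr)
        if w_t + coupling * w_s.sum() >= threshold:  # layered decision
            return n + 1
    return None

# Toy run: Gaussian mean shift at time 300 in the target and both surrogates.
rng = np.random.default_rng(1)
shift = lambda: np.concatenate([rng.normal(0, 1, 300), rng.normal(0.5, 1, 300)])
llr = lambda x: 0.5 * (x - 0.25)   # N(0.5,1) vs N(0,1) log-likelihood ratio
print(hqcd_sketch(shift(), [shift(), shift()], llr, threshold=10.0))
```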
Structural breaks are commonly seen in applications. For the detection of change points in time, a research gap remains in the ultra-high-dimensional setting, where the covariates may bear spurious correlations. In this paper, we propose a two-stage approach to detect change points in ultra high dimension: we first propose the dynamic tilted current correlation screening method to reduce the input dimension, and then detect possible change points in the framework of group variable selection. Not only is the spurious correlation between ultra-high-dimensional covariates taken into consideration in variable screening, but non-convex penalties are also studied for change point detection in the ultra-high-dimensional setting. Asymptotic properties are derived to guarantee the consistency of the selection procedure, and numerical investigations show the promising performance of the proposed approach.
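The abstract names the two stages but not their internals. The sketch below, with hypothetical names, substitutes plain marginal-correlation screening for the paper's dynamic tilted current correlation screening (which additionally corrects for spurious correlation among covariates) and a simple penalized segment scan for the group-variable-selection stage.

```python
import numpy as np

def screen_covariates(X, y, keep):
    """Stage 1 (simplified stand-in): keep the `keep` covariates with the
    largest absolute marginal correlation with y."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[-keep:]

def scan_change_point(X, y, penalty):
    """Stage 2 (simplified stand-in): scan candidate change points, fitting
    least squares on each segment; accept a split only if it beats the
    no-change fit by more than `penalty` (a crude proxy for the paper's
    non-convex group penalty)."""
    n = len(y)
    rss = lambda Xs, ys: np.sum(
        (ys - Xs @ np.linalg.lstsq(Xs, ys, rcond=None)[0]) ** 2)
    best_t, best = None, rss(X, y) - penalty
    for t in range(X.shape[1] + 1, n - X.shape[1] - 1):
        split = rss(X[:t], y[:t]) + rss(X[t:], y[t:])
        if split < best:
            best_t, best = t, split
    return best_t

# Toy run: 200 samples, 50 covariates, regression coefficients break at t=120.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))
beta1, beta2 = np.zeros(50), np.zeros(50)
beta1[:3], beta2[:3] = 1.0, 2.5
y = np.r_[X[:120] @ beta1, X[120:] @ beta2] + rng.normal(0, 0.5, 200)
cols = screen_covariates(X, y, keep=5)
print(scan_change_point(X[:, cols], y, penalty=10.0))
```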
Byzantine distributed quickest change detection (BDQCD) is studied, where a fusion center monitors the occurrence of an abrupt event through a group of distributed sensors that may be compromised. We first consider the binary hypothesis case, where there is only one post-change hypothesis, and prove a novel converse to the first-order asymptotic detection delay in the regime of large mean time to a false alarm. This converse is tight in that it coincides with the best currently known achievability result, shown by Fellouris et al.; hence, the optimal asymptotic performance of binary BDQCD is characterized. An important implication of this result is that, even with compromised sensors, a 1-bit link between each sensor and the fusion center suffices to achieve asymptotic optimality. To accommodate multiple post-change hypotheses, we then formulate the multi-hypothesis BDQCD problem and again investigate the optimal first-order performance under different bandwidth constraints. A converse is first obtained by extending our converse from binary to multi-hypothesis BDQCD. Two families of stopping rules, namely the simultaneous $d$-th alarm and the multi-shot $d$-th alarm, are then proposed. Under sufficient link bandwidth, the simultaneous $d$-th alarm, with $d$ set to the number of honest sensors, achieves an asymptotic performance that coincides with the derived converse bound; hence, the asymptotically optimal performance of multi-hypothesis BDQCD is again characterized. Moreover, although it is shown to be asymptotically optimal only in some special cases, the multi-shot $d$-th alarm is much more bandwidth- and energy-efficient than the simultaneous $d$-th alarm. Building on this characterization of the asymptotic optimality of BDQCD, a corresponding leader-follower Stackelberg game is formulated and its solution is found.
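The sketch below assumes one reading of the simultaneous $d$-th alarm consistent with the 1-bit-link remark: each sensor runs a local CuSum and reports a single bit per step (whether its statistic exceeds a local threshold), and the fusion center stops the first time at least $d$ sensors alarm at once, so up to $d-1$ Byzantine sensors cannot force a false alarm on their own. Function names and thresholds are illustrative, not the paper's exact rule.

```python
import numpy as np

def simultaneous_dth_alarm(streams, llr, local_thresh, d):
    """Sketch of a simultaneous d-th alarm rule (assumed reading).

    Each sensor runs a local CuSum; the fusion center sees only the 1-bit
    indicators {w_k >= local_thresh} and stops when at least d are set.
    """
    K = len(streams)
    w = np.zeros(K)
    for n in range(len(streams[0])):
        for k in range(K):
            w[k] = max(0.0, w[k] + llr(streams[k][n]))
        if np.sum(w >= local_thresh) >= d:   # d one-bit alarms at once
            return n + 1
    return None

# Toy run: 5 sensors, 2 Byzantine (never report the change), change at t=200.
rng = np.random.default_rng(3)
honest = [np.r_[rng.normal(0, 1, 200), rng.normal(1, 1, 200)] for _ in range(3)]
byzantine = [rng.normal(0, 1, 400) for _ in range(2)]
llr = lambda x: x - 0.5                     # N(1,1) vs N(0,1) log-LR
# d = 3, the number of honest sensors, as in the abstract.
print(simultaneous_dth_alarm(honest + byzantine, llr, local_thresh=6.0, d=3))
```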
The problem of quickest change detection under communication rate constraints is studied. A network of wireless sensors with limited computation capability monitors the environment and sends observations to a fusion center via wireless channels. At an unknown time instant, the distributions of the observations at all sensor nodes change simultaneously. Due to limited energy, the sensors cannot transmit at every time instant. The objective is to detect the change at the fusion center as quickly as possible, subject to constraints on false alarms and on the average communication rate between the sensors and the fusion center. A minimax formulation is proposed. The cumulative sum (CuSum) algorithm is used at the fusion center, and censoring strategies are used at the sensor nodes. The censoring strategies, which are adaptive to the CuSum statistic, are fed back by the fusion center; the sensors send only those observations that fall into prescribed sets. This CuSum adaptive censoring (CuSum-AC) algorithm is proved to be an equalizer rule and to be globally asymptotically optimal for any positive communication rate constraint, as the average run length to false alarm goes to infinity. Numerical examples also show that the CuSum-AC algorithm provides a suitable trade-off between detection performance and communication rate.
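The abstract specifies the ingredients (a CuSum at the fusion center, censoring sets fed back to the sensors, observations sent only when they fall in prescribed sets) but not the exact sets. The sketch below is a minimal single-sensor illustration, assuming Gaussian N(0,1) pre-change and N(1,1) post-change data and a simple feedback rule in which the no-send region shrinks as the CuSum statistic grows; censored steps contribute the log-ratio of the censoring probabilities, so the recursion remains a valid likelihood-ratio statistic for the censored channel. The names and the specific feedback rule are assumptions.

```python
import math
import numpy as np
from scipy.stats import norm

def cusum_ac_sketch(x, threshold, rate_ctrl=2.0):
    """Sketch of CuSum with adaptive censoring (assumed details).

    The fusion center feeds back a censoring level c = rate_ctrl - w that is
    high (few transmissions) when the statistic w is small and low when w is
    large. The sensor transmits x only if x > c; otherwise the fusion center
    updates with log P1(x <= c) / P0(x <= c), the LLR of the censored event.
    """
    w, sent = 0.0, 0
    for n, xn in enumerate(x, start=1):
        c = rate_ctrl - w                    # assumed feedback rule
        if xn > c:                           # transmit: exact LLR available
            w = max(0.0, w + (xn - 0.5))     # log f1/f0 for N(1,1) vs N(0,1)
            sent += 1
        else:                                # censored: no transmission
            w = max(0.0, w + math.log(norm.cdf(c - 1.0) / norm.cdf(c)))
        if w >= threshold:
            return n, sent / n               # alarm time, communication rate
    return None, sent / len(x)

# Toy run: change at t = 300; report the alarm time and fraction transmitted.
rng = np.random.default_rng(4)
x = np.r_[rng.normal(0, 1, 300), rng.normal(1, 1, 300)]
print(cusum_ac_sketch(x, threshold=8.0))
```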