
Quickest Change Detection in Adaptive Censoring Sensor Networks

Posted by Xiaoqiang Ren
Publication date: 2015
Research field: Information Engineering
Paper language: English





The problem of quickest change detection with communication rate constraints is studied. A network of wireless sensors with limited computation capability monitors the environment and sends observations to a fusion center via wireless channels. At an unknown time instant, the distributions of observations at all the sensor nodes change simultaneously. Due to limited energy, the sensors cannot transmit at every time instant. The objective is to detect the change at the fusion center as quickly as possible, subject to constraints on false detection and on the average communication rate between the sensors and the fusion center. A minimax formulation is proposed. The cumulative sum (CuSum) algorithm is used at the fusion center and censoring strategies are used at the sensor nodes. The censoring strategies, which are adaptive to the CuSum statistic, are fed back by the fusion center; the sensors only send observations that fall into prescribed sets to the fusion center. This CuSum adaptive censoring (CuSum-AC) algorithm is proved to be an equalizer rule and to be globally asymptotically optimal for any positive communication rate constraint, as the average run length to false alarm goes to infinity. It is also shown, by numerical examples, that the CuSum-AC algorithm provides a suitable trade-off between the detection performance and the communication rate.
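As a rough illustration of the CuSum-AC idea described above, the sketch below simulates a single sensor with N(0,1) pre-change and N(1,1) post-change observations. The shrinking no-send interval, the function and parameter names (cusum_ac, base_halfwidth), and the choice to leave the statistic unchanged on censored slots are all assumptions made for this sketch, not the paper's exact censoring design.

import numpy as np

# Minimal single-sensor sketch of CuSum with statistic-adaptive censoring.
# N(0,1) pre-change and N(1,1) post-change observations are assumed; the
# no-send interval and its shrinkage rule are hypothetical illustrations.
mu0, mu1, sigma = 0.0, 1.0, 1.0

def log_lr(x):
    """Log-likelihood ratio of N(mu1, sigma^2) versus N(mu0, sigma^2)."""
    return (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2

def cusum_ac(observations, threshold=5.0, base_halfwidth=0.5):
    """CuSum at the fusion center with adaptive censoring at the sensor.

    The sensor stays silent when its observation falls inside a no-send
    interval centred between mu0 and mu1; the interval shrinks as the
    fed-back CuSum statistic W grows, so transmissions become more likely
    when a change looks imminent.  Censored slots leave W unchanged here,
    a simplification of how a censoring event would really be scored.
    Returns the stopping time, or None if no alarm is raised.
    """
    W = 0.0
    for t, x in enumerate(observations, start=1):
        halfwidth = base_halfwidth / (1.0 + W)        # adaptive censoring set
        if abs(x - (mu0 + mu1) / 2.0) > halfwidth:    # sensor transmits x
            W = max(0.0, W + log_lr(x))
        if W >= threshold:
            return t
    return None

rng = np.random.default_rng(0)
change_point = 200
data = np.concatenate([rng.normal(mu0, sigma, change_point),
                       rng.normal(mu1, sigma, 300)])
print("alarm raised at sample:", cusum_ac(data))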




Read also

Change detection (CD) in time series data is a critical problem, as it reveals changes in the underlying generative processes driving the time series. Despite having received significant attention, one important unexplored aspect is how to efficiently utilize additional correlated information to improve the detection and the understanding of changepoints. We propose hierarchical quickest change detection (HQCD), a framework that formalizes the process of incorporating additional correlated sources for early changepoint detection. The core ideas behind HQCD are rooted in the theory of quickest detection, and HQCD can be regarded as its novel generalization to a hierarchical setting. The sources are classified into targets and surrogates, and HQCD leverages this structure to systematically assimilate observed data and update changepoint statistics across layers. Decisions on actual changepoints are made by minimizing the detection delay while still maintaining reliability bounds. HQCD also uncovers interesting relations between changes at targets and changes across surrogates. We validate HQCD for reliability and performance against several state-of-the-art methods on both a synthetic dataset (known changepoints) and several real-life examples (unknown changepoints). Our experiments indicate that HQCD gains significant robustness without loss of detection delay. Our real-life experiments also showcase the usefulness of the hierarchical setting by connecting surrogate sources (such as Twitter chatter) to target sources (such as employment-related protests that ultimately lead to major uprisings).
The problem of quickest detection of a change in the mean of a sequence of independent observations is studied. The pre-change distribution is assumed to be stationary, while the post-change distributions are allowed to be non-stationary. The case where the pre-change distribution is known is studied first, followed by the extension in which only the mean and variance of the pre-change distribution are known. No knowledge of the post-change distributions is assumed other than that their means are above some pre-specified threshold larger than the pre-change mean. For the case where the pre-change distribution is known, a test is derived that asymptotically minimizes the worst-case detection delay over all possible post-change distributions, as the false alarm rate goes to zero. Towards deriving this asymptotically optimal test, some new results are provided for the general problem of asymptotic minimax robust quickest change detection in non-stationary settings. Then, the limiting form of the optimal test as the gap between the pre- and post-change means goes to zero, called the Mean-Change Test (MCT), is studied. It is shown that the MCT can be designed with only knowledge of the mean and variance of the pre-change distribution. The performance of the MCT is also characterized when the mean gap is moderate, under the additional assumption that the distributions of the observations have bounded support. The analysis is validated through numerical results for detecting a change in the mean of a beta distribution. The use of the MCT in monitoring pandemics is also demonstrated.
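The Mean-Change Test itself is only summarized above; as a loose illustration of detecting an upward mean shift using nothing beyond the pre-change mean and variance, the hypothetical mean_change_cusum sketch below runs a CuSum with a Gaussian-style increment against a presumed minimal shift delta. It is applied here to beta-distributed data in the spirit of the abstract's numerical example, and is not the MCT itself.

import numpy as np

def mean_change_cusum(x, mu0, var0, delta, threshold):
    """Illustrative CuSum for an upward mean shift of at least `delta`.

    Only the pre-change mean `mu0` and variance `var0` are used, in the
    spirit of the setting above; the Gaussian-style increment on the
    standardized observations is an assumption of this sketch and is not
    the paper's Mean-Change Test.
    """
    W, sigma = 0.0, np.sqrt(var0)
    for t, xi in enumerate(x, start=1):
        z = (xi - mu0) / sigma                        # standardize
        W = max(0.0, W + delta * z - delta**2 / 2.0)  # worst-case shift delta
        if W >= threshold:
            return t
    return None

# Beta-distributed data: mean rises from 0.5 (Beta(2,2), variance 0.05)
# to 2/3 (Beta(4,2)) at sample 300.
rng = np.random.default_rng(1)
data = np.concatenate([rng.beta(2, 2, 300), rng.beta(4, 2, 400)])
print("alarm raised at sample:",
      mean_change_cusum(data, mu0=0.5, var0=0.05, delta=0.1, threshold=5.0))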
The Byzantine distributed quickest change detection (BDQCD) problem is studied, where a fusion center monitors the occurrence of an abrupt event through a group of distributed sensors that may be compromised. We first consider the binary hypothesis case where there is only one post-change hypothesis and prove a novel converse to the first-order asymptotic detection delay in the large mean time to a false alarm regime. This converse is tight in that it coincides with the currently best achievability shown by Fellouris et al.; hence, the optimal asymptotic performance of binary BDQCD is characterized. An important implication of this result is that, even with compromised sensors, a 1-bit link between each sensor and the fusion center suffices to achieve asymptotic optimality. To accommodate multiple post-change hypotheses, we then formulate the multi-hypothesis BDQCD problem and again investigate the optimal first-order performance under different bandwidth constraints. A converse is first obtained by extending our converse from binary to multi-hypothesis BDQCD. Two families of stopping rules, namely the simultaneous $d$-th alarm and the multi-shot $d$-th alarm, are then proposed. Under sufficient link bandwidth, the simultaneous $d$-th alarm, with $d$ set to the number of honest sensors, achieves an asymptotic performance that coincides with the derived converse bound; hence, the asymptotically optimal performance of multi-hypothesis BDQCD is again characterized. Moreover, although it is shown to be asymptotically optimal only for some special cases, the multi-shot $d$-th alarm is much more bandwidth- and energy-efficient than the simultaneous $d$-th alarm. Building on this characterization of the asymptotic optimality of BDQCD, a corresponding leader-follower Stackelberg game is formulated and its solution is found.
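As one plausible reading of the simultaneous $d$-th alarm with 1-bit links, the toy simulation below has honest sensors run local CuSums and report a single alarm bit per slot, while Byzantine sensors stay silent to delay detection; the fusion center stops once at least d sensors flag at the same time. The function name, the attack model, and all parameters are assumptions for illustration, not the paper's construction.

import numpy as np

def simultaneous_d_alarm(n_honest, d, change_point, horizon,
                         local_threshold=4.0, seed=0):
    """Toy 1-bit distributed stopping rule with silent Byzantine sensors.

    Honest sensors run local CuSums on N(0,1) -> N(1,1) data and report a
    single alarm bit per slot; Byzantine sensors stay silent in this toy
    attack model, so setting d equal to the number of honest sensors means
    every honest sensor must flag simultaneously before the fusion center
    stops.  This is an illustrative reading, not the paper's construction.
    """
    rng = np.random.default_rng(seed)
    W = np.zeros(n_honest)
    for t in range(1, horizon + 1):
        mean = 1.0 if t > change_point else 0.0
        x = rng.normal(mean, 1.0, n_honest)
        W = np.maximum(0.0, W + (x - 0.5))          # local LLR increments
        if int(np.sum(W >= local_threshold)) >= d:  # at least d alarm bits
            return t
    return None

# 5 honest sensors (plus, implicitly, some silent compromised ones), d = 5.
print("alarm raised at slot:",
      simultaneous_d_alarm(n_honest=5, d=5, change_point=100, horizon=500))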
Subhrakanti Dey (2020)
In this paper, we consider non-Bayesian sequential change detection based on the cumulative sum (CUSUM) algorithm employed by an energy harvesting sensor, where the distributions before and after the change are assumed to be known. In a slotted discrete-time model, the sensor, exclusively powered by randomly available harvested energy, obtains a sample and computes the log-likelihood ratio of the two distributions if it has enough energy to sense and process a sample. If it does not have enough energy in a given slot, it waits until it harvests enough energy to perform the task in a future time slot. We derive asymptotic expressions for the expected detection delay (when a change actually occurs) and the asymptotic tail distribution of the run-length to a false alarm (when a change never happens). We show that when the average harvested energy ($\bar H$) is greater than or equal to the energy required to sense and process a sample ($E_s$), standard existing asymptotic results for the CUSUM test apply, since the energy storage level at the sensor exceeds $E_s$ after a sufficiently long time. However, when $\bar H < E_s$, the energy storage level can be modelled by a positive Harris recurrent Markov chain with a unique stationary distribution. Using asymptotic results from Markov random walk theory and the associated nonlinear Markov renewal theory, we establish asymptotic expressions for the expected detection delay and the asymptotic exponentiality of the tail distribution of the run-length to a false alarm in this non-trivial case. Numerical results are provided to support the theoretical results.
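A minimal simulation of the setting above, with hypothetical Bernoulli energy arrivals: the sensor samples and updates the CUSUM only in slots where its stored energy covers the sensing cost $E_s$, and the chosen harvesting rate places it in the non-trivial regime $\bar H < E_s$ discussed in the abstract. Distributions, the function name eh_cusum, and all parameter values are illustrative assumptions.

import numpy as np

def eh_cusum(change_point, horizon, E_s=1.0, p_harvest=0.6,
             battery_cap=5.0, threshold=6.0, seed=0):
    """CUSUM at an energy-harvesting sensor (illustrative sketch).

    One unit of energy arrives per slot with probability p_harvest and is
    stored in a finite battery.  In slots with stored energy >= E_s the
    sensor pays E_s, draws a sample (N(0,1) pre-change, N(1,1) after),
    and updates the CUSUM statistic with the log-likelihood ratio;
    otherwise the slot is skipped.  With p_harvest < E_s this sits in the
    non-trivial regime discussed in the abstract.
    """
    rng = np.random.default_rng(seed)
    energy, W = 0.0, 0.0
    for t in range(1, horizon + 1):
        energy = min(battery_cap, energy + rng.binomial(1, p_harvest))
        if energy >= E_s:                          # enough energy to sense
            energy -= E_s
            mean = 1.0 if t > change_point else 0.0
            x = rng.normal(mean, 1.0)
            W = max(0.0, W + (x - 0.5))            # LLR increment
            if W >= threshold:
                return t
    return None

print("alarm raised at slot:", eh_cusum(change_point=300, horizon=2000))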
Wireless sensor-actuator networks offer flexibility for control design. One novel element which may arise in networks with multiple nodes is that the role of some nodes does not need to be fixed. In particular, there is no need to pre-allocate which nodes assume controller functions and which ones merely relay data. We present a flexible architecture for networked control using multiple nodes connected in series over analog erasure channels without acknowledgments. The control architecture proposed adapts to changes in network conditions, by allowing the role played by individual nodes to depend upon transmission outcomes. We adopt stochastic models for transmission outcomes and characterize the distribution of controller location and the covariance of system states. Simulation results illustrate that the proposed architecture has the potential to give better performance than limiting control calculations to be carried out at a fixed node.