
Distributed Sequential Detection for Gaussian Shift-in-Mean Hypothesis Testing

Published by: Soummya Kar
Publication date: 2014
Research field: Information Engineering
Language: English





This paper studies the problem of sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network. A sequential probability ratio test (SPRT) type algorithm in a distributed framework of the consensus + innovations form is proposed, in which the agents update their decision statistics by simultaneously processing the latest observations (innovations) sensed sequentially over time and the information obtained from neighboring agents (consensus). For each pre-specified set of type I and type II error probabilities, local decision parameters are derived which ensure that the algorithm achieves the desired error performance and terminates in finite time almost surely (a.s.) at each network agent. Large deviation exponents for the tail probabilities of the agent stopping time distributions are obtained, and it is shown that asymptotically (in the number of agents or in the high signal-to-noise-ratio regime) these exponents associated with the distributed algorithm approach that of the optimal centralized detector. The expected stopping time for the proposed algorithm at each network agent is evaluated and benchmarked with respect to the optimal centralized algorithm. The efficiency of the proposed algorithm in the sense of the expected stopping times is characterized in terms of network connectivity. Finally, simulation studies are presented which illustrate and verify the analytical findings.
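The consensus + innovations recursion the abstract describes can be summarized in a few lines: each agent mixes its neighbors' decision statistics through a weight matrix and adds the log-likelihood ratio of its newest observation, then checks Wald-style thresholds. The Python sketch below is a minimal illustration under assumed simplifications: a doubly-stochastic weight matrix W, the closed-form Gaussian shift-in-mean LLR, Wald-approximation thresholds, and simulation under H1. The function name `distributed_sprt` and all parameter choices are illustrative, not the paper's exact design.

```python
import numpy as np

def llr_gaussian_shift(y, mu, sigma2):
    """Log-likelihood ratio of y for H1: N(mu, sigma2) vs H0: N(0, sigma2)."""
    return (mu / sigma2) * (y - mu / 2.0)

def distributed_sprt(W, mu, sigma2, alpha, beta, rng, max_steps=10_000):
    """Consensus + innovations SPRT sketch (illustrative, not the paper's tuning).

    W : (n, n) doubly-stochastic weight matrix respecting the network graph
        (W[i, j] > 0 only if j is a neighbor of i, or j == i).
    """
    n = W.shape[0]
    upper = np.log((1 - beta) / alpha)   # accept H1 above this (Wald approximation)
    lower = np.log(beta / (1 - alpha))   # accept H0 below this
    S = np.zeros(n)                      # decision statistic per agent
    decided = np.full(n, -1)             # -1 = undecided, 0/1 = accepted hypothesis
    for t in range(max_steps):
        y = mu + np.sqrt(sigma2) * rng.standard_normal(n)  # simulate data under H1
        S = W @ S + llr_gaussian_shift(y, mu, sigma2)      # consensus + innovation
        decided = np.where((decided < 0) & (S >= upper), 1, decided)
        decided = np.where((decided < 0) & (S <= lower), 0, decided)
        if np.all(decided >= 0):
            break
    return decided, t + 1

rng = np.random.default_rng(0)
W = np.full((4, 4), 0.25)  # toy fully connected 4-agent network
print(distributed_sprt(W, mu=1.0, sigma2=1.0, alpha=0.01, beta=0.01, rng=rng))
```

Note the role of W: a sparser graph mixes the statistics more slowly, which is exactly the connectivity dependence of the expected stopping times that the paper characterizes.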


Read also

In this paper, we consider the problem of distributed sequential detection using wireless sensor networks (WSNs) in the presence of imperfect communication channels between the sensors and the fusion center (FC). We assume that sensor observations are spatially dependent. We propose a copula-based distributed sequential detection scheme that characterizes the spatial dependence. Specifically, each local sensor collects observations regarding the phenomenon of interest and forwards the information obtained to the FC over noisy channels. The FC fuses the received messages using a copula-based sequential test. Moreover, we show the asymptotic optimality of the proposed copula-based sequential test. Numerical experiments are conducted to demonstrate the effectiveness of our approach.
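To make the fusion rule concrete: a copula factors a joint density into its marginals times a copula density evaluated at the probability integral transforms, so the FC's per-round LLR gains a dependence-correction term. The abstract does not fix a copula family or marginals; the sketch below assumes Gaussian marginals and a Gaussian copula with correlation matrix R purely for illustration, and the helper names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_logdensity(u, R):
    """Log-density of a Gaussian copula with correlation matrix R at u in (0,1)^n."""
    z = norm.ppf(u)
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    return -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(z))) @ z

def copula_llr(y, mu, sigma, R):
    """Joint LLR of one round of sensor messages y, assuming (illustratively)
    H1: correlated N(mu, sigma^2) marginals vs H0: independent N(0, sigma^2)."""
    llr_marginal = np.sum(norm.logpdf(y, mu, sigma) - norm.logpdf(y, 0.0, sigma))
    u1 = np.clip(norm.cdf(y, mu, sigma), 1e-12, 1 - 1e-12)  # PITs under H1
    return llr_marginal + gaussian_copula_logdensity(u1, R)

R = np.array([[1.0, 0.5], [0.5, 1.0]])  # assumed spatial correlation, 2 sensors
y = np.array([0.8, 1.1])                # one round of received messages
print(copula_llr(y, mu=1.0, sigma=1.0, R=R))
```

A sequential test at the FC would then accumulate `copula_llr` over rounds against SPRT-style thresholds, exactly as in the centralized Wald test.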
Eli Haim, Yuval Kochman (2017)
We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d. doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond to different levels of correlation between the two source components, i.e., different values of the crossover probability between them. The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates. We first consider the side-information setting, where one encoder is allowed to send the full sequence. For this setting, previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision on the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of binning errors; the results are also more complete, as they cover the full exponent tradeoff and all possible correlations. We then turn to the setting of symmetric rates, for which we utilize Korner-Marton coding to generalize the results, with little degradation relative to the performance under a one-sided constraint (the side-information setting).
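For readers unfamiliar with the source model, the LaTeX fragment below states the standard reading of a doubly-binary symmetric source; the symbols p_0 and p_1 are assumed here for illustration and the paper's notation may differ.

```latex
% X^n is i.i.d. Bernoulli(1/2), and Y_i = X_i \oplus Z_i with Z_i i.i.d.
% Bernoulli(p) independent of X^n; the test is on the crossover level:
\[
  H_0 : p = p_0, \qquad H_1 : p = p_1, \qquad p_0 \neq p_1 .
\]
```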
We consider sequential hypothesis testing between two quantum states using adaptive and non-adaptive strategies. In this setting, samples of an unknown state are requested sequentially and a decision to either continue or to accept one of the two hypotheses is made after each test. Under the constraint that the number of samples is bounded, either in expectation or with high probability, we exhibit adaptive strategies that minimize both types of misidentification errors. Namely, we show that these errors decrease exponentially (in the stopping time) with decay rates given by the measured relative entropies between the two states. Moreover, if we allow joint measurements on multiple samples, the rates are increased to the respective quantum relative entropies. We also fully characterize the achievable error exponents for non-adaptive strategies and provide numerical evidence showing that adaptive measurements are necessary to achieve our bounds under some additional assumptions.
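The decay rates referred to above are the measured relative entropies, i.e., the best classical relative entropy obtainable from a single measurement. A LaTeX statement of that standard definition (notation assumed, not taken from the paper):

```latex
% Measured relative entropy: optimize the classical relative entropy of the
% outcome distributions over all POVMs \{M_x\}:
\[
  D_{\mathrm{M}}(\rho \,\|\, \sigma)
  = \sup_{\{M_x\}} D\bigl( \{\operatorname{Tr}[M_x \rho]\}_x
      \,\big\|\, \{\operatorname{Tr}[M_x \sigma]\}_x \bigr),
\]
% while joint measurements on many samples improve the rate to the quantum
% relative entropy D(\rho\|\sigma) = \operatorname{Tr}[\rho(\log\rho - \log\sigma)].
```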
We study a hypothesis testing problem in which data is compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type 2 error probability when the type 1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to test against conditional independence. We then use this optimality result to give an outer bound for a more general class of instances of the problem.
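As a one-formula reminder of what "testing against conditional independence" means, under the usual formulation (notation assumed, not taken from the paper):

```latex
% Under the alternative, the sources form a Markov chain X - Z - Y:
\[
  H_0 : P_{XYZ}, \qquad
  H_1 : \tilde{P}_{XYZ} = P_{X|Z}\, P_{Y|Z}\, P_{Z},
\]
% i.e., the detector asks whether X and Y are dependent beyond what Z explains.
```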
In this paper, we investigate the role of a physical watermarking signal in quickest detection of a deception attack in a scalar linear control system where the sensor measurements can be replaced by an arbitrary stationary signal generated by an attacker. By adding a random watermarking signal to the control action, the controller designs a sequential test based on a Cumulative Sum (CUSUM) method that accumulates the log-likelihood ratio between the joint distribution of the residue and the watermarking signal under attack and the joint distribution of the innovations and the watermarking signal under no attack. As the average detection delay in such tests is asymptotically (as the false alarm rate goes to zero) upper bounded by a quantity inversely proportional to the Kullback-Leibler divergence (KLD) between the two joint distributions mentioned above, we analyze the effect of the watermarking signal variance on this KLD. We also analyze the increase in the LQG control cost due to the watermarking signal, and show that there is a tradeoff between quick detection of attacks and the penalty in the control cost. It is shown that by considering a sequential detection test based on the joint distributions of the residue/innovations and the watermarking signal, as opposed to the distributions of the residue/innovations only, we can achieve a higher KLD, thus resulting in a reduced average detection delay. Numerical results are provided to support our claims.
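The sequential test described above is a standard CUSUM recursion driven by the joint residue/watermark log-likelihood ratio. A minimal sketch, assuming the per-sample LLR is supplied as a function (its exact form depends on the system model in the paper, so it is left abstract here):

```python
def cusum_watermark(llr_fn, stream, h):
    """CUSUM sketch: accumulate the joint residue/watermark LLR and alarm at h.

    llr_fn(residue, watermark) should return the per-sample log-likelihood
    ratio of the attacked joint distribution vs the attack-free one; its
    exact form is model-dependent and assumed given here.
    """
    W = 0.0
    for t, (residue, watermark) in enumerate(stream):
        W = max(0.0, W + llr_fn(residue, watermark))  # classic CUSUM recursion
        if W >= h:
            return t  # alarm time; delay past the change point drives the tradeoff
    return None  # no alarm raised on this stream
```

The paper's tradeoff appears directly here: a larger watermark variance raises the per-sample KLD, so W drifts up faster under attack (shorter delay), at the price of a higher LQG control cost.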