
Opportunistic Detection Rules: Finite and Asymptotic Analysis

Published by: Wenyi Zhang
Publication date: 2016
Research field: Information engineering
Paper language: English





Opportunistic detection rules (ODRs) are variants of fixed-sample-size detection rules in which the statistician is allowed to make an early decision in favor of the alternative hypothesis, opportunistically, based on the sequentially observed samples. From a sequential decision perspective, ODRs are also mixtures of one-sided and truncated sequential detection rules. Several results regarding ODRs are established in this paper. In the finite regime, the maximum sample size is modeled either as a fixed finite number or as a geometric random variable with a fixed finite mean. For both cases, the corresponding Bayesian formulations are investigated. The former case is a slight variation of the well-known finite-length sequential hypothesis testing procedure in the literature, whereas the latter case is new; for it, the Bayesian optimal ODR is shown to be a sequence of likelihood ratio threshold tests with two different thresholds: a running threshold, determined by solving a stationary state equation, is used while future samples are still available, and a terminal threshold (simply the ratio between the priors scaled by costs) is used when the statistician reaches the final sample and thus must make a decision immediately. In the asymptotic regime, the tradeoff among the exponents of the (false alarm and miss) error probabilities and the normalized expected stopping time under the alternative hypothesis is completely characterized and proved to be tight via an information-theoretic argument. Within the tradeoff region, one noteworthy fact is that the error exponent of the Chernoff-Stein Lemma is attainable by ODRs.
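To make the two-threshold structure concrete, here is a minimal Python sketch. The Gaussian model, the function names, and the threshold values are illustrative assumptions; in particular, the running threshold below is an arbitrary constant rather than the Bayes-optimal value obtained from the paper's stationary state equation.

```python
import numpy as np

def odr_decide(samples, llr_fn, running_log_thresh, terminal_log_thresh):
    """ODR sketch: may stop early only to declare the alternative H1;
    at the final (maximum) sample it must decide between H0 and H1.
    llr_fn maps one sample to its log-likelihood ratio log(f1/f0)."""
    llr = 0.0
    n = len(samples)
    for t, x in enumerate(samples, start=1):
        llr += llr_fn(x)
        if t < n and llr >= running_log_thresh:
            return "H1", t          # opportunistic early decision on H1
    # forced terminal decision at the maximum sample size
    return ("H1", n) if llr >= terminal_log_thresh else ("H0", n)

# Toy run: H0 is N(0,1), H1 is N(1,1); the thresholds are arbitrary
# illustrative constants, not the Bayes-optimal values of the paper.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=20)   # samples drawn under H1
print(odr_decide(x, lambda v: v - 0.5, 4.0, 0.0))
```

Note that early stopping is allowed only in favor of H1, which is what distinguishes an ODR from a two-sided truncated sequential test.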




Read also

The Byzantine distributed quickest change detection (BDQCD) problem is studied, in which a fusion center monitors the occurrence of an abrupt event through a group of distributed sensors that may be compromised. We first consider the binary hypothesis case, where there is only one post-change hypothesis, and prove a novel converse to the first-order asymptotic detection delay in the large mean-time-to-false-alarm regime. This converse is tight in that it coincides with the currently best achievability shown by Fellouris et al.; hence, the optimal asymptotic performance of binary BDQCD is characterized. An important implication of this result is that, even with compromised sensors, a 1-bit link between each sensor and the fusion center suffices to achieve asymptotic optimality. To accommodate multiple post-change hypotheses, we then formulate the multi-hypothesis BDQCD problem and again investigate the optimal first-order performance under different bandwidth constraints. A converse is first obtained by extending our converse from binary to multi-hypothesis BDQCD. Two families of stopping rules, namely the simultaneous $d$-th alarm and the multi-shot $d$-th alarm, are then proposed. Under sufficient link bandwidth, the simultaneous $d$-th alarm, with $d$ set to the number of honest sensors, can achieve the asymptotic performance that coincides with the derived converse bound; hence, the asymptotically optimal performance of multi-hypothesis BDQCD is again characterized. Moreover, although shown to be asymptotically optimal only for some special cases, the multi-shot $d$-th alarm is much more bandwidth- and energy-efficient than the simultaneous $d$-th alarm. Built upon the above success in characterizing the asymptotic optimality of BDQCD, a corresponding leader-follower Stackelberg game is formulated and its solution is found.
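As a rough illustration of the simultaneous $d$-th alarm, consider the following Python sketch. The local CUSUM detectors, the convention that a sensor keeps flagging once fired, and all names are simplifying assumptions, not the paper's exact construction; each sensor-to-center link carries a single bit per time step.

```python
def cusum_flags(stream, llr_fn, b):
    """1-bit local reports: a sensor flags (and keeps flagging) once its
    CUSUM statistic crosses the local threshold b."""
    s, fired = 0.0, False
    for x in stream:
        s = max(0.0, s + llr_fn(x))
        fired = fired or (s >= b)
        yield 1 if fired else 0

def simultaneous_dth_alarm(streams, llr_fn, b, d):
    """The fusion center stops the first time at least d sensors flag
    simultaneously; with d set to the number of honest sensors, the
    remaining Byzantine sensors cannot trigger an alarm on their own."""
    reporters = [cusum_flags(s, llr_fn, b) for s in streams]
    for t, bits in enumerate(zip(*reporters), start=1):
        if sum(bits) >= d:
            return t                # declared change time
    return None                     # no alarm within the data horizon
```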
Generalized low-density parity-check (GLDPC) codes are a class of LDPC codes in which the standard single parity check (SPC) constraints are replaced by constraints defined by a linear block code. These stronger constraints typically result in improved error floor performance, due to better minimum distance and trapping set properties, at the cost of some increased decoding complexity. In this paper, we study spatially coupled generalized low-density parity-check (SC-GLDPC) codes and present a comprehensive analysis of these codes, including: (1) an iterative decoding threshold analysis of SC-GLDPC code ensembles demonstrating capacity-approaching thresholds via the threshold saturation effect; (2) an asymptotic analysis of the minimum distance and free distance properties of SC-GLDPC code ensembles, demonstrating that the ensembles are asymptotically good; and (3) an analysis of the finite-length scaling behavior of both GLDPC block codes and SC-GLDPC codes based on a peeling decoder (PD) operating on a binary erasure channel (BEC). Results are compared to GLDPC block codes, and the advantages and disadvantages of SC-GLDPC codes are discussed.
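The peeling decoder mentioned in item (3) is easy to sketch for the plain SPC case; the toy checks and received word below are illustrative assumptions, and a GLDPC peeling decoder would replace the single-erasure rule with the stronger local erasure decoder of each constituent block code.

```python
def peel_decode(H, y):
    """Peeling decoder sketch for an LDPC code on the BEC.
    H: list of checks, each a list of bit indices (SPC constraints);
    y: received word with 0/1 values and None marking erasures."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [i for i in check if y[i] is None]
            if len(erased) == 1:                 # solvable SPC constraint
                known = [y[i] for i in check if y[i] is not None]
                y[erased[0]] = sum(known) % 2    # parity pins down the bit
                progress = True
    return y

# Toy example: three SPC checks and a single erased position.
H = [[0, 1, 2, 4], [0, 1, 3, 5], [0, 2, 3, 6]]
print(peel_decode(H, [1, 0, None, 1, 0, 0, 1]))  # recovers bit 2 as 1
```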
Rui Zhang, John Cioffi (2009)
This paper studies a new decentralized resource allocation strategy, named iterative spectrum shaping (ISS), for the multi-carrier-based multiuser communication system, in which two coexisting users independently and sequentially update their transmit power allocations over parallel subcarriers to maximize their individual transmit rates. Unlike the conventional iterative water-filling (IWF) algorithm, which applies single-user detection (SD) at each user's receiver by treating the interference from the other user as additional noise, the proposed ISS algorithm applies multiuser detection techniques to decode both the desired user's and the interfering user's messages when feasible, hence termed opportunistic multiuser detection (OMD). Two encoding methods are considered for ISS: one is carrier-independent encoding, in which independent codewords are modulated on different subcarriers and different decoding methods can be applied per subcarrier; the other is carrier-joint encoding, in which a single codeword is modulated across all subcarriers and a single decoder is applied. For each encoding method, this paper presents the associated optimal user power and rate allocation strategy at each iteration of transmit adaptation. It is shown that under many circumstances the proposed ISS algorithm employing OMD achieves substantial throughput gains over the conventional IWF algorithm employing SD for decentralized spectrum sharing. Applications of ISS in cognitive radio communication systems are also discussed.
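For context, the conventional IWF baseline that ISS is measured against can be sketched as follows; the two-user model, the channel-gain arrays, and all function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def waterfill(gains, noise, power):
    """Single-user water-filling of `power` over parallel subcarriers."""
    base = noise / gains                    # per-carrier noise-to-gain ratio
    order = np.sort(base)
    for k in range(len(base), 0, -1):       # find the active-carrier set
        mu = (power + order[:k].sum()) / k  # candidate water level
        if mu > order[k - 1]:
            break
    return np.maximum(mu - base, 0.0)

def iwf(g11, g22, g12, g21, p1, p2, n0=1.0, iters=50):
    """Conventional IWF with single-user detection: each user treats the
    other's signal as noise and best-responds in turn."""
    q1 = np.full_like(g11, p1 / len(g11))
    q2 = np.full_like(g22, p2 / len(g22))
    for _ in range(iters):
        q1 = waterfill(g11, n0 + g21 * q2, p1)  # g21: user 2 -> receiver 1
        q2 = waterfill(g22, n0 + g12 * q1, p2)  # g12: user 1 -> receiver 2
    return q1, q2

rng = np.random.default_rng(0)
g = rng.exponential(size=(4, 8))            # random per-carrier channel gains
print(iwf(g[0], g[1], g[2], g[3], p1=8.0, p2=8.0))
```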
Opportunistic scheduling (OS) schemes have been proposed previously by the authors for multiuser MIMO-SDMA downlink systems with linear combining. In particular, it has been demonstrated that significant performance improvement can be achieved by incorporating low-complexity linear combining techniques into the design of OS schemes for MIMO-SDMA. However, this previous analysis was performed based on the effective signal-to-interference ratio (SIR), assuming an interference-limited scenario, which is typically a valid assumption in SDMA-based systems. It was shown that the limiting distribution of the effective SIR is of the Fréchet type. Surprisingly, the corresponding scaling laws were found to follow $\epsilon\log K$ with $0<\epsilon<1$, rather than the conventional $\log\log K$ form. Inspired by this difference between the scaling law forms, in this paper a systematic approach is developed to derive asymptotic throughput and scaling laws based on the signal-to-interference-plus-noise ratio (SINR) by utilizing extreme value theory. The convergence of the limiting distribution of the effective SINR to the Gumbel type is established, and the resulting scaling law is found to be governed by the conventional $\log\log K$ form. These results are validated by simulations. The comparison of the SIR- and SINR-based analyses suggests that the SIR-based analysis is more computationally efficient for SDMA-based systems and captures the asymptotic system performance with higher fidelity.
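The $\log\log K$ throughput scaling is easy to probe numerically; the Rayleigh-fading toy model below (exponential signal and interference powers, unit noise) is an assumption for illustration and is far simpler than the paper's MIMO-SDMA setting.

```python
import numpy as np

# Monte Carlo probe of how the scheduled user's rate grows with user count K.
rng = np.random.default_rng(1)
for K in (10, 100, 1000, 10000):
    sig = rng.exponential(size=(2000, K))         # desired signal power
    intf = rng.exponential(size=(2000, K))        # interference power
    best = (sig / (1.0 + intf)).max(axis=1)       # scheduler picks max SINR
    rate = np.log2(1.0 + best).mean()             # per-slot throughput
    print(f"K={K:5d}  E[log2(1+max SINR)]={rate:5.2f}  "
          f"log2(log K)={np.log2(np.log(K)):5.2f}")
```

The printed throughput tracks $\log_2(\log K)$ qualitatively, which is the conventional $\log\log K$ multiuser-diversity behavior the paper establishes for the SINR-based analysis.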
The problem of verifying whether a multi-component system has anomalies or not is addressed. Each component can be probed over time in a data-driven manner to obtain noisy observations that indicate whether the selected component is anomalous or not. The aim is to minimize the probability of incorrectly declaring the system to be free of anomalies while ensuring that the probability of correctly declaring it to be safe is sufficiently large. This problem is modeled as an active hypothesis testing problem in the Neyman-Pearson setting. Component-selection and inference strategies are designed and analyzed in the non-asymptotic regime. For a specific class of homogeneous problems, stronger (with respect to prior work) non-asymptotic converse and achievability bounds are provided.
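A hedged sketch of the probing-and-inference loop: the round-robin component selection, the per-component SPRTs, and all names below are illustrative assumptions; the paper designs component-selection strategies that improve on such a non-adaptive baseline.

```python
import numpy as np

def verify_system(probe, n, llr_fn, a, b, max_probes=100_000):
    """Round-robin probing with one SPRT per component (a < 0 < b).
    Declare the system anomalous as soon as any component's cumulative
    LLR (anomalous vs. normal) crosses b; declare it safe once every
    component's LLR has dropped below a. probe(i) returns one noisy
    observation of component i."""
    llr = np.zeros(n)
    undecided = set(range(n))
    for t in range(max_probes):
        i = t % n                         # simple non-adaptive selection
        if i not in undecided:
            continue
        llr[i] += llr_fn(probe(i))
        if llr[i] >= b:
            return "anomalous"
        if llr[i] <= a:
            undecided.discard(i)
            if not undecided:
                return "safe"
    return "undecided"
```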