
Decentralized sequential active hypothesis testing and the MAC feedback capacity

Published by Achilleas Anastasopoulos
Publication date: 2020
Research field: Information Engineering
Paper language: English





We consider the problem of decentralized sequential active hypothesis testing (DSAHT), where two transmitting agents, each possessing a private message, are actively helping a third agent--and each other--to learn the message pair over a discrete memoryless multiple access channel (DM-MAC). The third agent (receiver) observes the noisy channel output, which is also available to the transmitting agents via noiseless feedback. We formulate this problem as a decentralized dynamic team, show that optimal transmission policies have a time-invariant domain, and characterize the solution through a dynamic program. Several alternative formulations are discussed involving time-homogeneous cost functions and/or variable-length codes, resulting in solutions described through fixed-point, Bellman-type equations. Subsequently, we make connections with the problem of simplifying the multi-letter capacity expressions for the noiseless feedback capacity of the DM-MAC. We show that restricting attention to distributions induced by optimal transmission schemes for the DSAHT problem, without loss of optimality, transforms the capacity expression, so that it can be thought of as the average reward received by an appropriately defined stochastic dynamical system with time-invariant state space.
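The dynamic-programming structure described above can be illustrated on a much simpler, single-agent toy problem. The sketch below (Python, assuming a hypothetical two-hypothesis, two-query observation model and a per-sample cost) runs value iteration on a discretized belief state until a Bellman-type fixed point is reached; it only shows the belief-state/fixed-point idea, not the decentralized DM-MAC formulation of the paper.

```python
import numpy as np

# Toy active sequential hypothesis testing solved by value iteration on a belief
# grid. Hypothetical single-agent setup (NOT the decentralized DM-MAC model of
# the paper): two hypotheses H0/H1, two queries, binary observations.
P_Y1 = {                      # P_Y1[a][h] = P(Y = 1 | hypothesis h, query a)
    0: {0: 0.2, 1: 0.8},      # query 0 is informative
    1: {0: 0.4, 1: 0.6},      # query 1 is less informative
}
c = 0.02                              # cost paid per additional query
grid = np.linspace(0.0, 1.0, 501)     # belief p = P(H1 | past observations)

def posterior(p, a, y):
    """Bayes update of the belief p after observing y under query a."""
    l1 = P_Y1[a][1] if y == 1 else 1.0 - P_Y1[a][1]
    l0 = P_Y1[a][0] if y == 1 else 1.0 - P_Y1[a][0]
    return p * l1 / (p * l1 + (1.0 - p) * l0)

V = np.minimum(grid, 1.0 - grid)      # stopping cost = Bayes error of deciding now
for _ in range(500):                  # iterate the Bellman operator to a fixed point
    V_new = np.minimum(grid, 1.0 - grid)              # option 1: stop and decide
    for a in P_Y1:
        cont = np.zeros_like(grid)
        for y in (0, 1):
            like1 = P_Y1[a][1] if y == 1 else 1.0 - P_Y1[a][1]
            like0 = P_Y1[a][0] if y == 1 else 1.0 - P_Y1[a][0]
            py = grid * like1 + (1.0 - grid) * like0  # P(Y = y | belief, query a)
            p_next = np.array([posterior(p, a, y) for p in grid])
            cont += py * np.interp(p_next, grid, V)   # expected cost-to-go
        V_new = np.minimum(V_new, c + cont)           # option 2: pay c and ask query a
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("optimal expected cost at belief p = 0.5:", float(np.interp(0.5, grid, V)))
```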


Read also

Information theory has been very successful in obtaining performance limits for various problems such as communication, compression and hypothesis testing. Likewise, stochastic control theory provides a characterization of optimal policies for Partially Observable Markov Decision Processes (POMDPs) using dynamic programming. However, finding optimal policies for these problems is computationally hard in general and thus, heuristic solutions are employed in practice. Deep learning can be used as a tool for designing better heuristics in such problems. In this paper, the problem of active sequential hypothesis testing is considered. The goal is to design a policy that can reliably infer the true hypothesis using as few samples as possible by adaptively selecting appropriate queries. This problem can be modeled as a POMDP and bounds on its value function exist in the literature. However, optimal policies have not been identified and various heuristics are used. In this paper, two new heuristics are proposed: one based on deep reinforcement learning and another based on a KL-divergence zero-sum game. These heuristics are compared with state-of-the-art solutions, and it is demonstrated using numerical experiments that the proposed heuristics can achieve significantly better performance than existing methods in some scenarios.
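As a point of reference for what a query-selection heuristic can look like, the following sketch implements a simple greedy information-gain rule on a hypothetical three-hypothesis, three-query model: at each step it updates the posterior by Bayes' rule and asks the query that minimizes the expected posterior entropy. This is only an illustrative baseline, not the deep-reinforcement-learning or KL-divergence zero-sum-game heuristics proposed in the paper.

```python
import numpy as np

# Greedy information-gain baseline for active sequential hypothesis testing:
# update the posterior over hypotheses by Bayes' rule and ask the query that
# minimizes the expected posterior entropy. The likelihood table below is a
# made-up assumption.
rng = np.random.default_rng(0)

likelihood = np.array([        # likelihood[q, h] = P(Y = 1 | hypothesis h, query q)
    [0.9, 0.2, 0.5],
    [0.5, 0.8, 0.1],
    [0.3, 0.3, 0.9],
])
true_h = 2                     # ground-truth hypothesis generating the observations
belief = np.ones(3) / 3        # uniform prior over the three hypotheses

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

steps = 0
while belief.max() < 0.99 and steps < 50:
    scores = []
    for q in range(likelihood.shape[0]):       # expected posterior entropy per query
        expected_H = 0.0
        for y in (0, 1):
            like = likelihood[q] if y == 1 else 1.0 - likelihood[q]
            py = float(belief @ like)           # predictive probability of outcome y
            expected_H += py * entropy(belief * like / py)
        scores.append(expected_H)
    q = int(np.argmin(scores))                  # most informative query
    y = int(rng.random() < likelihood[q, true_h])   # simulate the noisy answer
    like = likelihood[q] if y == 1 else 1.0 - likelihood[q]
    belief = belief * like
    belief /= belief.sum()
    steps += 1

print(f"declared hypothesis {int(np.argmax(belief))} after {steps} queries")
```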
The main objective of this paper is to derive a new sequential characterization of the Cover and Pombra \cite{cover-pombra1989} characterization of the $n$-finite block or transmission feedback information ($n$-FTFI) capacity, which clarifies several issues of confusion and incorrect interpretation of results in the literature. The optimal channel input processes of the new equivalent sequential characterizations are expressed as functionals of a sufficient statistic and a Gaussian orthogonal innovations process. It follows from the new representations that the Cover and Pombra characterization of the $n$-FTFI capacity is expressed as a functional of two generalized matrix difference Riccati equations (DRE) of the filtering theory of Gaussian systems. This contradicts results which are redundant in the literature, and illustrates the fundamental complexity of the feedback capacity formula.
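For readers unfamiliar with difference Riccati equations, the toy sketch below iterates the standard Kalman-filter DRE for an arbitrary two-dimensional Gaussian state-space model (the matrices A, C, Q, R are made-up assumptions); it only illustrates the type of recursion the abstract refers to, not the Cover-Pombra channel or the paper's characterization.

```python
import numpy as np

# Toy illustration of a matrix difference Riccati equation (DRE) from Kalman
# filtering of a Gaussian state-space model. The matrices below are arbitrary
# assumptions, not the Cover-Pombra feedback channel.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])     # state dynamics
C = np.array([[1.0, 0.0]])     # observation map
Q = 0.1 * np.eye(2)            # process-noise covariance
R = np.array([[0.5]])          # observation-noise covariance

P = np.eye(2)                  # initial one-step prediction error covariance
for _ in range(200):
    S = C @ P @ C.T + R                     # innovations covariance
    K = A @ P @ C.T @ np.linalg.inv(S)      # Kalman (predictor) gain
    P = A @ P @ A.T - K @ S @ K.T + Q       # DRE update for the next error covariance

print("steady-state error covariance:\n", P)
print("steady-state innovations variance:", float(C @ P @ C.T + R))
```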
Eli Haim, Yuval Kochman (2017)
We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d. doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond to different levels of correlation between the two source components, i.e., the crossover probability between the two. The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates. We first consider the side-information setting where one encoder is allowed to send the full sequence. For this setting, previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision upon the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of binning error; the results are also more complete as they cover the full exponent tradeoff and all possible correlations. We then turn to the setting of symmetric rates for which we utilize Korner-Marton coding to generalize the results, with little degradation with respect to the performance with a one-sided constraint (side-information setting).
In this paper, we propose a Bayesian Hypothesis Testing Algorithm (BHTA) for sparse representation. It uses the Bayesian framework to determine active atoms in the sparse representation of a signal. The Bayesian hypothesis testing, based on three assumptions, determines the active atoms from the correlations and leads to the activity measure proposed in the Iterative Detection Estimation (IDE) algorithm. In fact, IDE uses an arbitrary decreasing sequence of thresholds, while the proposed algorithm is based on a sequence derived from hypothesis testing. Thus, the Bayesian hypothesis testing framework leads to an improved version of the IDE algorithm. The simulations show that the hard version of our suggested algorithm achieves one of the best results in terms of estimation accuracy among the algorithms implemented in our simulations, while it has the greatest complexity in terms of simulation time.
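To make the thresholding idea concrete, here is a minimal sketch of an IDE-style iteration with a decreasing threshold sequence on a synthetic sparse-recovery instance; the dictionary size, sparsity level, and threshold schedule are illustrative assumptions, and this is not the paper's BHTA.

```python
import numpy as np

# IDE-style sketch: correlate the residual with the dictionary, keep the atoms
# whose (estimate + correlation) exceeds the current threshold, and re-fit the
# kept atoms by least squares. All sizes and the threshold schedule are
# illustrative assumptions.
rng = np.random.default_rng(1)

n, m, k = 64, 128, 5
A = rng.standard_normal((n, m)) / np.sqrt(n)       # dictionary with ~unit-norm atoms
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                     # noiseless observations

x = np.zeros(m)
for thr in np.geomspace(1.0, 0.05, 20):            # decreasing threshold sequence
    corr = A.T @ (y - A @ x)                       # correlations of the residual
    active = np.abs(x + corr) > thr                # "activity test" for each atom
    x = np.zeros(m)
    if active.any():
        # least-squares re-estimate of the coefficients on the detected support
        x[active] = np.linalg.lstsq(A[:, active], y, rcond=None)[0]

print("recovered support matches:", set(np.flatnonzero(x)) == set(np.flatnonzero(x_true)))
print("relative error:", float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```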
The utility of limited feedback for coding over an individual sequence of DMCs is investigated. This study complements recent results showing how limited or noisy feedback can boost the reliability of communication. A strategy with fixed input distribution $P$ is given that asymptotically achieves rates arbitrarily close to the mutual information induced by $P$ and the state-averaged channel. When the capacity-achieving input distribution is the same over all channel states, this achieves rates at least as large as the capacity of the state-averaged channel, sometimes called the empirical capacity.