
Completion Time in Two-user Channels: An Information-Theoretic Perspective

Published by: Yuanpeng Liu
Publication date: 2015
Research field: Information engineering
Paper language: English


In a two-user channel, completion time refers to the number of channel uses spent by each user to transmit a bit pool of some given size. In this paper, the information-theoretic formulation of completion time is based on the concept of constrained rates, where users are allowed to employ different numbers of channel uses for transmission, as opposed to the equal channel use of the standard information-theoretic formulation. Analogous to the capacity region, the completion time region characterizes all possible trade-offs among users' completion times. For a multi-access channel, it is shown that the completion time region is achieved by operating the channel in two independent phases: a multi-access phase when both users are transmitting, and a point-to-point phase when one user has finished and the other is still transmitting. Using a similar two-phase approach, the completion time region (or inner and outer bounds) is established for a Gaussian broadcast channel and a Gaussian interference channel. It is observed that although it consists of two convex subregions, the completion time region may not be convex in general. Finally, an optimization problem of minimizing the weighted sum completion time for a Gaussian multi-access channel and a Gaussian broadcast channel is solved, demonstrating the utility of the completion time approach.
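To make the two-phase achievability concrete, here is a minimal sketch (assuming unit noise power; the function name `mac_two_phase_completion` is illustrative, not from the paper) of the completion-time pair obtained when phase 1 runs at a rate pair in the Gaussian MAC capacity region and phase 2 runs the unfinished user at its point-to-point capacity. It illustrates the time accounting only, not the paper's constrained-rate formulation.

```python
import math

def C(snr):
    """AWGN capacity in bits per channel use (unit noise power)."""
    return 0.5 * math.log2(1 + snr)

def mac_two_phase_completion(m1, m2, P1, P2, R1, R2):
    """Completion-time pair (c1, c2) of the two-phase scheme for a
    two-user Gaussian MAC: bit pools m1, m2, powers P1, P2, and a
    phase-1 rate pair (R1, R2) inside the MAC capacity region.

    Phase 1: both users transmit at (R1, R2) until one pool empties.
    Phase 2: the remaining user transmits alone at its point-to-point
    capacity.
    """
    assert R1 > 0 and R2 > 0
    assert R1 <= C(P1) and R2 <= C(P2) and R1 + R2 <= C(P1 + P2)
    if m1 / R1 <= m2 / R2:            # user 1 finishes first
        c1 = m1 / R1
        bits_left = m2 - R2 * c1      # user 2's remaining bits
        c2 = c1 + bits_left / C(P2)   # point-to-point phase
    else:                             # user 2 finishes first
        c2 = m2 / R2
        bits_left = m1 - R1 * c2
        c1 = c2 + bits_left / C(P1)
    return c1, c2
```

Sweeping the phase-1 rate pair along the dominant face of the MAC region traces out the achievable trade-off between the two completion times.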


Read also

This work identifies information-theoretic quantities that are closely related to the required list size for successive cancellation list (SCL) decoding to implement maximum-likelihood decoding. It also provides an approximation for these quantities that can be computed efficiently for very long codes. There is a concentration around the mean of the logarithm of the required list size for sufficiently large block lengths. We further provide a simple method to estimate the mean via density evolution for the binary erasure channel (BEC). Simulation results for the binary-input additive white Gaussian noise channel as well as the BEC demonstrate the accuracy of the mean estimate. A modified Reed-Muller code with dynamic frozen bits performs very close to the random coding union (RCU) bound down to the block error rate of $10^{-5}$ under SCL decoding with list size $L=128$ when the block length is $N=128$. The analysis shows how to modify the design to improve the performance when a more practical list size, e.g., $L=32$, is adopted while keeping the performance with $L=128$ unchanged. For the block length of $N=512$, a design performing within $0.4$ dB from the RCU bound down to the block error rate of $10^{-6}$ under an SCL decoder with list size $L=1024$ is provided. The design is modified using the new guidelines so that the performance improves with practical list sizes, e.g., $L \in \{8, 32, 128\}$, outperforming 5G designs.
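As context for the BEC mean estimate mentioned above, the standard density-evolution recursion for polar codes on a BEC is easy to state; the sketch below (function name illustrative, not from the paper) computes per-bit-channel erasure probabilities, from which frozen-set designs are typically derived.

```python
def bec_density_evolution(eps, n):
    """Erasure probabilities of the N = 2**n polarized bit-channels of
    a polar code over a BEC(eps), via the standard recursion: the
    'minus' transform maps z -> 2z - z^2, the 'plus' transform z -> z^2.
    (Indexing conventions for bit-channels vary across references.)"""
    z = [eps]
    for _ in range(n):
        z = [v for w in z for v in (2*w - w*w, w*w)]
    return z

# Example: pick the K most reliable positions as information bits.
probs = bec_density_evolution(0.5, 7)                      # N = 128
K = 64
info_set = sorted(range(len(probs)), key=lambda i: probs[i])[:K]
```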
A key practical constraint on the design of hybrid automatic repeat request (HARQ) schemes is the size of the on-chip buffer that is available at the receiver to store previously received packets. In fact, in modern wireless standards such as LTE and LTE-A, the HARQ buffer size is one of the main drivers of the modem area and power consumption. This has recently highlighted the importance of HARQ buffer management, that is, of the use of buffer-aware transmission schemes and of advanced compression policies for the storage of received data. This work investigates HARQ buffer management by leveraging information-theoretic achievability arguments based on random coding. Specifically, standard HARQ schemes, namely Type-I, Chase Combining and Incremental Redundancy, are first studied under the assumption of a finite-capacity HARQ buffer by considering both coded modulation, via Gaussian signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds light on the impact of different compression strategies, namely the conventional compression of log-likelihood ratios and the direct digitization of baseband signals, on the throughput. Then, coding strategies based on layered modulation and optimized coding blocklength are investigated, highlighting the benefits of HARQ buffer-aware transmission schemes. The optimization of baseband compression for multiple-antenna links is also studied, demonstrating the optimality of a transform coding approach.
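For orientation, the classical buffer-unconstrained baselines for Chase combining and incremental redundancy can be simulated with the usual accumulated-mutual-information decoding rule; the sketch below (illustrative names, exponential fading powers) shows that baseline model, which the work above then restricts with a finite buffer and compression of the stored signals.

```python
import math, random

def rounds_to_decode(R, snr_avg, scheme, max_rounds=4):
    """Rounds until a rate-R packet decodes, in the standard
    information-theoretic achievability model with i.i.d. Rayleigh
    block fading and an unconstrained receiver buffer:
      IR (incremental redundancy): success once sum_k log2(1+snr_k) >= R
      CC (Chase combining):        success once log2(1 + sum_k snr_k) >= R
    """
    mi = snr_sum = 0.0
    for k in range(1, max_rounds + 1):
        snr = snr_avg * random.expovariate(1.0)  # exponential fading power
        snr_sum += snr
        mi = mi + math.log2(1 + snr) if scheme == "IR" else math.log2(1 + snr_sum)
        if mi >= R:
            return k
    return None  # packet dropped after max_rounds

# Monte Carlo throughput estimate: delivered bits per channel round.
trials = [rounds_to_decode(R=3.0, snr_avg=4.0, scheme="IR") for _ in range(10000)]
throughput = 3.0 * sum(t is not None for t in trials) / sum(t or 4 for t in trials)
```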
Given a probability measure $\mu$ over $\mathbb{R}^n$, it is often useful to approximate it by the convex combination of a small number of probability measures, such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the 'number of components' is characterized by the entropy of the mixture, and 'closeness to product' is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information-theoretic (or estimation-theoretic) interpretation. The proof is analogous to that of an earlier decomposition result known as the 'pinning lemma'.
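Schematically (paraphrasing the abstract only; this is not the precise statement, constants, or norms of Eldan's theorem), the decomposition has the mixture form

```latex
% Schematic only: mixing weights p_i, components \mu_i.
\mu \;=\; \sum_i p_i\,\mu_i ,
\qquad \text{``number of components''} \;\sim\; H(p),
\qquad \text{``closeness to product''} \;\sim\; \operatorname{Cov}(\mu_i),
% i.e., the entropy of the mixing distribution controls how many
% components are needed, and each component is near-product in the
% sense that its covariance matrix is small.
```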
The characterisation of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modelling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated how to formulate transfer entropy in continuous time, we give a total account of information processing in this setting, incorporating information storage. We find that a convergent rate of predictive capacity, composed of the transfer entropy and active information storage, does not exist, owing to divergent rates of active information storage. We identify that active information storage can be decomposed into two separate quantities that characterise predictive capacity stored in a process: active memory utilisation and instantaneous predictive capacity. The latter involves prediction related to path regularity and so solely inherits the divergent properties of the active information storage, whilst the former permits definitions of pathwise and rate quantities. We formulate measures of memory utilisation for jump and neural spiking processes and illustrate measures of information processing in synthetic neural spiking models and coupled Ornstein-Uhlenbeck models. The application to synthetic neural spiking models demonstrates that active memory utilisation for point processes consists of discontinuous jump contributions (at spikes) interrupting a continuously varying contribution (relating to waiting times between spikes), complementing the behaviour previously demonstrated for transfer entropy in these processes.
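For orientation, the standard discrete-time definitions of the quantities named above read as follows (the paper's contribution is their continuous-time counterparts); by the chain rule, active information storage and transfer entropy sum to the predictive capacity:

```latex
% Target process X with length-k past X_n^{(k)} = (x_{n-k+1},...,x_n),
% source process Y with past Y_n^{(l)}.
A_X \;=\; I\!\left(X_n^{(k)};\,X_{n+1}\right), \qquad
T_{Y \to X} \;=\; I\!\left(Y_n^{(l)};\,X_{n+1}\,\middle|\,X_n^{(k)}\right),
% and by the chain rule of mutual information,
I\!\left(X_n^{(k)},\,Y_n^{(l)};\,X_{n+1}\right) \;=\; A_X \;+\; T_{Y \to X}.
```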
Yizhou Zhao, Hua Sun (2021)
In the robust secure aggregation problem, a server wishes to learn and only learn the sum of the inputs of a number of users while some users may drop out (i.e., may not respond). The identity of the dropped users is not known a priori and the server needs to securely recover the sum of the remaining surviving users. We consider the following minimal two-round model of secure aggregation. Over the first round, any set of no fewer than $U$ users out of $K$ users respond to the server and the server wants to learn the sum of the inputs of all responding users. The remaining users are viewed as dropped. Over the second round, any set of no fewer than $U$ users of the surviving users respond (i.e., dropouts are still possible over the second round) and from the information obtained from the surviving users over the two rounds, the server can decode the desired sum. The security constraint is that even if the server colludes with any $T$ users and the messages from the dropped users are received by the server (e.g., delayed packets), the server is not able to infer any additional information beyond the sum in the information theoretic sense. For this information theoretic secure aggregation problem, we characterize the optimal communication cost. When $U \leq T$, secure aggregation is not feasible, and when $U > T$, to securely compute one symbol of the sum, the minimum number of symbols sent from each user to the server is $1$ over the first round, and $1/(U-T)$ over the second round.
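To illustrate the two-round structure (a toy sketch only, using a Shamir-style masking scheme; the paper's communication-optimal construction achieving $1$ and $1/(U-T)$ symbols per user is not reproduced here), a minimal masked-aggregation example over a prime field:

```python
import random

P = 2**31 - 1  # a Mersenne prime; toy field size

def shamir_share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {j: sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)}

def shamir_reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, yj in points.items():
        num = den = 1
        for m in points:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

def secure_sum(inputs, U, round1, round2):
    """Toy two-round masked aggregation (illustration of the model
    above, NOT the paper's rate-optimal scheme)."""
    assert round2 <= round1 and len(round2) >= U
    K = len(inputs)
    masks = [random.randrange(P) for _ in range(K)]
    shares = [shamir_share(masks[i], U, K) for i in range(K)]
    # Round 1: each responding user sends its masked input (1 symbol).
    masked = {i: (inputs[i] + masks[i]) % P for i in round1}
    # Round 2: each survivor sends one aggregated share of the round-1
    # responders' masks; any U of these rebuild the mask sum, while any
    # T < U colluding users' shares reveal nothing about the masks.
    agg = {j + 1: sum(shares[i][j + 1] for i in round1) % P for j in round2}
    mask_sum = shamir_reconstruct(dict(list(agg.items())[:U]))
    return (sum(masked.values()) - mask_sum) % P

x = [3, 1, 4, 1, 5]
print(secure_sum(x, U=3, round1={0, 1, 3, 4}, round2={1, 3, 4}))  # -> 10
```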