
Capacities and Optimal Input Distributions for Particle-Intensity Channels

Submitted by Nariman Farsad
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This work introduces the particle-intensity channel (PIC) as a model for molecular communication systems and characterizes the capacity limits as well as properties of the optimal (capacity-achieving) input distributions for such channels. In the PIC, the transmitter encodes information, in symbols of a given duration, based on the probability of particle release, and the receiver detects and decodes the message based on the number of particles detected during the symbol interval. In this channel, the transmitter may be unable to control precisely the probability of particle release, and the receiver may not detect all the particles that arrive. We model this channel using a generalization of the binomial channel and show that the capacity-achieving input distribution for this channel always has mass points at probabilities of particle release of zero and one. To find the capacity-achieving input distributions, we develop an efficient algorithm we call dynamic assignment Blahut-Arimoto (DAB). For diffusive particle transport, we also derive the conditions under which the input with two mass points is capacity-achieving.
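The paper's DAB algorithm adapts the number and locations of the input mass points; as a rough illustration of the underlying capacity computation only, the sketch below runs the standard Blahut-Arimoto iteration on a fixed grid of release probabilities for a binomial-style channel with detection probability alpha. The grid, n, and alpha are assumptions for illustration, not the paper's parameters, and this is not the DAB algorithm itself.

```python
import numpy as np
from scipy.stats import binom

def blahut_arimoto(P, tol=1e-9, max_iter=20000):
    """Standard Blahut-Arimoto iteration for a discrete memoryless channel.
    P[x, y] is the probability of output y given input x."""
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)                 # start from the uniform input
    for _ in range(max_iter):
        q = p @ P                               # induced output distribution
        ratio = np.divide(P, q, out=np.ones_like(P), where=P > 0)
        d = np.sum(P * np.log(ratio), axis=1)   # D(P[x,:] || q) for each input x
        p_new = p * np.exp(d)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    capacity_bits = float(np.sum(p * d)) / np.log(2)
    return capacity_bits, p

# Toy PIC-like channel: the input is a release probability x on a fixed grid,
# and the output is the number of detected particles out of n trials, each
# detected with probability alpha * x (n, alpha, and the grid are assumptions).
n, alpha = 10, 0.6
xs = np.linspace(0.0, 1.0, 101)
P = np.array([binom.pmf(np.arange(n + 1), n, alpha * x) for x in xs])
C, p_opt = blahut_arimoto(P)
print(f"capacity ~ {C:.4f} bits/symbol; mass near x=0: {p_opt[0]:.3f}, near x=1: {p_opt[-1]:.3f}")
```

Consistent with the result quoted in the abstract, the endpoints x = 0 and x = 1 typically carry probability mass in such examples.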




Read also

We consider the problem of quantifying the Pareto-optimal boundary of the achievable rate region over multiple-input single-output (MISO) interference channels, where the problem boils down to solving a sequence of convex feasibility problems after certain transformations. The feasibility problem is solved by two new distributed optimal beamforming algorithms: the first parallelizes the computation using the method of alternating projections, and the second localizes the computation using the method of cyclic projections. Convergence proofs are established for both algorithms.
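Both algorithms build on classical projection methods for convex feasibility. The feasibility sets in the paper are SINR-type beamforming constraints; the sketch below illustrates only the generic alternating-projections machinery on two toy convex sets (a Euclidean ball and a half-space, both chosen purely for illustration).

```python
import numpy as np

def project_ball(x, radius):
    """Project x onto the Euclidean ball of the given radius centred at the origin."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def project_halfspace(x, a, b):
    """Project x onto the half-space {z : a^T z >= b}."""
    gap = b - a @ x
    return x if gap <= 0 else x + (gap / (a @ a)) * a

def alternating_projections(x0, radius, a, b, iters=200):
    """POCS-style alternating projections; converges to a point in the
    intersection of the two convex sets whenever it is nonempty."""
    x = x0
    for _ in range(iters):
        x = project_halfspace(project_ball(x, radius), a, b)
    return x

a = np.array([1.0, 1.0, 0.0])
x = alternating_projections(np.zeros(3), radius=1.0, a=a, b=1.0)
print(x, np.linalg.norm(x), x @ a)   # point satisfying both constraints
```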
Youjian Liu, 2008
Given the single-letter capacity formula and the converse proof of a channel without constraints, we provide a simple approach to extend the results to the same channel with constraints. The resulting capacity formula is the minimum of a Lagrange dual function. It gives a unified formula in the sense that it works regardless of whether the problem is convex. If the problem is non-convex, we show that the capacity can be larger than the formula obtained by the naive approach of imposing the constraints on the maximization in the capacity formula of the unconstrained case. The extension of the converse proof simply adds a term involving the Lagrange multiplier and the constraints; the rest of the proof does not need to be changed. We name this proof method the Lagrangian Converse Proof. In contrast, traditional approaches need to construct a better input distribution for convex problems or introduce a time-sharing variable for non-convex problems. We illustrate the Lagrangian Converse Proof for three channels: the classic discrete-time memoryless channel, the channel with non-causal channel-state information at the transmitter, and the channel with limited channel-state feedback. The extension to rate-distortion theory is also provided.
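In generic notation (mine, not necessarily the paper's: input cost c(x) and cost budget B), the kind of dual formula described above can be written as

```latex
C(B) \;=\; \min_{\lambda \ge 0} g(\lambda),
\qquad
g(\lambda) \;=\; \max_{p(x)} \Bigl[\, I(X;Y) \;-\; \lambda\bigl(\mathbb{E}[c(X)] - B\bigr) \Bigr].
```

If the capacity equals this dual minimum as claimed, weak duality already shows it can only exceed the naive constrained maximum $\max_{p:\,\mathbb{E}[c(X)]\le B} I(X;Y)$, which is consistent with the non-convex case discussed above.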
We consider the problem of minimizing the age of information when a source can transmit status updates over two heterogeneous channels. Our work is motivated by recent developments in 5G mmWave technology, where transmissions may occur over an unreliable but fast (e.g., mmWave) channel or a slow but reliable (e.g., sub-6GHz) channel. The unreliable channel is modeled as a time-correlated Gilbert-Elliott channel that supports a high data rate when it is in the ON state. The reliable channel provides a deterministic but lower data rate. The scheduling strategy determines the channel to be used for transmission in each time slot, aiming to minimize the time-average age of information (AoI). The optimal scheduling problem is formulated as a Markov Decision Process (MDP), which is challenging to solve because super-modularity does not hold in a part of the state space. We address this challenge and show that a multi-dimensional threshold-type scheduling policy is optimal for minimizing the age. By exploiting the structure of the MDP and analyzing the discrete-time Markov chains (DTMCs) of the threshold-type policy, we devise a low-complexity bisection algorithm to compute the optimal thresholds. We compare different scheduling policies using numerical simulations.
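The optimal policy in the paper is a multi-dimensional threshold over the MDP state, with thresholds computed by bisection; the sketch below is only a toy single-threshold simulation of the two-channel setup (the delay model and all parameters are assumptions for illustration), showing how the time-average AoI of candidate thresholds can be evaluated numerically.

```python
import numpy as np

def average_aoi(threshold, T=200_000, p_stay_on=0.9, p_stay_off=0.7,
                fast_delay=1, slow_delay=3, seed=0):
    """Toy simulation: use the slow-but-reliable channel whenever the age
    exceeds `threshold`; otherwise try the fast Gilbert-Elliott channel,
    which delivers only when it is ON. Returns the time-average AoI."""
    rng = np.random.default_rng(seed)
    age, on, total, t = slow_delay, True, 0, 0
    while t < T:
        if age > threshold:
            # reliable channel: deterministic delivery after slow_delay slots
            for _ in range(slow_delay):
                age += 1; total += age; t += 1
            age = slow_delay
        else:
            # unreliable channel: two-state Markov (Gilbert-Elliott) evolution
            on = rng.random() < (p_stay_on if on else 1.0 - p_stay_off)
            age += 1; total += age; t += 1
            if on:
                age = fast_delay          # fresh update delivered
    return total / t

for thr in (2, 4, 6, 8, 12):
    print(thr, round(average_aoi(thr), 3))
```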
New upper and lower bounds are presented on the capacity of the free-space optical intensity channel. This channel is characterized by inputs that are nonnegative (representing the transmitted optical intensity) and by outputs that are corrupted by additive white Gaussian noise (because in free space the disturbances arise from many independent sources). Due to battery and safety reasons, the inputs are simultaneously constrained in both their average and peak power. For a fixed ratio of the average power to the peak power, the difference between the upper and lower bounds tends to zero as the average power tends to infinity, and the ratio of the upper and lower bounds tends to one as the average power tends to zero. The case where only an average-power constraint is imposed on the input is treated separately. In this case, the difference between the upper and lower bounds tends to zero as the average power tends to infinity, and their ratio tends to a constant as the power tends to zero.
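For intuition only, a standard maximum-entropy/EPI argument (not necessarily one of the bounds derived in the paper) gives a lower bound of this flavour: with noise variance $\sigma^2$ and only an average-intensity constraint $\mathbb{E}[X] \le \mathcal{E}$, choosing $X$ exponential with mean $\mathcal{E}$ gives $h(X) = 1 + \ln\mathcal{E}$, and the entropy power inequality yields

```latex
C \;\ge\; \frac{1}{2}\ln\!\Bigl(1 + \frac{e^{2h(X)}}{2\pi e\,\sigma^{2}}\Bigr)
  \;=\; \frac{1}{2}\ln\!\Bigl(1 + \frac{e\,\mathcal{E}^{2}}{2\pi\,\sigma^{2}}\Bigr).
```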
Erdal Arikan, 2009
A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)} : 1 \le i \le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{\mathscr{C}_n; n \ge 1\}$ such that $\mathscr{C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \le O(N^{-1/4})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N \log N)$ for each.
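For the special case of a binary erasure channel, the single-step polarization transform has an exact closed form (an erasure probability z splits into 2z - z^2 and z^2), which makes the polarization phenomenon easy to check numerically. The sketch below is a standard illustration of this effect, not code from the paper.

```python
import numpy as np

def polarized_erasure_probs(eps, n):
    """Erasure probabilities of the N = 2^n synthesized channels when the base
    channel W is a BEC(eps). For the BEC the transform is exact: a channel with
    erasure probability z splits into one with 2z - z^2 and one with z^2."""
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

eps, n = 0.5, 14                  # BEC(0.5), N = 16384 synthesized channels
z = polarized_erasure_probs(eps, n)
good = np.mean(z < 1e-3)          # fraction of nearly noiseless channels
bad = np.mean(z > 1 - 1e-3)       # fraction of nearly useless channels
print(f"I(W) = {1 - eps}, fraction near capacity 1: {good:.3f}, near 0: {bad:.3f}")
```

As n grows, the two fractions approach I(W) and 1 - I(W), respectively, which is the polarization statement in the abstract.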