
Improved Upper Bound on the Network Function Computing Capacity

Added by Xuan Guang
Publication date: 2017
Research language: English





The problem of network function computation over a directed acyclic network is investigated in this paper. In such a network, a sink node desires to compute with zero error a target function whose inputs are generated at multiple source nodes. The edges in the network are assumed to be error-free and to have limited capacity, and the nodes are assumed to have unbounded computing capability and to be able to perform network coding. The computing rate of a network code that computes the target function over the network is the average number of times that the target function is computed with zero error per use of the network. In this paper, we obtain an improved upper bound on the computing capacity that is applicable to arbitrary target functions and arbitrary network topologies. This improved upper bound not only enhances the previous upper bounds but also is the first tight upper bound on the computing capacity for computing an arithmetic sum over a certain non-tree network, which has been widely studied in the literature. We also introduce a multi-dimensional array approach that facilitates evaluation of the improved upper bound. Furthermore, we apply this bound to the problem of computing a vector-linear function over a network. With this bound, we not only enhance a previous result on computing a vector-linear function over a network but also simplify its proof significantly. Finally, we prove that for computing the binary maximum function over the reverse butterfly network, our improved upper bound is not achievable. This result establishes that in general our improved upper bound is not achievable, but whether it is asymptotically achievable remains open.
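
For reference, the computing rate and computing capacity admit the standard formulation used in the network function computation literature; the notation below is illustrative and not taken verbatim from the paper. A $(k, n)$ network code computes the target function $f$ a total of $k$ times while using each edge of the network at most $n$ times, so that

\[
  \mathcal{C}(\mathcal{N}, f) \;=\; \sup\Bigl\{ \tfrac{k}{n} \;:\; \text{there exists a } (k,n) \text{ network code that computes } f \text{ with zero error over } \mathcal{N} \Bigr\}
\]

is the computing capacity, i.e. the supremum of achievable computing rates.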



Related research

We derive a single-letter upper bound on the mismatched-decoding capacity of discrete memoryless channels. The bound is expressed as the mutual information of a transformation of the channel, chosen such that a maximum-likelihood decoding error on the transformed channel implies a mismatched-decoding error on the original channel. In particular, a strong converse is shown to hold for this upper bound: if the rate exceeds it, the probability of error tends to 1 exponentially as the block length tends to infinity. We also show that the underlying optimization problem is a convex-concave problem and that an efficient iterative algorithm converges to the optimal solution. In addition, we show that, unlike the achievable rates in the literature, the multi-letter version of the bound does not improve on its single-letter counterpart. A number of examples are discussed throughout the paper.
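For context, the standard mismatched-decoding rule (not specific to this paper's construction) replaces the true channel law with a fixed decoding metric $q$: the decoder outputs

\[
  \hat{m} \;=\; \arg\max_{m} \prod_{i=1}^{n} q\bigl(x_i(m), y_i\bigr),
\]

whereas maximum-likelihood decoding would use the true transition probability $W(y_i \mid x_i(m))$ in place of $q$. The upper bound above is obtained by passing to a transformed channel on which maximum-likelihood errors imply mismatched errors on the original channel.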
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result in this part shows that the number of mass points in the capacity-achieving input distribution is within a factor of two of the number of zeros of the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is $O(\mathsf{A}^2)$, where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is $O(\mathsf{A}^2)$, where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of mass points for the case of $n=1$ with an additional power constraint.
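For reference, the scalar ($n=1$) model studied in the first part can be written as (notation illustrative):

\[
  Y = X + Z, \qquad Z \sim \mathcal{N}(0, 1), \qquad |X| \le \mathsf{A},
\]

and the stated result is that the capacity-achieving input distribution is supported on at most $O(\mathsf{A}^2)$ mass points in $[-\mathsf{A}, \mathsf{A}]$.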
In this work, novel upper and lower bounds for the capacity of channels with arbitrary constraints on the support of the channel input symbols are derived. As an immediate practical application, the case of multiple-input multiple-output channels with amplitude constraints is considered. The bounds are shown to be within a constant gap of each other if the channel matrix is invertible, and they are tight in the high-amplitude regime for arbitrary channel matrices. Moreover, in the high-amplitude regime, it is shown that the capacity scales linearly with the minimum of the number of transmit and receive antennas, similarly to the case of average-power-constrained inputs.
This paper gives new methods of constructing symmetric self-dual codes over a finite field $GF(q)$, where $q$ is a power of an odd prime. These methods are motivated by the well-known Pless symmetry codes and quadratic double circulant codes. Using these methods, we construct a number of symmetric self-dual codes over $GF(11)$, $GF(19)$, and $GF(23)$ of every length less than 42. We also find 153 new self-dual codes up to equivalence: they are $[32, 16, 12]$, $[36, 18, 13]$, and $[40, 20, 14]$ codes over $GF(11)$, $[36, 18, 14]$ and $[40, 20, 15]$ codes over $GF(19)$, and $[32, 16, 12]$, $[36, 18, 14]$, and $[40, 20, 15]$ codes over $GF(23)$. They all have new parameters with respect to self-dual codes. Consequently, we improve bounds on the highest minimum distance of self-dual codes, which have not been significantly updated for almost two decades.
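To make the self-duality condition concrete: a $[n, n/2]$ code with generator matrix $G$ over $GF(q)$ is self-dual exactly when $G G^{T} \equiv 0 \pmod{q}$ and $G$ has full row rank. The following minimal Python sketch checks this condition; the $[4, 2]$ example matrix over $GF(11)$ is purely illustrative and is not one of the codes constructed in the paper.

import numpy as np

def is_self_dual(G, q):
    """Check whether the row space of G is a self-dual code over GF(q), q prime.

    A [n, n/2] code with generator matrix G is self-dual iff all pairs of rows
    are orthogonal, i.e. G @ G.T == 0 (mod q), and G has full row rank n/2.
    """
    G = np.array(G, dtype=int) % q
    k, n = G.shape
    if 2 * k != n:
        return False
    if np.any((G @ G.T) % q):          # every inner product must vanish mod q
        return False
    # simple row reduction over GF(q) to confirm rank k
    M, r = G.copy(), 0
    for c in range(n):
        piv = next((i for i in range(r, k) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), q - 2, q)) % q   # scale pivot to 1
        for i in range(k):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % q
        r += 1
    return r == k

# Illustrative (not from the paper): a tiny [4, 2] self-dual code over GF(11).
G_example = [[1, 0, 1, 3],
             [0, 1, 8, 1]]
print(is_self_dual(G_example, 11))     # True

The same check applies unchanged to the $GF(19)$ and $GF(23)$ codes mentioned above, since the self-duality criterion only depends on the prime $q$.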
In this paper, we study the effect of a single link on the capacity of a network of error-free bit pipes. More precisely, we study the change in network capacity that results when we remove a single link of capacity $\delta$. In a recent result, we proved that if all the sources are directly available to a single super-source node, then removing a link of capacity $\delta$ cannot change the capacity region of the network by more than $\delta$ in each dimension. In this paper, we extend this result to the case of multi-source, multi-sink networks for some special network topologies.