
Is Non-Unique Decoding Necessary?

Publication date: 2013
Language: English





In multi-terminal communication systems, signals carrying messages meant for different destinations are often observed together at any given destination receiver. Han and Kobayashi (1981) proposed a receiving strategy that performs joint unique decoding of the messages of interest along with a subset of the messages not of interest. It is now well known that this yields an achievable region which is, in general, larger than the one obtained when the receiver treats all messages not of interest as noise. Nair and El Gamal (2009) and Chong, Motani, Garg, and El Gamal (2008) independently proposed a generalization, called indirect or non-unique decoding, in which the receiver uses the codebook structure of the messages to uniquely decode only its messages of interest. Non-unique decoding has since been used in various scenarios. The main result of this paper is an interpretation, and a systematic proof technique, showing that in all known cases where non-unique decoding has been employed, it can be replaced by a suitably designed joint unique decoding strategy without any penalty from a rate-region viewpoint.
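To make the two strategies concrete, here is a minimal hypothetical sketch, not the paper's proof technique: minimum-distance decoding over random binary codebooks on a toy channel Y = X1 XOR X2 XOR Z stands in for joint-typicality decoding. The non-unique decoder scores each candidate message of interest by its best companion message without ever committing to the latter; the joint decoder commits to the whole pair. On a given received word the two estimates of the message of interest coincide here; the paper's result concerns the deeper claim that the two strategies achieve the same rate region. All names and parameters below are illustrative assumptions.

```python
import random

random.seed(0)
n, M1, M2, p = 64, 8, 8, 0.05  # blocklength, codebook sizes, crossover prob.

def rand_codebook(m, n):
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]

C1, C2 = rand_codebook(M1, n), rand_codebook(M2, n)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def channel(x1, x2):
    # Y = X1 XOR X2 XOR Z with Z ~ Bernoulli(p).
    return [a ^ b ^ (random.random() < p) for a, b in zip(x1, x2)]

def dist(y, i, j):
    return hamming(y, [a ^ b for a, b in zip(C1[i], C2[j])])

def joint_unique(y):
    # Decode the single best pair (m1, m2), then keep only m1.
    return min((dist(y, i, j), i) for i in range(M1) for j in range(M2))[1]

def non_unique(y):
    # Decode m1 alone: score each m1 by its best companion m2,
    # whose value is never committed to.
    return min(range(M1), key=lambda i: min(dist(y, i, j) for j in range(M2)))

m1, m2 = random.randrange(M1), random.randrange(M2)
y = channel(C1[m1], C2[m2])
print("joint:", joint_unique(y), "non-unique:", non_unique(y), "sent:", m1)
```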



Related Research


A new class of folded subspace codes for noncoherent network coding is presented. The codes can correct insertions and deletions beyond the unique decoding radius for any code rate $R \in [0,1]$. An efficient interpolation-based decoding algorithm for this code construction is given, which can correct insertions and deletions up to the normalized radius $s(1-((1/h+h)/(h-s+1))R)$, where $h$ is the folding parameter and $s \leq h$ is a decoding parameter. The algorithm serves as a list decoder or as a probabilistic unique decoder that outputs a unique solution with high probability. An upper bound on the average list size of (folded) subspace codes and on the decoding failure probability is derived. A major benefit of the decoding scheme is that it enables probabilistic unique decoding up to the list decoding radius.
We address the problem of decoding Gabidulin codes beyond their unique error-correction radius. The complexity of this problem matters for assessing the security of some rank-metric code-based cryptosystems. We propose an approach that introduces row or column erasures to decrease the rank of the error, so that any proper polynomial-time Gabidulin code error-erasure decoding algorithm can then be applied. This approach improves on generic rank-metric decoders by an exponential factor.
The goal of threshold group testing is to identify up to $d$ defective items among a population of $n$ items, where $d$ is usually much smaller than $n$. A test is positive if it has at least $u$ defective items and negative otherwise. Our objective is to identify defective items in time sublinear in the number of items, e.g., $\mathrm{poly}(d, \ln n)$, using as few tests as possible. In this paper, we reduce the number of tests to $O\left( h \times \frac{d^2 \ln^2 n}{\mathsf{W}^2(d \ln n)} \right)$ and the decoding time to $O\left( \mathrm{dec}_0 \times h \right)$, where $\mathrm{dec}_0 = O\left( \frac{d^{3.57} \ln^{6.26} n}{\mathsf{W}^{6.26}(d \ln n)} \right) + O\left( \frac{d^6 \ln^4 n}{\mathsf{W}^4(d \ln n)} \right)$, $h = O\left( \frac{d_0^2 \ln\frac{n}{d_0}}{(1-p)^2} \right)$, $d_0 = \max\{u, d - u\}$, $p \in [0, 1)$, and $\mathsf{W}(x) = \Theta\left( \ln x - \ln\ln x \right)$. If the number of tests is increased to $O\left( h \times \frac{d^2 \ln^3 n}{\mathsf{W}^2(d \ln n)} \right)$, the decoding complexity is reduced to $O\left( \mathrm{dec}_1 \times h \right)$, where $\mathrm{dec}_1 = \max\left\{ \frac{d^2 \ln^3 n}{\mathsf{W}^2(d \ln n)}, \frac{ud \ln^4 n}{\mathsf{W}^3(d \ln n)} \right\}$. Moreover, our proposed scheme is capable of handling errors in test outcomes.
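For illustration only, a minimal sketch of the threshold test model described above; the population size, pool size, and parameters are assumptions, not the paper's construction.

```python
import random

random.seed(1)
n, d, u = 100, 6, 2
defectives = set(random.sample(range(n), d))

def threshold_test(pool, defectives, u):
    # A test is positive iff the pool contains at least u defective items.
    return len(set(pool) & defectives) >= u

pool = random.sample(range(n), 20)
print("positive" if threshold_test(pool, defectives, u) else "negative")
```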
The task of non-adaptive group testing is to identify up to $d$ defective items from $N$ items, where a test is positive if it contains at least one defective item, and negative otherwise. If there are $t$ tests, they can be represented as a $t \times N$ measurement matrix. We have answered the question of whether there exists a scheme such that a larger measurement matrix, built from a given $t \times N$ measurement matrix, can be used to identify up to $d$ defective items in time $O(t \log_2 N)$. In the meantime, a $t \times N$ nonrandom measurement matrix with $t = O\left( \frac{d^2 \log_2^2 N}{(\log_2(d\log_2 N) - \log_2\log_2(d\log_2 N))^2} \right)$ can be obtained to identify up to $d$ defective items in time $\mathrm{poly}(t)$. This is much better than the best known bound, $t = O\left( d^2 \log_2^2 N \right)$. For the special case $d = 2$, there exists an efficient nonrandom construction in which at most two defective items can be identified in time $4\log_2^2 N$ using $t = 4\log_2^2 N$ tests. Numerical results show that our proposed scheme is more practical than existing ones, and experimental results confirm our theoretical analysis. In particular, up to $2^7 = 128$ defective items can be identified in less than 16 s even for $N = 2^{100}$.
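As background for the decoding-time claims above, here is a hedged sketch of the classical COMP baseline decoder, not the nonrandom construction from the abstract: an item is declared defective iff it never appears in a negative test, so COMP never misses a true defective but may retain false positives. The sizes and the random Bernoulli design are illustrative assumptions.

```python
import random

random.seed(2)
N, t, d = 200, 60, 3
defectives = set(random.sample(range(N), d))

# Random Bernoulli measurement matrix: row i lists the items in test i.
tests = [[j for j in range(N) if random.random() < 1.0 / d] for _ in range(t)]
outcomes = [any(j in defectives for j in pool) for pool in tests]

# COMP rule: eliminate every item that occurs in at least one negative test.
candidates = set(range(N))
for pool, positive in zip(tests, outcomes):
    if not positive:
        candidates -= set(pool)

print("decoded:", sorted(candidates), "true:", sorted(defectives))
```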
Identification of up to $d$ defective items and up to $h$ inhibitors in a set of $n$ items is the main task of non-adaptive group testing with inhibitors. To efficiently reduce the cost of this Herculean task, a subset of the $n$ items is formed and then tested. This is called \textit{group testing}. A test outcome on a subset of items is positive if the subset contains at least one defective item and no inhibitors, and negative otherwise. We present two decoding schemes for efficiently identifying the defective items and the inhibitors in the presence of $e$ erroneous outcomes in time $\mathsf{poly}(d, h, e, \log_2 n)$, which is sublinear in the number of items $n$. This decoding complexity significantly improves on the state-of-the-art schemes, in which the decoding time is linear in the number of items $n$, i.e., $\mathsf{poly}(d, h, e, n)$. Moreover, each column of the measurement matrices associated with the proposed schemes can be nonrandomly generated in time polynomial in the number of rows. As a result, one can save space for storing them. Simulation results confirm our theoretical analysis. When the number of items is sufficiently large, the decoding time of our proposed scheme is the smallest in comparison with existing work. In addition, when some erroneous outcomes are allowed, the number of tests in the proposed scheme is often smaller than in existing work.
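A minimal sketch of the inhibitor test model described above, for illustration only; the toy sets stand in for the proposed measurement matrices.

```python
def inhibitor_test(pool, defectives, inhibitors):
    # Positive iff the pool has at least one defective item and no inhibitor.
    pool = set(pool)
    return bool(pool & defectives) and not (pool & inhibitors)

print(inhibitor_test({1, 2, 3}, {2}, set()))  # True: a defective, no inhibitor
print(inhibitor_test({1, 2, 3}, {2}, {3}))    # False: inhibitor 3 masks the test
```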
