
Perfect Z2Z4-linear codes in Steganography

Added by Lorena Ronquillo
Publication date: 2010
Language: English





Steganography is an information-hiding application which aims to hide secret data imperceptibly in commonly used media. Unfortunately, the theoretical asymptotic hiding capacity of steganographic systems is not attained by the algorithms developed so far. In this paper, we describe a novel coding method based on Z2Z4-linear codes that conforms to +/-1-steganography, that is, secret data is embedded into a cover message by distorting each symbol by at most one unit. This method solves some problems encountered by the most efficient methods known today, which are based on ternary Hamming codes. Finally, the performance of this new technique is compared with that of those methods and with the well-known theoretical upper bound.
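
To make the embedding constraint concrete, here is a minimal Python sketch of +/-1 matrix embedding with the ternary Hamming code for m = 2, i.e. the family of methods the paper compares against, not the paper's Z2Z4-linear construction. The matrix H, the function names and the unhandled pixel-range saturation are assumptions of the sketch.

    import numpy as np

    # Parity-check matrix of the ternary Hamming code for m = 2: one column
    # per projective point of GF(3)^2, giving n = (3^2 - 1)/2 = 4 columns.
    H = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 2]])

    def embed(cover, message):
        # Embed two trits into four cover symbols, changing at most one
        # symbol by +1 or -1 (the +/-1-steganography constraint).
        x = np.array(cover, dtype=int)
        t = (np.array(message) - H @ (x % 3)) % 3   # required syndrome shift
        if t.any():
            for j in range(H.shape[1]):
                if np.array_equal(H[:, j], t):            # t = +column j
                    x[j] += 1
                    break
                if np.array_equal((2 * H[:, j]) % 3, t):  # t = -column j
                    x[j] -= 1
                    break
        return x

    def extract(stego):
        return (H @ (np.array(stego) % 3)) % 3

    cover = [7, 3, 5, 8]              # e.g. grayscale pixel values
    stego = embed(cover, [2, 1])      # -> [7, 4, 5, 8]
    print(extract(stego))             # -> [2 1]

Four cover symbols thus carry two ternary digits at the cost of at most one +/-1 change; handling saturated symbols (e.g. pixel values 0 or 255, where only one direction of change is possible) is one of the practical issues such schemes must address, and this sketch omits it.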



Related research


J. Rifà, L. Ronquillo (2010)
Product perfect codes have been proven to enhance the performance of the F5 steganographic method, whereas perfect Z2Z4-linear codes have recently been introduced as an efficient way to embed data conforming to +/-1-steganography. In this paper, we present two steganographic methods. On the one hand, a generalization of product perfect codes is made. On the other hand, this generalization is applied to perfect Z2Z4-linear codes. Finally, the performance of the proposed methods is evaluated and compared with that of the aforementioned schemes.
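
For context, matrix embedding as popularized by F5 uses binary Hamming codes: 2^m - 1 cover bits carry m message bits with at most one flipped bit. A minimal Python sketch for m = 3 follows; F5's shrinkage handling and the product-code generalization are omitted, and the function name is an assumption.

    import numpy as np

    # Parity-check matrix of the [7,4] binary Hamming code: column j-1 is
    # the 3-bit binary expansion of j, for j = 1..7.
    H = np.array([[int(b) for b in format(j, '03b')] for j in range(1, 8)]).T

    def f5_embed(lsbs, message):
        # Embed 3 bits into 7 cover LSBs, flipping at most one of them.
        x = np.array(lsbs) % 2
        t = (np.array(message) - H @ x) % 2       # syndrome mismatch
        if t.any():
            j = int(''.join(map(str, t)), 2) - 1  # mismatch indexes the flip
            x[j] ^= 1
        return x

    stego = f5_embed([1, 0, 1, 1, 0, 0, 1], [1, 0, 1])
    print(stego, (H @ stego) % 2)   # recovered syndrome equals the message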
A code $\mathcal{C}$ is $Z_2Z_4$-additive if the set of coordinates can be partitioned into two subsets $X$ and $Y$ such that the punctured code of $\mathcal{C}$ obtained by deleting the coordinates outside $X$ (respectively, $Y$) is a binary linear code (respectively, a quaternary linear code). In this paper, $Z_2Z_4$-additive codes are studied. Their corresponding binary images, via the Gray map, are $Z_2Z_4$-linear codes, which seem to be a very distinguished class of binary group codes. As for binary and quaternary linear codes, the fundamental parameters of these codes are found, and standard forms for generator and parity-check matrices are given. To this end, the appropriate inner product is deduced and the concept of duality for $Z_2Z_4$-additive codes is defined. Moreover, the parameters of the dual codes are computed. Finally, some conditions for self-duality of $Z_2Z_4$-additive codes are given.
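
The Gray map referred to here is the standard map phi: Z_4 -> Z_2^2 with phi(0) = 00, phi(1) = 01, phi(2) = 11, phi(3) = 10, applied coordinate-wise to the quaternary part of a vector while the binary part is left unchanged. A minimal sketch (the function name is an assumption):

    # Gray map from Z4 to Z2^2, extended coordinate-wise; the binary
    # coordinates of a Z2Z4 vector pass through unchanged.
    GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

    def gray_image(binary_part, quaternary_part):
        # Binary image of a Z2Z4 vector: length alpha + 2*beta.
        image = list(binary_part)
        for q in quaternary_part:
            image.extend(GRAY[q % 4])
        return image

    # e.g. the Z2Z4 vector (1 | 2, 3) maps to (1, 1, 1, 1, 0)
    print(gray_image([1], [2, 3]))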
Linear codes with few weights have important applications in authentication codes, secret sharing, consumer electronics, etc. The determination of parameters such as the Hamming weight distribution and the complete weight enumerator of a linear code is an important research topic. In this paper, we consider some classes of linear codes with few weights and determine their complete weight enumerators, from which the corresponding Hamming weight distributions are derived with the help of some sums involving the Legendre symbol.
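
As a toy illustration of the objects involved, the sketch below computes Legendre symbols via Euler's criterion and the complete weight enumerator of a small ternary code by brute force, then derives the Hamming weight distribution from it. The example generator matrix is an arbitrary choice, unrelated to the code classes studied in the paper.

    import itertools
    from collections import Counter

    def legendre(a, p):
        # Legendre symbol (a/p) for an odd prime p, via Euler's criterion.
        r = pow(a % p, (p - 1) // 2, p)
        return -1 if r == p - 1 else r

    def complete_weight_enumerator(G, p):
        # Count, for every codeword of the code generated by G over GF(p),
        # how often each symbol 0..p-1 occurs (brute force over p^k words).
        k = len(G)
        cwe = Counter()
        for msg in itertools.product(range(p), repeat=k):
            cw = [sum(m * g for m, g in zip(msg, col)) % p for col in zip(*G)]
            cwe[tuple(cw.count(s) for s in range(p))] += 1
        return cwe

    G = [[1, 0, 1, 1], [0, 1, 1, 2]]       # a [4,2] ternary code
    cwe = complete_weight_enumerator(G, 3)
    print(cwe)
    # The Hamming weight distribution forgets which nonzero symbol occurred:
    print(Counter(4 - c[0] for c in cwe.elements()))
    print(legendre(2, 7), legendre(3, 7))  # 2 is a QR mod 7, 3 is not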
The Doob graph $D(m,n)$ is the Cartesian product of $m>0$ copies of the Shrikhande graph and $n$ copies of the complete graph of order $4$. Naturally, $D(m,n)$ can be represented as a Cayley graph on the additive group $(Z_4^2)^m \times (Z_2^2)^{n'} \times Z_4^{n''}$, where $n'+n''=n$. A set of vertices of $D(m,n)$ is called an additive code if it forms a subgroup of this group. We construct a $3$-parameter class of additive perfect codes in Doob graphs and show that the known necessary conditions for the existence of additive $1$-perfect codes in $D(m,n'+n'')$ are sufficient. Additionally, two quasi-cyclic additive $1$-perfect codes are constructed in $D(155,0+31)$ and $D(2667,0+127)$.
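
For small parameters, D(m,n) can be built directly from this definition. The sketch below uses the standard representation of the Shrikhande graph as a Cayley graph on Z_4 x Z_4 with connection set {+-(1,0), +-(0,1), +-(1,1)}, together with networkx Cartesian products; it is only feasible for tiny m and n, nothing like D(155,0+31).

    import networkx as nx
    from itertools import product

    def shrikhande():
        # Shrikhande graph: Cayley graph on Z4 x Z4 with connection set
        # {+-(1,0), +-(0,1), +-(1,1)}; 16 vertices, 6-regular.
        S = [(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)]
        G = nx.Graph()
        for v in product(range(4), repeat=2):
            for s in S:
                G.add_edge(v, ((v[0] + s[0]) % 4, (v[1] + s[1]) % 4))
        return G

    def doob(m, n):
        # Doob graph D(m, n), m >= 1: Cartesian product of m copies of
        # the Shrikhande graph and n copies of K4.
        G = shrikhande()
        for _ in range(m - 1):
            G = nx.cartesian_product(G, shrikhande())
        for _ in range(n):
            G = nx.cartesian_product(G, nx.complete_graph(4))
        return G

    D = doob(1, 1)                       # 4^(2m+n) = 64 vertices
    print(D.number_of_nodes(), D.degree[((0, 0), 0)])   # 64, 6m+3n = 9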
High-quality data is essential for training a robust deep learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, which allows the data to be exploited. Utilizing this fact, and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For the BCH(63,36), (63,45) and (127,64) codes, with cycle-reduced parity-check matrices, an FER improvement of up to 0.4 dB in the waterfall region and of up to 1.5 dB in the error-floor region over the original WBP is demonstrated by smartly sampling the data, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a model by incorporating domain knowledge from the error-correcting codes field into a deep learning model. These guidelines can be adapted to any other deep-learning-based communication block.
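
The observation that decoder training data is free to query and label can be made concrete: given the code and a channel model, labelled batches are generated on demand. A minimal NumPy sketch under assumed BPSK/AWGN conventions follows; the function name is an assumption, and the paper's active-learning-style selection of which samples to train on would sit on top of this and is not shown.

    import numpy as np

    def sample_batch(n, snr_db, batch, rng=np.random.default_rng(0)):
        # Decoder training data is free: send the BPSK image of the
        # all-zero codeword, add AWGN, and keep the true hard-decision
        # errors as labels. By channel symmetry this suffices for
        # BP-style decoders of linear codes.
        sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))
        y = np.ones((batch, n)) + sigma * rng.normal(size=(batch, n))
        llr = 2 * y / sigma ** 2        # channel LLRs fed to the decoder
        labels = (y < 0).astype(int)    # positions received in error
        return llr, labels

    llrs, errs = sample_batch(n=63, snr_db=4.0, batch=128)
    print(llrs.shape, errs.mean())      # raw channel error rate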
