
Improving success probability and embedding efficiency in code based steganography

Added by Morgan Barbier
Publication date: 2013
Language: English





For stegoschemes arising from error-correcting codes, embedding depends on a decoding map for the corresponding code. As decoding maps are usually not complete, embedding can fail. We propose a method, based on code puncturing, to ensure or increase the probability of embedding success for these stegoschemes. We also show how the use of punctured codes may increase the embedding efficiency of the obtained stegoschemes.
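To make the embedding mechanism concrete, the following is a minimal, illustrative Python sketch of syndrome (matrix) embedding with the binary [7,4] Hamming code: the message is carried as the syndrome of the cover vector, and the code's decoding map supplies the low-weight change pattern. Because the Hamming code is perfect, its decoding map is complete and embedding always succeeds; for non-perfect codes the analogous step can fail, which is the failure mode the punctured-code construction addresses. The function names and the choice of code are for illustration only and are not taken from the paper.

```python
# Minimal syndrome-embedding sketch with the [7,4] binary Hamming code.
# Illustrative only; the paper's punctured-code construction is not shown.
import numpy as np

# Parity-check matrix H: column j is the binary representation of j (1..7).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def embed(cover, message):
    """Change at most one bit of the 7-bit cover so its syndrome equals the 3-bit message."""
    s = (H @ cover + message) % 2          # syndrome difference to cancel
    stego = cover.copy()
    if s.any():
        # Decoding map of a perfect code: flip the position whose column equals s.
        pos = int("".join(map(str, s)), 2) - 1
        stego[pos] ^= 1
    return stego

def extract(stego):
    """The recipient recovers the message as the syndrome of the stego vector."""
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
message = np.array([1, 0, 1])
stego = embed(cover, message)
assert np.array_equal(extract(stego), message)
print("changed bits:", int((cover != stego).sum()))  # at most 1
```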



Related Research

Jinming Wen, Chao Tong, Shi Bai (2018)
The zero-forcing (ZF) decoder is a commonly used approximate solution of the integer least squares problem, which arises in communications and many other applications. Numerical simulations have shown that the LLL reduction can usually improve the success probability $P_{ZF}$ of the ZF decoder. In this paper, we first rigorously show that both SQRD and V-BLAST, two commonly used lattice reductions, have no effect on $P_{ZF}$. Then, we show that the LLL reduction can improve $P_{ZF}$ when $n=2$, and we analyze how the parameter $\delta$ in the LLL reduction affects the enhancement of $P_{ZF}$. Finally, an example is given which shows that the LLL reduction can decrease $P_{ZF}$ when $n \geq 3$.
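As a point of reference, here is a minimal Python sketch (not from the paper) of the ZF decoder together with a Monte Carlo estimate of its success probability $P_{ZF}$, assuming the usual linear model y = Ax + v with an integer parameter vector x and Gaussian noise v; the SQRD, V-BLAST, and LLL reduction steps analyzed in the abstract are omitted.

```python
# Monte Carlo estimate of the ZF decoder's success probability P_ZF.
# Assumed model: y = A x + v, with integer x and v ~ N(0, sigma^2 I).
import numpy as np

rng = np.random.default_rng(0)

def zf_decode(A, y):
    """Zero-forcing: solve the unconstrained least squares problem, then round entrywise."""
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.rint(x_ls).astype(int)

def estimate_p_zf(A, sigma, trials=5000):
    n = A.shape[1]
    hits = 0
    for _ in range(trials):
        x = rng.integers(-5, 6, size=n)                      # true integer vector
        y = A @ x + sigma * rng.standard_normal(A.shape[0])  # noisy observation
        hits += np.array_equal(zf_decode(A, y), x)
    return hits / trials

A = rng.standard_normal((2, 2))        # n = 2, the case analyzed in the paper
print("estimated P_ZF:", estimate_p_zf(A, sigma=0.1))
```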
This paper is concerned with detecting an integer parameter vector inside a box from a linear model corrupted by a Gaussian noise vector. One of the commonly used detectors is the maximum likelihood detector, obtained by solving a box-constrained integer least squares problem, which is NP-hard. Two other popular detectors are the box-constrained rounding and Babai detectors, owing to their high efficiency of implementation. In this paper, we first present formulas for the success probabilities (the probabilities of correct detection) of these three detectors in two different situations: when the integer parameter vector is deterministic and when it is uniformly distributed over the constraint box. Then, we give two simple examples to show, respectively, that the success probability of the box-constrained rounding detector can be larger than that of the box-constrained Babai detector, and that the latter can be larger than the success probability of the maximum likelihood detector when the parameter vector is deterministic; we also prove that the success probability of the box-constrained rounding detector is never larger than that of the box-constrained Babai detector when the parameter vector is uniformly distributed over the constraint box. Some relations between the results for the box-constrained and ordinary cases are presented, and two easily computable bounds on the success probability of the maximum likelihood detector are developed. Finally, simulation results are provided to illustrate our main theoretical findings.
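The two suboptimal detectors compared in that abstract can be sketched as follows. This is an illustrative Python implementation under the model y = Ax + v with x constrained to an integer box; the box bounds lo and up and the test data are hypothetical, and this is not the authors' code.

```python
# Sketches of the box-constrained rounding and Babai detectors for y = A x + v,
# with x constrained to the integer box [lo, up]^n.  Illustrative only.
import numpy as np

def bc_rounding(A, y, lo, up):
    """Round the real least squares solution entrywise, then clamp to the box."""
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.clip(np.rint(x_ls), lo, up).astype(int)

def bc_babai(A, y, lo, up):
    """Box-constrained Babai point: QR factorize, then back-substitute with
    entrywise rounding and clamping (successive interference cancellation)."""
    Q, R = np.linalg.qr(A)
    y_bar = Q.T @ y
    n = A.shape[1]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        c = (y_bar[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        x[i] = int(np.clip(np.rint(c), lo, up))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
x_true = rng.integers(0, 4, size=4)                 # box [0, 3]^4
y = A @ x_true + 0.05 * rng.standard_normal(4)
print(bc_rounding(A, y, 0, 3), bc_babai(A, y, 0, 3), x_true)
```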
Driven by the growing spectrum shortage, Long-Term Evolution in unlicensed spectrum (LTE-U) has recently been proposed as a new paradigm to deliver better performance and experience for mobile users by extending the LTE protocol to unlicensed spectrum. In this paper, we first present a comprehensive overview of the LTE-U technology and discuss the practical challenges it faces. We summarize the existing LTE-U operation modes and analyze several means for LTE-U coexistence with Wi-Fi medium access control protocols. We further propose a novel hyper access point (HAP) that integrates the functionalities of an LTE small cell base station and a commercial Wi-Fi AP for deployment by cellular network operators. Our proposed LTE-U access embedding within the Wi-Fi protocol is non-disruptive to unlicensed Wi-Fi nodes and demonstrates performance benefits as a seamless and novel LTE and Wi-Fi coexistence technology in the unlicensed band. We provide results to demonstrate the performance advantage of this novel LTE-U proposal.
J. Rifa, L. Ronquillo (2010)
Product perfect codes have been proven to enhance the performance of the F5 steganographic method, whereas perfect Z2Z4-linear codes have recently been introduced as an efficient way to embed data, conforming to ±1-steganography. In this paper, we present two steganographic methods. First, a generalization of product perfect codes is made; second, this generalization is applied to perfect Z2Z4-linear codes. Finally, the performance of the proposed methods is evaluated and compared with that of the aforementioned schemes.
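For context, the figures of merit used in such comparisons, relative payload and embedding efficiency, can be computed for plain matrix embedding with binary Hamming codes (the baseline F5 setting) as in the short sketch below; the Z2Z4-linear and product-code constructions of the paper are not reproduced here.

```python
# Relative payload and embedding efficiency of plain matrix embedding with
# binary Hamming codes [2^r - 1, 2^r - 1 - r]: r bits are hidden per block,
# and a block needs no change with probability 2^-r.
for r in range(2, 8):
    n = 2**r - 1                       # block length (cover symbols per block)
    payload = r / n                    # embedded bits per cover symbol
    avg_changes = (2**r - 1) / 2**r    # expected changed symbols per block
    efficiency = r / avg_changes       # embedded bits per embedding change
    print(f"r={r}  n={n:3d}  payload={payload:.3f}  efficiency={efficiency:.3f}")
```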
A new computational private information retrieval (PIR) scheme based on random linear codes is presented. A matrix of messages from a McEliece scheme is used to query the server with carefully chosen errors. The server responds with the sum of scalar multiples of the rows of the query matrix and the files. The user recovers the desired file by erasure decoding the response. In contrast to code-based cryptographic systems, the scheme presented here makes it possible to use truly random codes, not only codes disguised as such. Further, we show the relation to the so-called error subspace search problem and quotient error search problem, which we assume to be difficult, and show that the scheme is secure against attacks based on solving these problems.