
An Adaptive-Parity Error-Resilient LZ77 Compression Algorithm

Added by Tomaz Korosec
Publication date: 2008
Language: English

The paper proposes an improved error-resilient Lempel-Ziv77 (LZ77) algorithm that employs an adaptive amount of parity bits for error protection. It is a modified version of the recently proposed error-resilient algorithm LZRS77, which uses a constant amount of parity across all encoded blocks of data. The constant amount of parity is bounded by the lowest-redundancy part of the encoded string, whereas adaptive parity exploits the available redundancy of the encoded string more efficiently and can on average be much higher. The proposed algorithm thus provides better error protection of the encoded data. The performance of both algorithms was measured, and the comparison showed a noticeable improvement from the use of adaptive parity: the proposed algorithm can correct up to a few times as many errors as the original one, while its compression performance remains practically unchanged.
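The gain from adaptive parity can be illustrated with a small allocation sketch. The per-block redundancy figures below are hypothetical; in LZRS77-style schemes the spare bits come from LZ77 pointers that admit more than one valid match, and the constant scheme must size its parity to the poorest block:

```python
# A minimal sketch contrasting constant- and adaptive-parity allocation.
# The redundancy values per block are made up for illustration only.

def constant_parity(redundancy_per_block):
    """LZRS77-style: every block carries the same amount of parity,
    so the allocation is capped by the least-redundant block."""
    r = min(redundancy_per_block)
    return [r] * len(redundancy_per_block)

def adaptive_parity(redundancy_per_block):
    """Proposed scheme: each block uses all the redundancy it has."""
    return list(redundancy_per_block)

blocks = [12, 40, 33, 8, 27]          # spare bits available per block (hypothetical)
print(sum(constant_parity(blocks)))   # 40 parity bits in total
print(sum(adaptive_parity(blocks)))   # 120 parity bits in total
```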



Related research

Modern image and video compression codes exploit elaborate structures in such signals to encode them into a small number of bits. Compressed sensing recovery algorithms, on the other hand, use such signal structures to recover signals from a few linear observations. Despite steady progress in compressed sensing, the structures typically used for signal recovery remain much simpler than those employed by state-of-the-art compression codes. The main goal of this paper is to bridge this gap by answering the following question: can a given compression code be used to build an efficient (polynomial-time) compressed sensing recovery algorithm? In response, the compression-based gradient descent (C-GD) algorithm is proposed. C-GD, a low-complexity iterative algorithm, can employ a generic compression code for compressed sensing and therefore extends the scope of structures used in compressed sensing to those used by compression codes. The convergence of C-GD and the number of measurements it requires are analyzed theoretically in terms of the rate-distortion performance of the compression code. It is also shown that C-GD is robust to additive white Gaussian noise. Finally, the presented simulation results show that combining C-GD with commercial image compression codes such as JPEG2000 yields state-of-the-art performance in imaging applications.
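A minimal sketch of the C-GD idea follows, using a toy keep-the-k-largest-entries compressor as a stand-in for a real compression code such as JPEG2000; all dimensions and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def toy_compress_decompress(x, k):
    """Stand-in for a compress/decompress round trip: keep the k
    largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def c_gd(y, A, k, step=0.5, iters=300):
    """Gradient step on ||y - Ax||^2 followed by projection through
    the compression code, iterated from zero."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient descent step
        x = toy_compress_decompress(x, k)    # compression-based projection
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 120, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # sub-Gaussian sensing matrix
x_hat = c_gd(A @ x_true, A, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```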
Future wireless networks, such as the Centralized Radio Access Network (C-RAN), will need to deliver data rates about 100 to 1000 times those of current 4G technology. For C-RAN based network architectures, there is a pressing need to greatly enhance the effective data rate of the Common Public Radio Interface (CPRI). Compression of CPRI data is one potential enhancement. In this paper, we introduce a vector quantization based compression algorithm for CPRI links built on the Lloyd algorithm. Methods to vectorize the I/Q samples and enhanced initialization of the Lloyd algorithm for codebook training are investigated for improved performance. Multi-stage vector quantization and unequally protected multi-group quantization are considered to reduce codebook search complexity and codebook size. Simulation results show that our solution achieves compression of 4 times for the uplink and 4.5 times for the downlink, within 2% Error Vector Magnitude (EVM) distortion. Remarkably, the vector quantization codebook proves quite robust against data modulation mismatch, fading, signal-to-noise ratio (SNR), and Doppler spread.
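The codebook-training step can be sketched with the plain Lloyd algorithm; the vector length, codebook size, random initialization, and synthetic I/Q data below are illustrative stand-ins for the enhanced initialization and multi-stage variants the paper investigates:

```python
import numpy as np

def lloyd_codebook(vectors, codebook_size, iters=20, seed=0):
    """Plain Lloyd training: alternate nearest-codeword assignment
    and centroid updates (random initialization for simplicity)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iters):
        # Assignment step: map each training vector to its nearest codeword.
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each codeword to the centroid of its cell.
        for j in range(codebook_size):
            cell = vectors[labels == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
iq = rng.normal(size=(4000, 2))   # synthetic interleaved I/Q sample pairs
vecs = iq.reshape(-1, 4)          # vectorize: two complex samples per vector
cb = lloyd_codebook(vecs, codebook_size=64)
print(cb.shape)                   # (64, 4)
```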
Algebraic codes such as BCH codes are receiving renewed interest, as their short block lengths and low (or absent) error floors make them attractive for ultra-reliable low-latency communications (URLLC) in 5G wireless networks. This paper aims at enhancing traditional adaptive belief propagation (ABP) decoding, a soft-in-soft-out (SISO) decoding method for high-density parity-check (HDPC) algebraic codes such as Reed-Solomon (RS) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and product codes. The key idea of traditional ABP is to sparsify the columns of the parity-check matrix corresponding to the least reliable bits, i.e., those with small log-likelihood-ratio (LLR) magnitudes. This sparsification strategy may not be optimal when some bits have large LLR magnitudes but wrong signs. Motivated by this observation, we propose a perturbed ABP (P-ABP) that incorporates a small number of unstable bits with large LLRs into the sparsification of the parity-check matrix. In addition, we propose partial layered scheduling and hybrid dynamic scheduling to further enhance the performance of P-ABP. Simulation results show that our proposed decoding algorithms achieve better error-correction performance and faster convergence than prior-art ABP variants.
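The sparsification step that ABP builds on can be sketched as GF(2) Gaussian elimination targeting the least-reliable columns. The toy parity-check matrix and LLR values below are hypothetical, and the BP decoding itself (as well as P-ABP's mixing-in of unstable large-LLR bits) is omitted:

```python
import numpy as np

def sparsify(H, llr):
    """Gaussian-eliminate H over GF(2) so that the columns of the
    least reliable bits (smallest |LLR|) become unit-weight."""
    H = H.copy() % 2
    m = H.shape[0]
    order = np.argsort(np.abs(llr))   # least reliable bits first
    row = 0
    for col in order:
        if row == m:
            break
        pivots = np.nonzero(H[row:, col])[0]
        if len(pivots) == 0:
            continue                  # column is dependent; skip it
        H[[row, row + pivots[0]]] = H[[row + pivots[0], row]]  # bring pivot up
        for r in range(m):
            if r != row and H[r, col]:
                H[r] ^= H[row]        # clear the column everywhere else
        row += 1
    return H

H = np.array([[1, 1, 0, 1, 0, 0],    # toy parity-check matrix (hypothetical)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([-0.2, 3.1, 0.4, -1.8, 0.1, 2.5])
print(sparsify(H, llr))              # columns 4, 0, 2 become unit-weight
```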
The generalized approximate message passing (GAMP) algorithm is an efficient method for MAP or approximate-MMSE estimation of $x$ observed through a noisy version of the transform coefficients $z = Ax$. For large zero-mean i.i.d. sub-Gaussian $A$, GAMP is characterized by a state evolution whose fixed points, when unique, are optimal. For generic $A$, however, GAMP may diverge. In this paper, we propose adaptive damping and mean-removal strategies that aim to prevent divergence. Numerical results demonstrate significantly enhanced robustness to non-zero-mean, rank-deficient, column-correlated, and ill-conditioned $A$.
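The two stabilizing ideas can be sketched on a generic damped iteration rather than a full GAMP implementation: the mean removal below is only a recentering step (the paper's version also augments the model), and the adaptive rule halves the damping factor whenever a least-squares cost surrogate increases. The step size is deliberately too large, so the undamped iteration would diverge and the damping has to rescue it:

```python
import numpy as np

def remove_mean(A):
    """Recenter a non-zero-mean A (recentring only, in this sketch)."""
    return A - A.mean()

def adaptive_damping_demo(A, y, iters=100):
    """Damped Landweber iteration standing in for damped GAMP updates."""
    x = np.zeros(A.shape[1])
    beta, prev_cost = 1.0, np.inf
    step = 3.0 / np.linalg.norm(A, 2) ** 2   # oversized: undamped diverges
    for _ in range(iters):
        candidate = x + step * (A.T @ (y - A @ x))
        x_new = (1 - beta) * x + beta * candidate   # damped update
        cost = np.linalg.norm(y - A @ x_new) ** 2
        if cost > prev_cost:
            beta = max(0.05, beta / 2)    # cost rose: damp more strongly
        else:
            x, prev_cost = x_new, cost
            beta = min(1.0, 1.1 * beta)   # cost fell: relax the damping
    return x

rng = np.random.default_rng(0)
A = remove_mean(rng.normal(loc=2.0, size=(80, 20)))  # non-zero-mean A
x_true = rng.normal(size=20)
x_hat = adaptive_damping_demo(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```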
This article proposes a novel iterative algorithm based on low-density parity-check (LDPC) codes for compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered in the article compresses one source at a rate determined by the encoder's knowledge of the mean source correlation, while the other correlated source serves as side information at the decoder, which decompresses the first source based on estimates of the actual correlation. We demonstrate that, depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case where the decoder does not use the implicit knowledge of the existence of correlation.
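The syndrome-based flavor of this setup can be sketched at toy scale: the encoder transmits only the syndrome of the source word, and the decoder combines it with the correlated side information. The tiny parity-check matrix and exhaustive search below stand in for a long LDPC code with belief-propagation decoding:

```python
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix (hypothetical)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(x):
    """Slepian-Wolf encoder: send only the syndrome (6 bits -> 3 bits)."""
    return H @ x % 2

def decode(s, y, max_flips=2):
    """Find the lowest-weight e with H(y + e) = s; then x = y + e (mod 2)."""
    n = H.shape[1]
    target = (s + H @ y) % 2         # syndrome of the correlation noise
    for w in range(max_flips + 1):
        for pos in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(pos)] = 1
            if np.array_equal(H @ e % 2, target):
                return (y + e) % 2
    return y                          # no explanation found within max_flips

x = np.array([1, 0, 1, 1, 0, 0])
y = x.copy()
y[2] ^= 1                             # side information differs in one bit
print(np.array_equal(decode(encode(x), y), x))   # True
```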