DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we reduce the 2991 megabyte (MB) genome to 6.99 MB, whereas Gzip compresses it to 834.8 MB.
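A minimal sketch of the two-stage idea described above, assuming a greedy position-by-position matcher and using zlib as a stand-in for the entropy coder (illustrative only, not the authors' implementation):

import zlib

def map_to_reference(reference: str, target: str):
    # Greedy toy mapper (a hypothetical simplification of the paper's mapping
    # stage): emit ("copy", start, length) for runs matching the reference at
    # the same position, and ("sub", base) for single-base mismatches.
    ops, i = [], 0
    while i < len(target):
        run = 0
        while (i + run < len(target) and i + run < len(reference)
               and target[i + run] == reference[i + run]):
            run += 1
        if run:
            ops.append(("copy", i, run))
            i += run
        else:
            ops.append(("sub", target[i]))
            i += 1
    return ops

def compress_mapping(ops) -> bytes:
    # Serialize the mapping and pass it through a generic entropy coder;
    # zlib stands in for the entropy coder used in the paper.
    text = ";".join("C%d,%d" % op[1:] if op[0] == "copy" else "S" + op[1]
                    for op in ops)
    return zlib.compress(text.encode(), level=9)

ref = "ACGTACGTACGTACGT"
tgt = "ACGTACGAACGTACGT"   # one substitution relative to ref
print(compress_mapping(map_to_reference(ref, tgt)))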
Being able to store and transmit human genome sequences is an important part of genomic research and industrial applications. The complete human genome has 3.1 billion base pairs (haploid), and storing the entire genome naively takes about 3 GB, which is infeasible for large-scale usage. However, human genomes are highly redundant: any given individual's genome differs from another individual's by less than 1%. Tools such as DNAZip exploit this by expressing a given genome sequence as only the differences between that sequence and a reference genome sequence, which allows the given genome to be compressed losslessly to roughly 4 MB. In this work, we demonstrate additional improvements on top of the DNAZip library, achieving a further ~11% compression beyond DNAZip's already impressive results. This allows further savings in disk space and network costs when storing and transmitting human genome sequences.
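A toy illustration of the difference-based representation, assuming single-base substitutions only (real tools such as DNAZip also handle indels and compress the difference list itself):

def diff_against_reference(reference: str, individual: str):
    # Record only the positions where the individual's genome departs from
    # the reference, as (position, reference base, individual's base).
    assert len(reference) == len(individual)
    return [(i, r, s) for i, (r, s) in enumerate(zip(reference, individual))
            if r != s]

def reconstruct(reference: str, diffs) -> str:
    # Lossless reconstruction: apply the stored substitutions to the reference.
    seq = list(reference)
    for pos, _ref_base, alt_base in diffs:
        seq[pos] = alt_base
    return "".join(seq)

ref = "ACGTACGTAC"
ind = "ACGTTCGTAC"                     # differs at position 4
d = diff_against_reference(ref, ind)
assert reconstruct(ref, d) == ind      # lossless
print(d)                               # [(4, 'A', 'T')]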
Modern image and video compression codes exploit elaborate structures present in such signals to encode them into a small number of bits. Compressed sensing recovery algorithms, on the other hand, use such signal structures to recover the signals from a few linear observations. Despite the steady progress in the field of compressed sensing, the structures typically used for signal recovery remain much simpler than those employed by state-of-the-art compression codes. The main goal of this paper is to bridge this gap by answering the following question: can one employ a given compression code to build an efficient (polynomial-time) compressed sensing recovery algorithm? In response to this question, the compression-based gradient descent (C-GD) algorithm is proposed. C-GD, a low-complexity iterative algorithm, is able to employ a generic compression code for compressed sensing and therefore broadens the scope of structures used in compressed sensing to those used by compression codes. The convergence of C-GD and its required number of measurements are analyzed theoretically in terms of the rate-distortion performance of the compression code. It is also shown that C-GD is robust to additive white Gaussian noise. Finally, the presented simulation results show that combining C-GD with commercial image compression codes such as JPEG2000 yields state-of-the-art performance in imaging applications.
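A schematic sketch of the projected-gradient idea behind C-GD, assuming a generic compress-then-decompress operator as the projection; here a top-k magnitude projection stands in for a real compression code such as JPEG2000, and the step size and iteration count are illustrative choices, not the values analyzed in the paper:

import numpy as np

def c_gd(y, A, project, eta=0.5, iters=300):
    # Compression-based gradient descent, schematically: alternate a gradient
    # step on ||y - A x||^2 with a "projection" obtained by running the
    # compression code's encoder and decoder back to back.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + eta * A.T @ (y - A @ x)   # gradient step
        x = project(x)                    # compress-then-decompress
    return x

def topk_projection(k):
    # Crude stand-in for a compression code: keep the k largest-magnitude entries.
    def project(x):
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        z[idx] = x[idx]
        return z
    return project

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = c_gd(y, A, topk_projection(k))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))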
Future wireless networks, such as the Centralized Radio Access Network (C-RAN), will need to deliver data rates roughly 100 to 1000 times those of current 4G technology. For a C-RAN based network architecture, there is a pressing need for a tremendous enhancement of the effective data rate of the Common Public Radio Interface (CPRI). Compression of CPRI data is one of the potential enhancements. In this paper, we introduce a vector quantization based compression algorithm for CPRI links built on the Lloyd algorithm. Methods to vectorize the I/Q samples and an enhanced initialization of the Lloyd algorithm for codebook training are investigated for improved performance. Multi-stage vector quantization and unequally protected multi-group quantization are considered to reduce codebook search complexity and codebook size. Simulation results show that our solution achieves compression ratios of 4x for the uplink and 4.5x for the downlink within 2% Error Vector Magnitude (EVM) distortion. Remarkably, the vector quantization codebook proves quite robust to data modulation mismatch, fading, signal-to-noise ratio (SNR) variation, and Doppler spread.
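A minimal sketch of the Lloyd (k-means style) codebook training step on vectorized I/Q samples; the vector dimension, codebook size, and random initialization below are illustrative and do not reproduce the enhanced initialization or multi-stage structure studied in the paper:

import numpy as np

def vectorize_iq(iq: np.ndarray, dim: int) -> np.ndarray:
    # Group consecutive complex I/Q samples into real-valued training vectors
    # of length 2*dim (I parts followed by Q parts). Illustrative vectorization.
    usable = (len(iq) // dim) * dim
    blocks = iq[:usable].reshape(-1, dim)
    return np.column_stack([blocks.real, blocks.imag])

def lloyd_codebook(train: np.ndarray, size: int, iters: int = 30) -> np.ndarray:
    # Plain Lloyd iteration: nearest-codeword assignment followed by centroid
    # update, starting from randomly chosen training vectors.
    rng = np.random.default_rng(0)
    codebook = train[rng.choice(len(train), size, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        for j in range(size):
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

# Toy CPRI-like data: noisy QPSK I/Q samples.
rng = np.random.default_rng(1)
iq = (rng.choice([-1.0, 1.0], 4096) + 1j * rng.choice([-1.0, 1.0], 4096)
      + 0.1 * (rng.normal(size=4096) + 1j * rng.normal(size=4096)))
train = vectorize_iq(iq, dim=2)
codebook = lloyd_codebook(train, size=16)
print(codebook.shape)   # (16, 4): a 4-bit index encodes each pair of complex samples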
This article proposes a novel iterative algorithm based on Low-Density Parity-Check (LDPC) codes for compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered in the article addresses the problem of compressing one source at a rate determined by knowledge of the mean source correlation at the encoder, while the other correlated source is employed as side information at the decoder, which decompresses the first source based on estimates of the actual correlation. We demonstrate that, depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case where the decoder does not use the implicit knowledge of the existence of correlation.
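A toy illustration of syndrome-based compression with decoder side information, using a small (7,4) Hamming parity-check matrix and brute-force decoding in place of the LDPC code and iterative decoder studied in the article:

import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, standing in for an LDPC code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    # Compression: transmit only the 3-bit syndrome of the 7-bit source block.
    return H @ x % 2

def decode(syndrome, y):
    # The decoder sees a correlated copy y of the source and picks the sequence
    # with the matching syndrome that is closest to y (brute force here, rather
    # than the iterative LDPC decoding used in the article).
    best, best_dist = None, None
    for cand in itertools.product([0, 1], repeat=7):
        cand = np.array(cand)
        if np.array_equal(H @ cand % 2, syndrome):
            dist = int(np.sum(cand != y))
            if best is None or dist < best_dist:
                best, best_dist = cand, dist
    return best

x = np.array([1, 0, 1, 1, 0, 0, 1])   # source block
y = x.copy()
y[2] ^= 1                             # side information: one bit flipped
x_hat = decode(encode(x), y)
print(np.array_equal(x_hat, x))       # True: 7 source bits recovered from a 3-bit syndrome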
This paper provides an extensive study of the behavior of the best achievable rate (and other related fundamental limits) in variable-length lossless compression. In the non-asymptotic regime, the fundamental limits of fixed-to-variable lossless compression with and without prefix constraints are shown to be tightly coupled. Several precise, quantitative bounds are derived, connecting the distribution of the optimal codelengths to the source information spectrum, and an exact analysis of the best achievable rate for arbitrary sources is given. Fine asymptotic results are proved for arbitrary (not necessarily prefix) compressors on general mixing sources. Non-asymptotic, explicit Gaussian approximation bounds are established for the best achievable rate on Markov sources. The source dispersion and the source varentropy rate are defined and characterized. Together with the entropy rate, the varentropy rate serves to tightly approximate the fundamental non-asymptotic limits of fixed-to-variable compression for all but very small blocklengths.
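A hedged sketch of the central quantities, in notation that may differ from the paper:

\[
  \sigma^2 \;=\; \lim_{n\to\infty} \frac{1}{n}\,\operatorname{Var}\!\bigl(-\log P_{X^n}(X^n)\bigr)
  \qquad \text{(varentropy rate)},
\]
\[
  R^*(n,\epsilon) \;\approx\; H \;+\; \frac{\sigma}{\sqrt{n}}\, Q^{-1}(\epsilon),
\]

where R^*(n, \epsilon) is the best achievable rate at blocklength n with excess-length probability at most \epsilon, H is the entropy rate, and Q^{-1} is the inverse Gaussian tail function; the finer log(n)/n correction terms, which distinguish prefix from non-prefix compressors, are omitted in this rough approximation.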