
Lossless Astronomical Image Compression and the Effects of Noise

Posted by: William Pence
Publication date: 2009
Research field: Physics
Paper language: English





We compare a variety of lossless image compression methods on a large sample of astronomical images and show how the compression ratios and speeds of the algorithms are affected by the amount of noise in the images. In the ideal case where the image pixel values have a random Gaussian distribution, the equivalent number of uncompressible noise bits per pixel is given by Nbits = log2(sigma * sqrt(12)), and the lossless compression ratio is given by R = BITPIX / (Nbits + K), where BITPIX is the bit length of the pixel values and K is a measure of the efficiency of the compression algorithm. We perform image compression tests on a large sample of integer astronomical CCD images using the GZIP compression program and using a newer FITS tiled-image compression method that currently supports 4 compression algorithms: Rice, Hcompress, PLIO, and GZIP. Overall, the Rice compression algorithm strikes the best balance of compression and computational efficiency; it is 2--3 times faster and produces about 1.4 times greater compression than GZIP. The Rice algorithm produces 75%--90% (depending on the amount of noise in the image) as much compression as an ideal algorithm with K = 0. The image compression and uncompression utility programs used in this study (called fpack and funpack) are publicly available from the HEASARC web site. A simple command-line interface may be used to compress or uncompress any FITS image file.
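As a worked example, the following minimal Python sketch (not part of the fpack/funpack distribution; the K value for Rice used here is an assumed illustrative figure, not a result from the paper) evaluates the two formulas above:

    import math

    def noise_bits(sigma):
        # Equivalent uncompressible noise bits per pixel for Gaussian noise
        # with standard deviation sigma (in units of one integer ADU step).
        return math.log2(sigma * math.sqrt(12))

    def compression_ratio(bitpix, sigma, k):
        # Predicted lossless compression ratio R = BITPIX / (Nbits + K).
        # K = 0 corresponds to an ideal compression algorithm.
        return bitpix / (noise_bits(sigma) + k)

    # Example: a 16-bit integer CCD image with sigma = 10 ADU.
    print(noise_bits(10.0))                  # ~5.11 noise bits per pixel
    print(compression_ratio(16, 10.0, 0))    # ~3.13, the ideal ratio
    print(compression_ratio(16, 10.0, 0.5))  # ~2.85 with K = 0.5 bits of overhead

Note how the predicted ratio falls as the noise (and hence Nbits) grows, which is why noisier images compress less in the tests reported above.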




Read also

We introduce a simple and efficient lossless image compression algorithm. We store a low-resolution version of an image as raw pixels, followed by several iterations of lossless super-resolution. For lossless super-resolution, we predict the probability of a high-resolution image, conditioned on the low-resolution input, and use entropy coding to compress this super-resolution operator. Super-Resolution based Compression (SReC) is able to achieve state-of-the-art compression rates with practical runtimes on large datasets. Code is available online at https://github.com/caoscott/SReC.
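To make the cost accounting behind this idea concrete, here is a toy Python/NumPy sketch (not the SReC code; the hand-rolled discretized-Laplacian predictor is an assumed stand-in for the learned conditional network) that stores a 2x-downsampled image raw and charges -log2 p(pixel | low-res) bits for every full-resolution pixel:

    import numpy as np

    def ideal_code_length_bits(image, predicted_mean, scale=2.0):
        # Ideal entropy-coding cost if each pixel is modeled as a discretized
        # Laplacian centered on its prediction (SReC learns this model instead).
        residual = image.astype(np.int64) - predicted_mean.astype(np.int64)
        support = np.arange(-255, 256)
        pmf = np.exp(-np.abs(support) / scale)
        pmf /= pmf.sum()
        return float(-np.log2(pmf[residual + 255]).sum())

    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:64, 0:64]
    hi = ((x + y) * 2 + rng.integers(0, 4, size=(64, 64))).astype(np.uint8)  # smooth test image
    lo = hi[::2, ::2]                                      # 2x downsampled version, stored raw
    pred = np.kron(lo, np.ones((2, 2), dtype=np.uint8))    # nearest-neighbor "super-resolution"
    total_bits = lo.size * 8 + ideal_code_length_bits(hi, pred)
    print(total_bits / hi.size, "bits/pixel versus 8 for raw storage")

Replacing the fixed predictor with a learned conditional probability model, and repeating the step over several scales, is in essence what lets SReC reach state-of-the-art rates.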
Gangtao Xin, Pingyi Fan (2020)
Soft compression is a lossless image compression method that eliminates coding redundancy and spatial redundancy at the same time by using the locations and shapes of codebook entries to encode an image, from the perspective of information theory and statistical distribution. In this paper, we propose a new concept, the compressible indicator function of an image, which gives a threshold for the average number of bits required to represent a location and can be used to reveal the performance of soft compression. We investigate and analyze soft compression for binary, gray, and multi-component images using specific algorithms and the compressible indicator value. It is expected that the bandwidth and storage space needed to transmit and store images of the same kind can be greatly reduced by applying soft compression.
We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs, PNG, WebP and JPEG 2000. At the core of our method is a fully parallelizable hierarchical probabilistic model for adaptive entropy coding which is optimized end-to-end for the compression task. In contrast to recent autoregressive discrete probabilistic models such as PixelCNN, our method i) models the image distribution jointly with learned auxiliary representations instead of exclusively modeling the image distribution in RGB space, and ii) only requires three forward passes to predict all pixel probabilities instead of one for each pixel. As a result, L3C obtains over two orders of magnitude speedups when sampling compared to the fastest PixelCNN variant (Multiscale-PixelCNN). Furthermore, we find that learning the auxiliary representation is crucial and outperforms predefined auxiliary representations such as an RGB pyramid significantly.
We propose a novel joint lossy image and residual compression framework for learning $\ell_\infty$-constrained near-lossless image compression. Specifically, we obtain a lossy reconstruction of the raw image through lossy image compression and uniformly quantize the corresponding residual to satisfy a given tight $\ell_\infty$ error bound. When the error bound is zero, i.e., lossless image compression, we formulate the joint optimization problem of compressing both the lossy image and the original residual in terms of variational auto-encoders and solve it with end-to-end training. To achieve scalable compression with an error bound larger than zero, we derive the probability model of the quantized residual by quantizing the learned probability model of the original residual, instead of training multiple networks. We further correct the bias of the derived probability model caused by the context mismatch between training and inference. Finally, the quantized residual is encoded according to the bias-corrected probability model and is concatenated with the bitstream of the compressed lossy image. Experimental results demonstrate that our near-lossless codec achieves state-of-the-art performance for lossless and near-lossless image compression, and achieves competitive PSNR with a much smaller $\ell_\infty$ error compared with lossy image codecs at high bit rates.
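The error-bound mechanism itself is simple to state. The following minimal NumPy sketch (the learned codec and probability modeling are omitted; the data and variable names are illustrative) uniformly quantizes a residual with bin width 2*tau + 1, which guarantees a reconstruction error of at most tau:

    import numpy as np

    def quantize_residual(residual, tau):
        # Round each residual to the center of a bin of width 2*tau + 1;
        # the quantization error is then at most tau, and tau = 0 is lossless.
        step = 2 * tau + 1
        return np.sign(residual) * ((np.abs(residual) + tau) // step) * step

    rng = np.random.default_rng(1)
    raw = rng.integers(0, 256, size=1000)
    lossy = np.clip(raw + rng.integers(-8, 9, size=1000), 0, 255)  # stand-in lossy reconstruction
    residual = raw - lossy
    for tau in (0, 1, 2, 4):
        q = quantize_residual(residual, tau)
        assert np.abs(residual - q).max() <= tau  # the l_inf guarantee
        print(tau, np.unique(q).size)             # fewer symbols => cheaper to entropy-code

A coarser tau collapses more residual values onto shared bins, which is what makes the quantized residual cheaper to encode under the derived probability model.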
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based Bits-Back with ANS algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.