We describe a compression method for floating-point astronomical images that gives compression ratios of 6--10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would themselves be quantized to the same discrete levels if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
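The quantize-with-dither step described above can be sketched in a few lines. This is a minimal illustration, not the fpack implementation: the step size sigma/q with q = 4, the function names, and the seeded NumPy generator are all assumptions made here for clarity.

```python
import numpy as np

def quantize_with_dither(pixels, sigma, q=4.0, seed=0):
    """Map float pixels onto integer levels of width sigma/q, adding a
    uniform dither in [-0.5, 0.5) before rounding (subtractive dither)."""
    rng = np.random.default_rng(seed)
    step = sigma / q                                 # quantization step size
    dither = rng.uniform(-0.5, 0.5, size=pixels.shape)
    ints = np.round(pixels / step + dither).astype(np.int32)
    return ints, step

def restore(ints, step, seed=0):
    """Invert the quantization, regenerating and subtracting the same
    dither sequence from the seed to avoid a systematic bias."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-0.5, 0.5, size=ints.shape)
    return (ints - dither) * step
```

Because the same seeded dither is subtracted on restoration, the quantization error stays uniformly distributed within half a step, so the mode and median of a smooth background are no longer pinned to the discrete quantization levels.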
We compare a variety of lossless image compression methods on a large sample of astronomical images and show how the compression ratios and speeds of the algorithms are affected by the amount of noise in the images. In the ideal case where the image pixel values have a random Gaussian distribution, the equivalent number of uncompressible noise bits per pixel is given by Nbits = log2(sigma * sqrt(12)) and the lossless compression ratio is given by R = BITPIX / (Nbits + K), where BITPIX is the bit length of the pixel values and K is a measure of the efficiency of the compression algorithm. We perform image compression tests on a large sample of integer astronomical CCD images using the GZIP compression program and using a newer FITS tiled-image compression method that currently supports four compression algorithms: Rice, Hcompress, PLIO, and GZIP. Overall, the Rice compression algorithm strikes the best balance of compression and computational efficiency; it is 2--3 times faster and produces about 1.4 times greater compression than GZIP. The Rice algorithm produces 75%--90% (depending on the amount of noise in the image) as much compression as an ideal algorithm with K = 0. The image compression and uncompression utility programs used in this study (called fpack and funpack) are publicly available from the HEASARC web site. A simple command-line interface may be used to compress or uncompress any FITS image file.
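The two relations above can be written out directly. This is a small sketch of the arithmetic; the function names are mine, not part of fpack.

```python
import math

def noise_bits(sigma):
    """Equivalent number of uncompressible noise bits per pixel for
    Gaussian noise of standard deviation sigma: Nbits = log2(sigma * sqrt(12))."""
    return math.log2(sigma * math.sqrt(12.0))

def lossless_ratio(bitpix, sigma, k=0.0):
    """Predicted lossless compression ratio R = BITPIX / (Nbits + K).
    K = 0 corresponds to an ideal compression algorithm, so larger K
    (a less efficient algorithm) always gives a smaller ratio."""
    return bitpix / (noise_bits(sigma) + k)
```

For example, a 16-bit image with sigma = 10 ADU has Nbits of about 5.1, so even an ideal algorithm (K = 0) can compress it by only a factor of roughly 3.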
Astronomical images from optical photometric surveys are typically contaminated by transient artifacts such as cosmic rays, satellite trails, and scattered light. We have developed and tested an algorithm that removes these artifacts using a deep, artifact-free, static-sky coadd image built up through the median combination of point spread function (PSF) homogenized, overlapping single-epoch images. Transient artifacts are detected and masked in each single-epoch image through comparison with an artifact-free, PSF-matched simulated image that is constructed from the PSF-corrected, model-fitting catalog of the artifact-free coadd image together with the position-dependent PSF model of the single-epoch image. This approach works well not only for cleaning single-epoch images with worse seeing than the PSF-homogenized coadd, but also for the traditionally much more challenging problem of cleaning single-epoch images with better seeing. In addition to masking transient artifacts, we have developed an interpolation approach that uses the local PSF and performs well in removing artifacts whose widths are smaller than the PSF full width at half maximum, including cosmic rays, the peaks of saturated stars, and bleed trails. We have tested this algorithm on Dark Energy Survey Science Verification data and present performance metrics. More generally, our algorithm can be applied to any survey that images the same part of the sky multiple times.
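The core detection step can be illustrated as a thresholded difference between the single-epoch image and the PSF-matched simulated image. This is a schematic sketch only: the threshold, the constant-noise model, and the function name are assumptions, and the actual pipeline is considerably more elaborate.

```python
import numpy as np

def mask_transients(single_epoch, simulated, sigma, nsig=5.0):
    """Flag pixels where the single-epoch image deviates from the
    artifact-free, PSF-matched simulated image by more than nsig
    times the per-pixel noise sigma."""
    residual = single_epoch - simulated
    return np.abs(residual) > nsig * sigma
```

Flagged pixels narrower than the PSF full width at half maximum (cosmic-ray hits, saturated-star peaks, bleed trails) can then be replaced by interpolation weighted with the local PSF, as described above.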
I present a software tool for solving the astrometry of astronomical images. The code emphasizes robustness against failures in matching the sources in the image to a reference catalog, and the stability of the solutions over the field of view (e.g., by using orthogonal polynomials for the fitted transformation). The code was tested on over 50,000 images from various sources, including the Palomar Transient Factory (PTF) and the Zwicky Transient Facility (ZTF). The tested images equally represent low and high Galactic latitude fields and exhibit a failure/bad-solution rate of < 2 x 10^-5. Running on PTF 60-s integration images, and using Gaia DR2 as the reference catalog, the typical two-axes-combined astrometric root-mean-square (RMS) residual is 14 mas at the bright end, presumably limited by astrometric scintillation noise and systematic errors. I discuss the effects of seeing, airmass, and the order of the transformation on the astrometric accuracy. The software, available online, is developed in MATLAB as part of an astronomical image processing environment, and it can also be run as a stand-alone code.
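A distortion fit with orthogonal (Chebyshev) polynomials, of the kind mentioned above, can be sketched in a few lines. This is an illustrative NumPy version under assumed names and a fixed polynomial degree, not the MATLAB tool itself.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_axis_distortion(x, y, dx, deg=3):
    """Least-squares fit of one axis of the astrometric residual dx as a
    2-D Chebyshev polynomial in normalized detector coordinates."""
    xn = 2.0 * (x - x.min()) / np.ptp(x) - 1.0   # map to [-1, 1]
    yn = 2.0 * (y - y.min()) / np.ptp(y) - 1.0
    V = C.chebvander2d(xn, yn, [deg, deg])       # design matrix
    coef, *_ = np.linalg.lstsq(V, dx, rcond=None)
    return coef.reshape(deg + 1, deg + 1), xn, yn

def eval_distortion(coef, xn, yn):
    """Evaluate the fitted polynomial at normalized coordinates."""
    return C.chebval2d(xn, yn, coef)
```

Because the Chebyshev basis is orthogonal over [-1, 1], the normal equations stay well-conditioned at higher polynomial orders than with raw monomials, which helps stabilize the solution across the field of view.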
The Flexible Image Transport System (FITS) standard has been a great boon to astronomy, allowing observatories, scientists, and the public to exchange astronomical information easily. The FITS standard is, however, showing its age. Developed in the late 1970s, the format embodies a number of implementation choices that, while common at the time, now limit its utility with modern data. The authors of the FITS standard could not have anticipated the challenges we face today in astronomical computing. These difficulties include, but are not limited to, the need to handle an expanded range of specialized data product types (data models), to be more conducive to the networked exchange and storage of data, to handle very large datasets, and to capture significantly more complex metadata and data relationships. Some members of the community today find some (or all) of these limitations unworkable and have decided to move ahead with storing data in other formats. This reaction should be taken as a wake-up call to the FITS community to make changes in the FITS standard, or to see its usage fall. In this paper we detail selected important problems that exist within the FITS standard today. It is not our intention to prescribe specific remedies to these issues; rather, we hope to call the attention of the FITS and greater astronomical computing communities to these issues, in the hope of spurring action to address them.
An approach to black hole quantization is proposed wherein it is assumed that quantum coherence is preserved. A consequence of this assumption is that the Penrose diagram describing gravitational collapse shows the same topological structure as flat Minkowski space. After giving our motivations for such a quantization procedure, we formulate the background field approximation, in which particles are divided into hard particles and soft particles. The background space-time metric depends both on the in-states and on the out-states. We present some model calculations and extensive discussions. In particular, we show, in the context of a toy model, that the $S$-matrix describing soft particles in the hard-particle background of a collapsing star is unitary; nevertheless, the spectrum of particles is shown to be approximately thermal. We also conclude that there is an important topological constraint on functional integrals.