Aims: We describe MS-MFS, a multi-scale multi-frequency deconvolution algorithm for wide-band synthesis imaging, and present imaging results that illustrate the capabilities of the algorithm and the conditions under which it is feasible and gives accurate results. Methods: The MS-MFS algorithm models the wide-band sky-brightness distribution as a linear combination of spatial and spectral basis functions, and performs image reconstruction by combining a linear-least-squares approach with iterative $\chi^2$ minimization. This method extends and combines the ideas used in the MS-CLEAN and MF-CLEAN algorithms for multi-scale and multi-frequency deconvolution, respectively, and can be used in conjunction with existing wide-field imaging algorithms. We also discuss a simpler hybrid of spectral-line and continuum imaging methods and point out situations where it may suffice. Results: We show, via simulations and application to multi-frequency VLA data and wide-band EVLA data, that it is possible to reconstruct both the spatial and spectral structure of compact and extended emission at the continuum sensitivity level and at the angular resolution allowed by the highest sampled frequency.
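As a rough illustration of the spectral part of this model, the sketch below fits a Taylor polynomial in $(\nu-\nu_0)/\nu_0$ to every pixel of an image cube by linear least squares. The function `fit_taylor_terms`, the toy source parameters, and the reference frequency are our own choices, and the sketch deliberately ignores the per-term point-spread-function coupling that the full MS-MFS deconvolution solves for jointly with the multi-scale components.

```python
import numpy as np

def fit_taylor_terms(cube, freqs, ref_freq, nterms=2):
    """Least-squares fit of a Taylor polynomial in (nu - nu0)/nu0 to each
    pixel of a (nchan, ny, nx) image cube. Illustrates only the linear
    spectral model; the full MS-MFS algorithm solves for these coefficients
    during deconvolution, accounting for per-term PSF coupling."""
    w = (freqs - ref_freq) / ref_freq            # (nchan,)
    A = np.vander(w, nterms, increasing=True)    # (nchan, nterms) design matrix
    nchan, ny, nx = cube.shape
    y = cube.reshape(nchan, -1)                  # flatten spatial axes
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs.reshape(nterms, ny, nx)        # Taylor-coefficient images

# Toy usage: a flat-spectrum point source plus a steep-spectrum Gaussian
freqs = np.linspace(1.0e9, 2.0e9, 16)
ny = nx = 64
cube = np.zeros((freqs.size, ny, nx))
yy, xx = np.mgrid[:ny, :nx]
gauss = np.exp(-((xx - 40)**2 + (yy - 40)**2) / 50.0)
for i, nu in enumerate(freqs):
    cube[i, 20, 20] = 1.0                        # spectral index alpha = 0
    cube[i] += gauss * (nu / 1.5e9) ** -0.8      # spectral index alpha = -0.8
tt = fit_taylor_terms(cube, freqs, ref_freq=1.5e9, nterms=2)
# First-order estimate of the spectral index where emission is significant
alpha = np.where(tt[0] > 0.1, tt[1] / np.maximum(tt[0], 1e-12), np.nan)
```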
Imaging in radio astronomy is usually carried out either with a single-dish radio telescope performing a raster scan of a region of the sky or with an interferometer that samples the visibility function of the sky brightness. Mosaic observations are the current standard for imaging large fields of view with an interferometer, and multi-frequency observations are now routinely carried out with both types of telescopes to increase continuum imaging sensitivity and to probe spectral structure. This paper describes an algorithm to combine wideband data from these two types of telescopes in a joint iterative reconstruction scheme that can be applied to spectral-cube or wideband multi-term imaging, for narrow fields of view as well as mosaics. Our results demonstrate the ability to prevent the instabilities and errors that typically arise when wide-band or joint mosaicing algorithms are presented with spatial and spectral structure that is inadequately sampled by the interferometer alone. For comparable noise levels in the single-dish and interferometer data, the numerical behaviour of this algorithm is expected to be similar to that of generating artificial visibilities from single-dish data. However, the implementation discussed here is simpler and more flexible in applying relative data-weighting schemes to match noise levels while preserving flux accuracy; it fits within standard iterative image-reconstruction frameworks, is fully compatible with wide-field and joint-mosaicing gridding algorithms that apply corrections specific to the interferometer data, and may be configured to enable spectral-cube and wideband multi-term deconvolution for single-dish data alone.
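A minimal sketch of the relative-weighting idea, assuming the interferometer is represented by a masked FFT and the single dish by a beam convolution on the same image grid; the function name `joint_residual_image` and the weights `w_int` and `w_sd` are hypothetical, and the actual implementation operates inside a full gridding and deconvolution framework rather than on pre-gridded arrays.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def joint_residual_image(model, vis_obs, uv_mask, sd_image, sd_beam_ft,
                         w_int=1.0, w_sd=1.0):
    """One gradient step of a joint chi^2 combining an interferometer term
    (masked FFT of the model vs. gridded observed visibilities) and a
    single-dish term (model smoothed to the single-dish beam vs. the
    single-dish map). The weights w_int and w_sd stand in for the relative
    data-weighting schemes used to match noise levels."""
    model_ft = fft2(model)
    # Interferometer residual visibilities, imaged back onto the sky grid
    r_int = np.real(ifft2((vis_obs - uv_mask * model_ft) * uv_mask))
    # Single-dish residual, re-smoothed by the single-dish beam (adjoint op)
    sd_pred = np.real(ifft2(sd_beam_ft * model_ft))
    r_sd = np.real(ifft2(fft2(sd_image - sd_pred) * np.conj(sd_beam_ft)))
    return w_int * r_int + w_sd * r_sd   # steepest-descent search direction
```

In this schematic, flux accuracy is preserved because both residual terms are referred to the same sky grid before weighting; choosing `w_int` and `w_sd` from the respective noise variances mimics the noise-matched weighting described above.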
We introduce the Fast Holographic Deconvolution method for analyzing interferometric radio data. Our new method is an extension of A-projection/software-holography/forward-modeling analysis techniques and shares their precision deconvolution and wide-field polarimetry, while being significantly faster than current implementations that use full direction-dependent antenna gains. Using data from the MWA 32-antenna prototype, we demonstrate the effectiveness and precision of our new algorithm. Fast Holographic Deconvolution may be particularly important for upcoming 21 cm cosmology observations of the Epoch of Reionization and Dark Energy, where foreground subtraction is intimately related to the precision of the data reduction.
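The forward-modeling loop alluded to here can be sketched schematically, with the antenna beam used as the (u,v)-domain gridding kernel; the names `grid_with_beam_kernel` and `forward_model_residual` are ours, and the real FHD implementation uses per-baseline, direction- and frequency-dependent beam kernels rather than this single fixed kernel.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def grid_with_beam_kernel(vis, uv_idx, beam_kernel_uv, shape):
    """Grid visibilities by convolving each sample with a (u,v)-domain beam
    kernel (the 'holographic' step); the FFT of this grid is then a
    beam-weighted dirty image. uv_idx holds integer (u, v) grid positions,
    assumed to lie away from the grid edges for this toy."""
    grid = np.zeros(shape, dtype=complex)
    half = beam_kernel_uv.shape[0] // 2
    for v, (iu, iv) in zip(vis, uv_idx):
        grid[iu - half:iu + half + 1, iv - half:iv + half + 1] += v * beam_kernel_uv
    return grid

def forward_model_residual(vis, uv_idx, model_image, beam_kernel_uv):
    """One forward-modeling iteration: predict model visibilities by sampling
    the FFT of the model image at the same (u, v) cells, subtract them from
    the data, and image the residuals through the same holographic gridding."""
    model_uv = fftshift(fft2(fftshift(model_image)))
    model_vis = np.array([model_uv[iu, iv] for iu, iv in uv_idx])
    resid_grid = grid_with_beam_kernel(vis - model_vis, uv_idx,
                                       beam_kernel_uv, model_image.shape)
    return np.real(fftshift(ifft2(fftshift(resid_grid))))
```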
We consider the probing of astrophysical signals with radio interferometers that have a small field of view and baselines with a non-negligible and constant component in the pointing direction. In this context, the measured visibilities essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. In light of the recent theory of compressed sensing, and with the aim of defining the best possible imaging techniques for sparse signals, we analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary. Our results rely both on theoretical considerations related to the mutual coherence between the sparsity and sensing dictionaries and on numerical simulations.
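The measurement model described here amounts to an incomplete Fourier sampling of a chirp-modulated image. The toy operators below (`chirp_operator`, `measure`, `adjoint`, and the chirp scaling) are our own illustration of that model, intended as the forward and adjoint operators one would hand to a generic compressed-sensing solver, not a reproduction of the authors' simulations.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def chirp_operator(n, w):
    """Linear chirp c(l,m) = exp(i*pi*w*(l^2+m^2)) on an n x n grid, a toy
    stand-in for the constant-w phase term; the /n scaling just keeps the
    phase variation resolvable across the field."""
    l = np.arange(n) - n / 2
    ll, mm = np.meshgrid(l, l)
    return np.exp(1j * np.pi * w * (ll**2 + mm**2) / n)

def measure(x, chirp, mask):
    """Spread-spectrum measurement: modulate by the chirp, take the 2-D
    Fourier transform, and keep an incomplete set of coefficients."""
    return fft2(chirp * x)[mask]

def adjoint(y, chirp, mask, shape):
    """Adjoint operator, needed by any CS reconstruction (e.g. ISTA/BPDN)."""
    grid = np.zeros(shape, dtype=complex)
    grid[mask] = y
    return np.conj(chirp) * ifft2(grid) * np.prod(shape)

# Toy usage: 20% Fourier coverage of a chirp-modulated sparse image
rng = np.random.default_rng(0)
n = 128
x = np.zeros((n, n))
x[rng.integers(0, n, 30), rng.integers(0, n, 30)] = 1.0   # sparse sky
mask = rng.random((n, n)) < 0.2                           # incomplete coverage
chirp = chirp_operator(n, w=1.0)
y = measure(x, chirp, mask) + 0.01 * (rng.standard_normal(mask.sum())
                                      + 1j * rng.standard_normal(mask.sum()))
```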
BICEP Array is the newest multi-frequency instrument in the BICEP/Keck Array program. It comprises four 550 mm aperture refractive telescopes observing the polarization of the cosmic microwave background (CMB) at 30/40, 95, 150, and 220/270 GHz with over 30,000 detectors. We present an overview of the receiver, detailing the optics, thermal, mechanical, and magnetic shielding design. BICEP Array follows BICEP3's modular focal plane concept and upgrades to 6-inch wafers to reduce fabrication effort while increasing the detector count per module. The first receiver, at 30/40 GHz, is expected to start observing at the South Pole during the 2019-20 season. By the end of the planned BICEP Array program, we project $\sigma(r) \sim 0.003$, assuming current modeling of polarized Galactic foregrounds and depending on the level of delensing that can be achieved with higher-resolution maps from the South Pole Telescope.
In radio interferometric imaging, the gridding procedure, in which visibilities are convolved with a chosen gridding function, is needed to resample the visibility values onto uniformly spaced grid points. We propose here a parameterised family of least-misfit gridding functions which minimise an upper bound on the difference between the DFT and FFT dirty images for a given gridding support width and image cropping ratio. When compared with the widely used spheroidal function with similar parameters, these provide more than 100 times better alias suppression and RMS misfit reduction over the usable dirty map. We discuss how appropriate parameter selection and tabulation of these functions allow for a balance between accuracy, computational cost, and storage size. Although it is possible to reduce the errors introduced in the gridding or degridding process to the level of machine precision, accuracy comparable to that achieved by CASA requires only a lookup table with 300 entries and a support width of 3, allowing a greatly reduced computational cost for a given performance.
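A toy gridder using a tabulated, separable kernel illustrates the lookup-table scheme: with a support width of 3 and 100 samples per grid cell, the table holds 300 entries. The Gaussian placeholder kernel and all function names below are ours; the least-misfit functions from the paper would simply replace the tabulated values.

```python
import numpy as np

def make_kernel_table(support=3, oversample=100, beta=2.0):
    """Tabulate a separable 1-D gridding kernel at 'oversample' points per
    grid cell (support * oversample = 300 entries for support 3). A Gaussian
    is used as a placeholder for the least-misfit function."""
    x = np.arange(support * oversample) / oversample - support / 2.0
    return np.exp(-beta * x**2), x

def grid_visibilities(vis, u, v, n, du, kernel, kx, support=3):
    """Convolutional gridding of visibilities (u, v in wavelengths) onto an
    n x n grid of cell size du, using the tabulated separable kernel with
    linear interpolation into the lookup table."""
    grid = np.zeros((n, n), dtype=complex)
    for V, uu, vv in zip(vis, u, v):
        pu, pv = uu / du + n // 2, vv / du + n // 2            # pixel coords
        iu0 = int(np.round(pu)) - support // 2
        iv0 = int(np.round(pv)) - support // 2
        for i in range(support):
            cu = np.interp(iu0 + i - pu, kx, kernel, left=0.0, right=0.0)
            for j in range(support):
                cv = np.interp(iv0 + j - pv, kx, kernel, left=0.0, right=0.0)
                grid[(iv0 + j) % n, (iu0 + i) % n] += V * cu * cv
    return grid

# Example: kernel, kx = make_kernel_table()                    # 300-entry table
#          uv_grid = grid_visibilities(vis, u, v, 256, 50.0, kernel, kx)
```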