
Comparison of classical and Bayesian imaging in radio interferometry

Added by Philipp Arras
Publication date: 2020
Language: English





CLEAN, the commonly employed imaging algorithm in radio interferometry, suffers from a number of shortcomings: in its basic version it does not have the concept of diffuse flux, and the common practice of convolving the CLEAN components with the CLEAN beam erases the potential for super-resolution; it does not output uncertainty information; it produces images with unphysical negative flux regions; and its results are highly dependent on the so-called weighting scheme as well as on any human choice of CLEAN masks used to guide the imaging. Here, we present the Bayesian imaging algorithm resolve, which solves the above problems and naturally leads to super-resolution. We take a VLA observation of Cygnus A at four different frequencies and image it with single-scale CLEAN, multi-scale CLEAN and resolve. Alongside the sky brightness distribution, resolve estimates a baseline-dependent correction function for the noise budget, the Bayesian equivalent of weighting schemes. We report noise correction factors between 0.4 and 429. The enhancements achieved by resolve come at the cost of higher computational effort.
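To make the CLEAN baseline concrete, here is a minimal Högbom-style CLEAN loop: repeatedly find the brightest residual pixel, subtract a scaled, shifted copy of the point-spread function, and record the component. This is a toy sketch of the classical single-scale algorithm the abstract contrasts with resolve; the gain, iteration cap and threshold values are illustrative, not taken from the paper.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=100, threshold=1e-3):
    """Minimal Hogbom CLEAN: peak finding plus PSF subtraction.
    `psf` is assumed to have the same shape as `dirty`, peak at centre."""
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    for _ in range(niter):
        py, px = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[py, px]
        if peak < threshold:
            break
        flux = gain * peak
        components[py, px] += flux
        # subtract the PSF shifted to the peak, over the overlap region
        dy, dx = py - cy, px - cx
        ys = slice(max(0, dy), min(residual.shape[0], psf.shape[0] + dy))
        xs = slice(max(0, dx), min(residual.shape[1], psf.shape[1] + dx))
        pys = slice(max(0, -dy), max(0, -dy) + (ys.stop - ys.start))
        pxs = slice(max(0, -dx), max(0, -dx) + (xs.stop - xs.start))
        residual[ys, xs] -= flux * psf[pys, pxs]
    return components, residual
```

The per-pixel delta components produced here are exactly what lacks a concept of diffuse flux; multi-scale CLEAN and resolve address this differently.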



Related research


In radio interferometry imaging, the gridding procedure of convolving visibilities with a chosen gridding function is necessary to transform visibility values into uniformly sampled grid points. We propose here a parameterised family of least-misfit gridding functions which minimise an upper bound on the difference between the DFT and FFT dirty images for a given gridding support width and image cropping ratio. When compared with the widely used spheroidal function with similar parameters, these provide more than 100 times better alias suppression and RMS misfit reduction over the usable dirty map. We discuss how appropriate parameter selection and tabulation of these functions allow for a balance between accuracy, computational cost and storage size. Although it is possible to reduce the errors introduced in the gridding or degridding process to the level of machine precision, accuracy comparable to that achieved by CASA requires only a lookup table with 300 entries and a support width of 3, allowing for a greatly reduced computation cost for a given performance.
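The gridding step described above can be sketched in a few lines: each visibility sample is convolved onto a regular grid with a compact kernel before the FFT. The Gaussian kernel and `beta` parameter below are illustrative stand-ins, not the least-misfit or spheroidal functions the abstract analyses.

```python
import numpy as np

def grid_visibilities(u, v, vis, n, cell, support=3, beta=6.0):
    """Convolve visibility samples onto an n x n grid with a separable
    Gaussian kernel of half-width `support` cells (illustrative choice)."""
    grid = np.zeros((n, n), dtype=complex)
    for uu, vv, val in zip(u, v, vis):
        # continuous grid coordinates of this sample
        gu, gv = uu / cell + n / 2, vv / cell + n / 2
        for j in range(int(gv) - support, int(gv) + support + 1):
            for i in range(int(gu) - support, int(gu) + support + 1):
                if 0 <= i < n and 0 <= j < n:
                    r2 = (i - gu) ** 2 + (j - gv) ** 2
                    grid[j, i] += val * np.exp(-beta * r2 / support**2)
    return grid
```

The dirty image is then the (inverse) FFT of this grid, divided by the image-domain taper of the kernel; the abstract's point is that the kernel choice controls how far this FFT result deviates from the exact DFT dirty image.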
Y. Wiaux, G. Puy, Y. Boursier (2009)
We consider the probe of astrophysical signals through radio interferometers with small field of view and baselines with non-negligible and constant component in the pointing direction. In this context, the visibilities measured essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. In light of the recent theory of compressed sensing and in the perspective of defining the best possible imaging techniques for sparse signals, we analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary. Our results rely both on theoretical considerations related to the mutual coherence between the sparsity and sensing dictionaries, as well as on numerical simulations.
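The mutual coherence the abstract invokes is simply the largest absolute inner product between unit-normalised columns of the sensing and sparsity dictionaries; a small sketch (the Fourier-versus-Dirac pair below is a standard illustrative example, not the chirp-modulated operator of the paper):

```python
import numpy as np

def mutual_coherence(phi, psi):
    """Max absolute inner product between unit-normalised columns of a
    sensing matrix `phi` and a sparsity dictionary `psi`."""
    phi_n = phi / np.linalg.norm(phi, axis=0, keepdims=True)
    psi_n = psi / np.linalg.norm(psi, axis=0, keepdims=True)
    return np.max(np.abs(phi_n.conj().T @ psi_n))
```

For an n-point DFT sensing matrix against the Dirac (identity) dictionary this evaluates to 1/sqrt(n), the maximally incoherent case; the spread spectrum argument is that chirp modulation pushes coherence toward this floor regardless of the sparsity dictionary.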
With the development of modern radio interferometers, wide-field continuum surveys have been planned and undertaken, for which accurate wide-field imaging methods are essential. Based on the widely-used W-stacking method, we propose a new wide-field imaging algorithm that can synthesize visibility data from a model of the sky brightness via degridding, and can construct dirty maps from measured visibility data via gridding. The results carry the smallest approximation error yet achieved relative to the exact calculation involving the direct Fourier transform. In contrast to the original W-stacking method, the new algorithm performs least-misfit optimal gridding (and degridding) in all three directions, and is capable of achieving much higher accuracy than is feasible with the original algorithm. In particular, accuracy at the level of single precision arithmetic is readily achieved by choosing a least-misfit convolution function of width W=7 and an image cropping parameter of x_0=0.25. If the accuracy required is only that attained by the original W-stacking method, the computational cost for both the gridding and FFT steps can be substantially reduced using the proposed method by making an appropriate choice of the width and image cropping parameters.
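The "third direction" handled by W-stacking is the w-term: for each w-plane, the image is multiplied by a direction-dependent phase screen before the 2-D FFT. A minimal sketch of that screen (the sign convention is an assumption; implementations differ):

```python
import numpy as np

def w_phase_screen(n_pix, cell_rad, w):
    """Phase screen exp(-2*pi*i*w*(sqrt(1 - l^2 - m^2) - 1)) over an
    n_pix x n_pix image with pixel size `cell_rad` (radians)."""
    l = (np.arange(n_pix) - n_pix // 2) * cell_rad
    ll, mm = np.meshgrid(l, l)
    n_dir = np.sqrt(np.maximum(0.0, 1.0 - ll**2 - mm**2))
    return np.exp(-2j * np.pi * w * (n_dir - 1.0))
```

The screen is identically 1 at the phase centre (l = m = 0) and for w = 0; the abstract's contribution is to apply least-misfit gridding along w as well, rather than nearest-plane assignment.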
Aims: We describe MS-MFS, a multi-scale multi-frequency deconvolution algorithm for wide-band synthesis imaging, and present imaging results that illustrate the capabilities of the algorithm and the conditions under which it is feasible and gives accurate results. Methods: The MS-MFS algorithm models the wide-band sky-brightness distribution as a linear combination of spatial and spectral basis functions, and performs image reconstruction by combining a linear least-squares approach with iterative $\chi^2$ minimization. This method extends and combines the ideas used in the MS-CLEAN and MF-CLEAN algorithms for multi-scale and multi-frequency deconvolution respectively, and can be used in conjunction with existing wide-field imaging algorithms. We also discuss a simpler hybrid of spectral-line and continuum imaging methods and point out situations where it may suffice. Results: We show via simulations and application to multi-frequency VLA data and wideband EVLA data that it is possible to reconstruct both the spatial and the spectral structure of compact and extended emission at the continuum sensitivity level and at the angular resolution allowed by the highest sampled frequency.
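The spectral half of the MS-MFS model is a Taylor polynomial in fractional frequency offset, fitted per pixel by linear least squares. A toy image-domain version of that fit (MS-MFS itself performs this jointly with the multi-scale minor cycle on residual images; function and parameter names here are illustrative):

```python
import numpy as np

def fit_spectral_taylor(freqs, ref_freq, images, nterms=2):
    """Per-pixel least-squares fit of I(nu) = sum_t c_t * x**t,
    x = (nu - ref_freq) / ref_freq, over a (nchan, ny, nx) image cube."""
    x = (np.asarray(freqs) - ref_freq) / ref_freq
    A = np.vander(x, nterms, increasing=True)   # (nchan, nterms) design
    nchan, ny, nx = images.shape
    y = images.reshape(nchan, -1)               # one column per pixel
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs.reshape(nterms, ny, nx)       # Taylor-coefficient maps
```

The zeroth-order map is the reference-frequency intensity and the first-order map encodes the spectral slope, which is how wide-band data yields both spatial and spectral structure.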
Demographic studies of cosmic populations must contend with measurement errors and selection effects. We survey some of the key ideas astronomers have developed to deal with these complications, in the context of galaxy surveys and the literature on corrections for Malmquist and Eddington bias. From the perspective of modern statistics, such corrections arise naturally in the context of multilevel models, particularly in Bayesian treatments of such models: hierarchical Bayesian models. We survey some key lessons from hierarchical Bayesian modeling, including shrinkage estimation, which is closely related to traditional corrections devised by astronomers. We describe a framework for hierarchical Bayesian modeling of cosmic populations, tailored to features of astronomical surveys that are not typical of surveys in other disciplines. This thinned latent marked point process framework accounts for the tie between selection (detection) and measurement in astronomical surveys, treating selection and measurement error effects in a self-consistent manner.
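Shrinkage estimation, the key lesson the abstract highlights, is easiest to see in the normal-normal hierarchical model, where each posterior mean is pulled from the noisy measurement toward the population mean. A minimal sketch, assuming known population parameters `mu0` and `tau` (in practice these are themselves inferred):

```python
import numpy as np

def shrinkage_estimates(y, sigma, mu0, tau):
    """Posterior means in the normal-normal hierarchical model:
    theta_i ~ N(mu0, tau^2), y_i | theta_i ~ N(theta_i, sigma_i^2).
    Each y_i is shrunk toward mu0 by b_i = sigma_i^2/(sigma_i^2+tau^2)."""
    y, sigma = np.asarray(y, float), np.asarray(sigma, float)
    b = sigma**2 / (sigma**2 + tau**2)
    return b * mu0 + (1.0 - b) * y
```

Noisier measurements (large sigma_i) are shrunk more, which is the statistical content of the traditional Malmquist/Eddington-style corrections the abstract relates this to.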
