
Optimal gridding and degridding in radio interferometry imaging

Published by: Haoyang Ye, PhD
Publication date: 2019
Research field: Physics
Paper language: English





In radio interferometry imaging, the gridding procedure of convolving visibilities with a chosen gridding function is necessary to transform visibility values into uniformly sampled grid points. We propose here a parameterised family of least-misfit gridding functions which minimise an upper bound on the difference between the DFT and FFT dirty images for a given gridding support width and image cropping ratio. When compared with the widely used spheroidal function with similar parameters, these provide more than 100 times better alias suppression and RMS misfit reduction over the usable dirty map. We discuss how appropriate parameter selection and tabulation of these functions allow for a balance between accuracy, computational cost and storage size. Although it is possible to reduce the errors introduced in the gridding or degridding process to the level of machine precision, accuracy comparable to that achieved by CASA requires only a lookup table with 300 entries and a support width of 3, allowing for a greatly reduced computation cost for a given performance.
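As a rough illustration of the gridding step described above, the sketch below convolves visibilities onto a regular grid with a separable, tabulated kernel of support width W = 3 looked up from a 300-entry table. The kernel values, grid size and coordinate scaling are placeholders (a Gaussian stands in for the least-misfit function, whose values are not given here); this is a minimal sketch of the procedure, not the authors' implementation.

```python
import numpy as np

W = 3                # kernel support width in grid cells (as quoted above)
TABLE_SIZE = 300     # lookup-table entries, i.e. 100 samples per cell
N = 256              # grid / dirty-image size (assumed for illustration)

# Placeholder 1-D kernel tabulated on [-W/2, W/2); in practice the tabulated
# least-misfit function would be loaded here instead of a Gaussian.
offsets = np.linspace(-W / 2, W / 2, TABLE_SIZE, endpoint=False)
kernel_table = np.exp(-2.0 * offsets**2)

def kernel(offset):
    """Nearest-entry lookup of the 1-D kernel for a fractional-cell offset."""
    idx = int(np.clip((offset + W / 2) / W * TABLE_SIZE, 0, TABLE_SIZE - 1))
    return kernel_table[idx]

def grid_visibilities(u, v, vis):
    """Convolve visibilities (in grid units) onto an N x N grid with a W-wide kernel."""
    grid_uv = np.zeros((N, N), dtype=complex)
    half = W // 2
    for uk, vk, Vk in zip(u, v, vis):
        gu, gv = uk + N // 2, vk + N // 2          # continuous grid coordinates
        cu, cv = int(round(gu)), int(round(gv))    # nearest grid cell
        for dv in range(-half, half + 1):
            for du in range(-half, half + 1):
                iu, iv = cu + du, cv + dv
                if 0 <= iu < N and 0 <= iv < N:
                    grid_uv[iv, iu] += kernel(iu - gu) * kernel(iv - gv) * Vk
    return grid_uv

# The FFT of grid_uv gives a kernel-tapered dirty image; the final map must
# still be divided by the kernel's Fourier transform and cropped to the
# usable portion set by the image cropping ratio.
```

The table lookup keeps the per-visibility cost at W x W kernel evaluations regardless of how finely the function is tabulated, which is what allows the accuracy/cost/storage trade-off discussed in the abstract.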


Read also

CLEAN, the commonly employed imaging algorithm in radio interferometry, suffers from a number of shortcomings: in its basic version it has no concept of diffuse flux, and the common practice of convolving the CLEAN components with the CLEAN beam erases the potential for super-resolution; it does not output uncertainty information; it produces images with unphysical negative flux regions; and its results depend strongly on the so-called weighting scheme as well as on any human choice of CLEAN masks used to guide the imaging. Here, we present the Bayesian imaging algorithm resolve, which solves the above problems and naturally leads to super-resolution. We take a VLA observation of Cygnus A at four different frequencies and image it with single-scale CLEAN, multi-scale CLEAN and resolve. Alongside the sky brightness distribution, resolve estimates a baseline-dependent correction function for the noise budget, the Bayesian equivalent of weighting schemes. We report noise correction factors between 0.4 and 429. The enhancements achieved by resolve come at the cost of higher computational effort.
Y. Wiaux, G. Puy, Y. Boursier (2009)
We consider the probe of astrophysical signals through radio interferometers with small field of view and baselines with non-negligible and constant component in the pointing direction. In this context, the visibilities measured essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. In light of the recent theory of compressed sensing and in the perspective of defining the best possible imaging techniques for sparse signals, we analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary. Our results rely both on theoretical considerations related to the mutual coherence between the sparsity and sensing dictionaries, as well as on numerical simulations.
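The spread-spectrum effect can be illustrated numerically. The minimal sketch below, assuming a unit-rate linear chirp and a Fourier sparsity dictionary (the worst case for Fourier sensing), compares the mutual coherence between the sensing and sparsity dictionaries with and without the chirp modulation; the size and chirp rate are illustrative choices, not taken from the paper.

```python
import numpy as np

N = 128
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT matrix

def coherence(chirp_rate):
    """Mutual coherence max_ij |<phi_i, psi_j>| for chirp-modulated Fourier sensing."""
    chirp = np.exp(1j * np.pi * chirp_rate * (n - N / 2) ** 2 / N)
    Phi = F @ np.diag(chirp)        # sensing: modulate by the chirp, then Fourier transform
    Psi = F.conj().T                # sparsity dictionary: Fourier atoms (maximally coherent case)
    return np.max(np.abs(Phi @ Psi))

print("no chirp:   mu =", coherence(0.0))   # ~1: sensing and sparsity fully coherent
print("with chirp: mu =", coherence(1.0))   # much smaller: the chirp spreads the spectrum
```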
With the development of modern radio interferometers, wide-field continuum surveys have been planned and undertaken, for which accurate wide-field imaging methods are essential. Based on the widely used W-stacking method, we propose a new wide-field imaging algorithm that can synthesize visibility data from a model of the sky brightness via degridding and construct dirty maps from measured visibility data via gridding, with the smallest approximation error yet achieved relative to the exact calculation by the direct Fourier transform. In contrast to the original W-stacking method, the new algorithm performs least-misfit optimal gridding (and degridding) in all three directions and is capable of achieving much higher accuracy than is feasible with the original algorithm. In particular, accuracy at the level of single-precision arithmetic is readily achieved by choosing a least-misfit convolution function of width W=7 and an image cropping parameter of x_0=0.25. If the accuracy required is only that attained by the original W-stacking method, the computational cost of both the gridding and FFT steps can be substantially reduced with the proposed method by an appropriate choice of the width and image cropping parameters.
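For orientation, the sketch below shows the overall W-stacking structure that the above algorithm builds on: visibilities are binned into w-planes, each plane is gridded and FFT'd, and the result is corrected by the w-dependent phase before being accumulated. Nearest-cell gridding and uniform w-planes are simplifications for brevity; the proposed method instead applies the least-misfit convolution function in u, v and w.

```python
import numpy as np

N, N_w = 256, 8                     # image size and number of w-planes (illustrative)
l = np.fft.fftfreq(N)               # toy direction-cosine grid
L, M = np.meshgrid(l, l)
n_term = np.sqrt(np.maximum(1.0 - L**2 - M**2, 0.0)) - 1.0

def w_stack_dirty_image(u, v, w, vis):
    """Bin visibilities by w, grid and FFT each plane, apply the w-phase, sum."""
    w_edges = np.linspace(w.min(), w.max() + 1e-9, N_w + 1)
    image = np.zeros((N, N))
    for p in range(N_w):
        sel = (w >= w_edges[p]) & (w < w_edges[p + 1])
        if not np.any(sel):
            continue
        w_mid = 0.5 * (w_edges[p] + w_edges[p + 1])
        grid_uv = np.zeros((N, N), dtype=complex)
        iu = np.round(u[sel]).astype(int) % N      # nearest-cell gridding (placeholder
        iv = np.round(v[sel]).astype(int) % N      # for the least-misfit kernel)
        np.add.at(grid_uv, (iv, iu), vis[sel])
        plane = np.fft.fft2(grid_uv)               # per-plane FFT
        image += np.real(plane * np.exp(2j * np.pi * w_mid * n_term))
    return image

# Quick synthetic call with random baselines, just to exercise the loop.
rng = np.random.default_rng(1)
u, v = rng.uniform(-100, 100, 500), rng.uniform(-100, 100, 500)
w, vis = rng.uniform(0, 50, 500), rng.normal(size=500) + 0j
dirty = w_stack_dirty_image(u, v, w, vis)
```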
Aims: We describe MS-MFS, a multi-scale multi-frequency deconvolution algorithm for wide-band synthesis imaging, and present imaging results that illustrate the capabilities of the algorithm and the conditions under which it is feasible and gives accurate results. Methods: The MS-MFS algorithm models the wide-band sky-brightness distribution as a linear combination of spatial and spectral basis functions, and performs image reconstruction by combining a linear least-squares approach with iterative $\chi^2$ minimization. This method extends and combines the ideas used in the MS-CLEAN and MF-CLEAN algorithms for multi-scale and multi-frequency deconvolution respectively, and can be used in conjunction with existing wide-field imaging algorithms. We also discuss a simpler hybrid of spectral-line and continuum imaging methods and point out situations where it may suffice. Results: We show, via simulations and application to multi-frequency VLA data and wideband EVLA data, that it is possible to reconstruct both the spatial and spectral structure of compact and extended emission at the continuum sensitivity level and at the angular resolution allowed by the highest sampled frequency.
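To make the spectral part of such a wide-band model concrete, the sketch below expands one pixel's flux across frequency in Taylor-polynomial basis functions about a reference frequency and fits the coefficients by linear least squares. It deliberately omits the multi-scale spatial basis and the PSF deconvolution (the iterative $\chi^2$ part of MS-MFS); the frequencies and the test spectrum are invented for illustration.

```python
import numpy as np

nu = np.linspace(1.0e9, 2.0e9, 8)        # observing frequencies (illustrative)
nu0 = 1.5e9                               # reference frequency
n_terms = 3                               # number of Taylor terms in the spectral basis

# Design matrix: columns are ((nu - nu0)/nu0)**t for t = 0 .. n_terms-1
x = (nu - nu0) / nu0
A = np.vander(x, n_terms, increasing=True)

# Toy "observed" spectrum for one pixel: a power law I0 * (nu/nu0)**alpha
I0, alpha = 2.0, -0.7
spectrum = I0 * (nu / nu0) ** alpha

# Linear least-squares fit of the Taylor coefficients
coeffs, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
print("Taylor coefficients:", coeffs)
print("recovered flux ~", coeffs[0], " spectral index ~", coeffs[1] / coeffs[0])
```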
We introduce a method for analyzing radio interferometry data which produces maps that are optimal in the Bayesian sense of maximum posterior probability density, given certain prior assumptions. It is similar to maximum entropy techniques, but with an exact accounting of the multiplicity instead of the usual approximation involving Stirling's formula. It also incorporates an Occam factor, automatically limiting the effective amount of detail in the map to that justified by the data. We use Gibbs sampling to determine, to any desired degree of accuracy, the multi-dimensional posterior density distribution. From this we can construct a mean posterior map and other measures of the posterior density, including confidence limits on any well-defined function of the posterior map.
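As a toy illustration of the sampling machinery only (not the paper's entropy-based posterior with its exact multiplicity and Occam factor), the sketch below runs a Gibbs sampler over a two-pixel correlated Gaussian posterior and forms the kind of summaries described above: a posterior-mean map and percentile confidence limits.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])        # assumed posterior mean of the two "pixels"
rho, sigma = 0.8, 0.5            # assumed correlation and per-pixel width

def gibbs(n_samples=20000, burn=1000):
    """Alternately draw each pixel from its conditional Gaussian given the other."""
    x = np.zeros(2)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples + burn):
        x[0] = rng.normal(mu[0] + rho * (x[1] - mu[1]), sigma * np.sqrt(1 - rho**2))
        x[1] = rng.normal(mu[1] + rho * (x[0] - mu[0]), sigma * np.sqrt(1 - rho**2))
        if i >= burn:
            samples[i - burn] = x
    return samples

s = gibbs()
print("posterior mean map:", s.mean(axis=0))
print("68% limits, pixel 0:", np.percentile(s[:, 0], [16, 84]))
```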
