
Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

Published by: Julien N. Girard
Publication date: 2015
Research field: Physics
Paper language: English

Compressed sensing theory is steadily making its way into the solution of astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we present the theory behind interferometric imaging, sparse representations and convex optimization; second, we illustrate their application with numerical tests of SASIR, an implementation of FISTA, a forward-backward splitting algorithm, hosted in a LOFAR imager. Various tests were conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution by a factor of ~2) on point sources compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range, and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times fewer residuals) of the extended emission than CLEAN. With the advent of large radio telescopes, there is scope for improving classical imaging methods with convex optimization combined with sparse representations.
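As a rough illustration of the forward-backward (FISTA-type) iteration that the abstract refers to, the sketch below solves a toy l1-regularised least-squares problem with a dense random measurement matrix. The operator, dictionary and parameters are placeholders for this demonstration only, not the SASIR implementation or its LOFAR measurement operator.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each entry towards zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, A, lam, n_iter=300):
    """Minimal FISTA sketch for min_x 0.5*||y - A x||^2 + lam*||x||_1.

    A is a dense toy matrix here; in an interferometric imager it would be
    the Fourier sampling (degridding) operator applied to the sky image.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                          # forward (gradient) step
        x_new = soft_threshold(z - grad / L, lam / L)     # backward (proximal) step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)       # Nesterov-type momentum
        x, t = x_new, t_new
    return x

# Toy example: recover a sparse vector from noisy, undersampled random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
A = rng.normal(size=(96, 256)) / np.sqrt(96)
y = A @ x_true + 0.01 * rng.normal(size=96)
x_rec = fista(y, A, lam=0.02)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```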




Read also

We study the impact of the spread spectrum effect in radio interferometry on the quality of image reconstruction. This spread spectrum effect will be induced by the wide field-of-view of forthcoming radio interferometric telescopes. The resulting chirp modulation improves the quality of reconstructed interferometric images by increasing the incoherence of the measurement and sparsity dictionaries. We extend previous studies of this effect to consider the more realistic setting where the chirp modulation varies for each visibility measurement made by the telescope. In these first preliminary results, we show that for this setting the quality of reconstruction improves significantly over the case without chirp modulation and achieves almost the reconstruction quality of the case of maximal, constant chirp modulation.
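A toy one-dimensional illustration of the spread spectrum effect described above: a quadratic-phase chirp (a stand-in for the w-modulation) spreads the Fourier content of a smooth signal, which is what increases the incoherence between the Fourier measurements and the sparsity dictionary. The chirp rate, test signal and occupancy threshold below are arbitrary choices for the demonstration.

```python
import numpy as np

n = 1024
l = np.linspace(-1.0, 1.0, n)                    # toy image coordinate
signal = np.exp(-(l / 0.05) ** 2)                # smooth, compact "sky" feature
w = 300.0                                        # assumed chirp rate (toy value)
chirp = np.exp(1j * np.pi * w * l ** 2)          # 1-D analogue of the w-modulation

def occupied_modes(x, rel_threshold=0.01):
    """Count Fourier modes holding non-negligible energy."""
    s = np.abs(np.fft.fft(x))
    return int(np.sum(s > rel_threshold * s.max()))

print("Fourier modes occupied without chirp:", occupied_modes(signal))
print("Fourier modes occupied with chirp:   ", occupied_modes(signal * chirp))
```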
Next generation radio telescopes, like the Square Kilometre Array, will acquire an unprecedented amount of data for radio astronomy. The development of fast, parallelisable or distributed algorithms for handling such large-scale data sets is of prime importance. Motivated by this, we investigate herein a convex optimisation algorithmic structure, based on primal-dual forward-backward iterations, for solving the radio interferometric imaging problem. It can encompass any convex prior of interest. It allows for the distributed processing of the measured data and introduces further flexibility by employing a probabilistic approach for the selection of the data blocks used at a given iteration. We study the reconstruction performance with respect to the data distribution and we propose the use of nonuniform probabilities for the randomised updates. Our simulations show the feasibility of the randomisation given a limited computing infrastructure as well as important computational advantages when compared to state-of-the-art algorithmic structures.
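The sketch below is not the primal-dual forward-backward algorithm of the abstract above; it only illustrates the randomised block-selection idea in the simpler setting of a proximal-gradient (forward-backward) iteration on an l1-regularised least-squares toy problem. The split of the measurements into blocks, the selection probability and the step size are hypothetical stand-ins for how a distributed imager might partition visibilities.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Hypothetical split of the measurements into J data blocks (A_j, y_j),
# standing in for visibility blocks spread over compute nodes.
n, J, rows = 128, 8, 32
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0
A_blocks = [rng.normal(size=(rows, n)) / np.sqrt(rows * J) for _ in range(J)]
y_blocks = [A @ x_true for A in A_blocks]

p = 0.5        # probability that a given block is used at a given iteration
lam = 1e-3
# Conservative step: 1 / (worst-case Lipschitz constant of the reweighted partial gradient).
L = sum(np.linalg.norm(A, 2) ** 2 for A in A_blocks) / p
step = 1.0 / L

x = np.zeros(n)
for _ in range(1000):
    # Randomised selection: each block participates with probability p; its
    # contribution is reweighted by 1/p so the gradient estimate stays unbiased.
    active = [j for j in range(J) if rng.random() < p]
    grad = np.zeros(n)
    for j in active:
        grad += A_blocks[j].T @ (A_blocks[j] @ x - y_blocks[j]) / p
    x = soft_threshold(x - step * grad, step * lam)   # forward step + l1 prox
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```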
Next-generation radio interferometric telescopes will exhibit non-coplanar baseline configurations and wide fields of view, inducing a w-modulation of the sky image, which in turn induces the spread spectrum effect. We revisit the impact of this effect on imaging quality and study a new algorithmic strategy to deal with the associated operator in the image reconstruction process. In previous studies it has been shown that image recovery in the framework of compressed sensing is improved due to the spread spectrum effect, where the w-modulation can act to increase the incoherence between measurement and sparsifying signal representations. For the purpose of computational efficiency, idealised experiments were performed, where only a constant baseline component w in the pointing direction of the telescope was considered. We extend this analysis to the more realistic setting where the w-component varies for each visibility measurement. Firstly, incorporating varying w-components into imaging algorithms is a computationally demanding task. We propose a variant of the w-projection algorithm for this purpose, which is based on an adaptive sparsification procedure, and incorporate it in compressed sensing imaging methods. This sparse matrix variant of the w-projection algorithm is generic and adapts to the support of each kernel. Consequently, it is applicable for all types of direction-dependent effects. Secondly, we show that for w-modulation with varying w-components, reconstruction quality is significantly improved compared to the setting where there is no w-modulation (i.e. w=0), reaching levels comparable to the quality of a constant, maximal w-component. This finding confirms that one may seek to optimise future telescope configurations to promote large w-components, thus enhancing the spread spectrum effect and consequently the fidelity of image reconstruction.
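To make the kernel-sparsification idea concrete, the sketch below builds a toy w-kernel (the Fourier transform of the chirp for a given w) and keeps only the coefficients needed to retain a fixed fraction of its energy, so the retained support adapts to w. The grid size, field of view and energy threshold are arbitrary stand-ins, not the thresholding rule or implementation of the paper.

```python
import numpy as np

def sparse_w_kernel(w, n=256, fov=0.2, energy_fraction=0.99):
    """Toy w-kernel with an adaptive sparse support.

    The kernel is the 2-D Fourier transform of the chirp
    C(l, m) = exp(i*pi*w*(l^2 + m^2)); coefficients outside the smallest
    set holding `energy_fraction` of the kernel energy are dropped.
    """
    l = np.linspace(-fov / 2, fov / 2, n)
    ll, mm = np.meshgrid(l, l)
    chirp = np.exp(1j * np.pi * w * (ll ** 2 + mm ** 2))
    kernel = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(chirp))) / n ** 2

    mag2 = np.sort(np.abs(kernel).ravel() ** 2)[::-1]
    cutoff = mag2[np.searchsorted(np.cumsum(mag2), energy_fraction * mag2.sum())]
    mask = np.abs(kernel) ** 2 >= cutoff
    return kernel * mask, int(mask.sum())

# The retained support (hence the sparse-matrix cost) grows with |w|.
for w in (0.0, 300.0, 3000.0):
    _, nnz = sparse_w_kernel(w)
    print(f"w = {w:>6.0f}: {nnz} non-zero kernel coefficients out of {256 * 256}")
```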
Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims. Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the compressed sensing (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods. We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results. We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions. Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and recovers more realistic structures of extended sources (in simulated and real LOFAR data sets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.
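The abstracts above all refer to the same underlying inverse problem; written schematically in generic compressed-sensing notation (not the papers' exact symbols or operator factorisation), the measurement model and the sparse recovery problem read as follows.

```latex
% Schematic measurement model: visibilities y are noisy samples of the
% Fourier transform of the sky x, with direction-dependent effects
% (primary beam, w-modulation) folded into the measurement operator.
\[
  y = \Phi x + n, \qquad \Phi = \mathsf{G}\,\mathsf{F}\,\mathsf{D},
\]
% G: gridding/interpolation onto the sampled (u, v, w) points,
% F: Fourier transform, D: image-plane DDE terms (A-/W-projection kernels).
% Sparse recovery then solves a convex problem such as the synthesis form
\[
  \hat{\alpha} = \operatorname*{arg\,min}_{\alpha}\;
  \tfrac{1}{2}\,\lVert y - \Phi \Psi \alpha \rVert_2^2
  + \lambda \lVert \alpha \rVert_1,
  \qquad \hat{x} = \Psi \hat{\alpha},
\]
% which forward-backward / FISTA-type iterations handle with a gradient
% step on the quadratic term and soft-thresholding for the l1 term.
```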
The redshifted 21 cm line of neutral hydrogen is a promising probe of the Epoch of Reionization (EoR). However, its detection requires a thorough understanding and control of the systematic errors. We study two systematic biases observed in the LOFAR EoR residual data after calibration and subtraction of bright discrete foreground sources. The first effect is a suppression of the diffuse foregrounds, which could potentially mean a suppression of the 21 cm signal. The second effect is an excess of noise beyond the thermal noise. The excess noise shows fluctuations on small frequency scales, and hence it cannot be easily removed by foreground removal or avoidance methods. Our analysis suggests that sidelobes of residual sources due to the chromatic point spread function and ionospheric scintillation cannot be the dominant causes of the excess noise. Rather, both the suppression of diffuse foregrounds and the excess noise can occur due to calibration with an incomplete sky model containing predominantly bright discrete sources. We show that calibrating only on bright sources can cause suppression of other signals and introduce an excess noise in the data. The levels of suppression and excess noise depend on the relative flux of sources which are not included in the model with respect to the flux of modeled sources. We discuss possible solutions, such as using only long baselines to calibrate the interferometric gain solutions as well as simultaneous multi-frequency calibration, along with their benefits and shortcomings.
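As a rough numerical illustration of the mechanism described above, the sketch below simulates a small array, corrupts the visibilities of a bright modelled source plus a faint unmodelled component with antenna gains, solves for the gains against the incomplete (bright-only) model using StefCal-style alternating updates, and compares the unmodelled power left in the corrected residuals with the true unmodelled power. The array layout, fluxes, source positions and the absence of thermal noise are arbitrary simplifications; this is not the LOFAR calibration pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ant = 12
ant_uv = rng.uniform(-200.0, 200.0, size=(n_ant, 2))   # toy antenna layout (wavelengths)

def point_source(flux, l, m):
    """Visibility matrix of a point source at direction cosines (l, m)."""
    phase = np.exp(-2j * np.pi * (ant_uv[:, 0] * l + ant_uv[:, 1] * m))
    return flux * np.outer(phase, np.conj(phase))

def stefcal(data, model, n_iter=100):
    """Per-antenna gains g minimising ||data - (g g^H) * model|| (StefCal-style updates)."""
    g = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        z = model * np.conj(g)[None, :]
        g_new = np.sum(np.conj(z) * data, axis=1) / np.sum(np.abs(z) ** 2, axis=1)
        g = 0.5 * (g + g_new)                            # damping for stable convergence
    return g

# Sky: one bright modelled source plus faint unmodelled emission (several weak sources).
model = point_source(10.0, 0.0, 0.0)
unmodelled = sum(point_source(0.3, *rng.uniform(-0.02, 0.02, 2)) for _ in range(10))
np.fill_diagonal(model, 0.0)
np.fill_diagonal(unmodelled, 0.0)

# Corrupt the full sky with "true" antenna gains, then calibrate on the incomplete model only.
g_true = 1.0 + 0.1 * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))
data = np.outer(g_true, np.conj(g_true)) * (model + unmodelled)
g_est = stefcal(data, model)

# Residual after gain correction and model subtraction, compared with the true unmodelled signal:
# part of the unmodelled flux is absorbed into the gain solutions, i.e. suppressed.
corrected = data / np.outer(g_est, np.conj(g_est))
residual = corrected - model
iu = np.triu_indices(n_ant, 1)
retained = np.linalg.norm(residual[iu]) ** 2 / np.linalg.norm(unmodelled[iu]) ** 2
print(f"unmodelled power retained in the residual: {retained:.2f} (1.0 = no suppression)")
```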