
The application of compressive sampling to radio astronomy I: Deconvolution

Posted by: Feng Li
Date published: 2011
Research field: Physics
Paper language: English





Compressive sampling (CS) is a new paradigm for sampling, based on the sparsity of signals or of their representations. It is much less restrictive than the Nyquist-Shannon sampling theorem and thus explains and systematises the widespread experience that methods such as Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. The method can reconstruct both point sources and extended sources, using the isotropic undecimated wavelet transform as the basis for the reconstruction step. We compare this CS-based deconvolution method with two CLEAN-based methods: Högbom CLEAN and multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
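To make the mechanics concrete, here is a minimal sketch of this style of sparse deconvolution, assuming the iterative soft-thresholding algorithm (ISTA) with a plain l1 pixel prior standing in for the paper's isotropic undecimated wavelet basis; the names `dirty`, `psf`, `lam` and `n_iter` are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def ista_deconvolve(dirty, psf, lam=1e-3, n_iter=200):
    """Sparse deconvolution sketch: solve
       min_x 0.5 * ||conv(psf, x) - dirty||^2 + lam * ||x||_1
    with ISTA, using an l1 pixel prior in place of a wavelet basis."""
    # Circular convolution with the PSF via FFTs; ifftshift puts the
    # PSF peak at the origin so the convolution does not translate x.
    P = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * P))
    corr = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(P)))

    step = 1.0 / (np.abs(P).max() ** 2)  # 1 / Lipschitz constant of the data term
    x = np.zeros_like(dirty)
    for _ in range(n_iter):
        z = x - step * corr(conv(x) - dirty)                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

Swapping the pixel basis for the IUWT would amount to soft-thresholding the wavelet coefficients of `z` rather than `z` itself; the gradient step is unchanged.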


Read also

Faraday rotation measure (RM) synthesis is an important tool for studying and analysing galactic and extragalactic magnetic fields. Since there is a Fourier relation between the Faraday dispersion function and the polarised radio emission, full reconstruction of the dispersion function requires knowledge of the polarised radio emission at both positive and negative squared wavelengths $\lambda^2$. However, observations are only possible for $\lambda^2 > 0$, and only over a limited range of wavelengths. Reconstructing the Faraday dispersion function from these limited measurements is therefore ill-conditioned. In this paper, we propose three new reconstruction algorithms for RM synthesis based upon compressive sensing/sampling (CS), designed for Faraday thin sources only, thick sources only, and mixed sources respectively. Both visual and numerical results show that the new RM synthesis methods provide superior reconstructions of both magnitude and phase information compared with RM-CLEAN.
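For reference, the Fourier pair the abstract appeals to (Burn 1966; Brentjens & de Bruyn 2005) can be written as

```latex
P(\lambda^2) = \int_{-\infty}^{\infty} F(\phi)\, e^{2i\phi\lambda^2}\, \mathrm{d}\phi ,
\qquad
F(\phi) = \frac{1}{\pi} \int_{-\infty}^{\infty} P(\lambda^2)\, e^{-2i\phi\lambda^2}\, \mathrm{d}\lambda^2 .
```

The inverse transform needs $P(\lambda^2)$ over all $\lambda^2$, including the physically unobservable region $\lambda^2 < 0$, which is exactly why the reconstruction from real measurements is ill-conditioned.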
Sarod Yatawatta, 2010
Deconvolution is essential in radio interferometric imaging to produce science-quality data, because of the finite sampling of the Fourier plane. Most deconvolution algorithms are based on CLEAN, which uses a grid of image pixels, or clean components. A critical matter in this process is the selection of the pixel size for optimal deconvolution results. As a rule of thumb, the pixel size is chosen smaller than the resolution dictated by the interferometer. For images consisting of unresolved (point-like) sources, this approach yields optimal results. However, for sources that are not point-like, in particular for partially resolved sources, the selection of the right pixel size is still an open issue. In this paper, we investigate the limitations of pixelization in deconvolving extended sources. In particular, we pursue the use of orthonormal basis functions to model extended sources, yielding better results than clean components.
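The abstract does not commit to a particular basis, but as an illustration, here is a brief sketch of modelling an extended source with one commonly used orthonormal family, the 2-D Gauss-Hermite (shapelet) modes; the scale `beta` and maximum order `nmax` are hypothetical parameters chosen for the example, not values from the paper.

```python
import math

import numpy as np
from numpy.polynomial.hermite import hermval


def shapelet_1d(x, n, beta):
    """n-th 1-D Gauss-Hermite (shapelet) basis function with scale beta."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = (2.0**n * math.factorial(n) * math.sqrt(math.pi) * beta) ** -0.5
    u = x / beta
    return norm * hermval(u, c) * np.exp(-(u**2) / 2.0)


def fit_shapelets(image, beta=3.0, nmax=6):
    """Least-squares fit of a 2-D image with separable shapelet modes."""
    ny, nx = image.shape
    y = np.arange(ny) - ny / 2.0
    x = np.arange(nx) - nx / 2.0
    # One design-matrix column per (n1, n2) mode, flattened over pixels.
    cols = [np.outer(shapelet_1d(y, n1, beta), shapelet_1d(x, n2, beta)).ravel()
            for n1 in range(nmax + 1) for n2 in range(nmax + 1)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    model = (A @ coeffs).reshape(ny, nx)
    return coeffs, model
```

Because the modes are smooth and orthonormal, a handful of coefficients can capture a partially resolved source that a grid of delta-function clean components represents poorly.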
We present an analysis of the effects of beam deconvolution on noise properties in CMB measurements. The analysis is built around the artDeco beam deconvolver code. We derive a low-resolution noise covariance matrix that describes the residual noise in deconvolution products, both in harmonic and in pixel space. The matrix models the residual correlated noise that remains in time-ordered data after destriping, and the effect of deconvolution on it. To validate the results, we generate noise simulations that mimic the data from the Planck LFI instrument. A $\chi^2$ test for the full 70 GHz covariance in the multipole range $\ell = 0$-$50$ yields a mean reduced $\chi^2$ of 1.0037. We compare two destriping options, full and independent destriping, when deconvolving subsets of the available data. Full destriping leaves substantially less residual noise, but leaves the data sets intercorrelated. We also derive a white-noise covariance matrix that approximates the full noise at high multipoles, and study the properties of high-resolution noise in pixel space through simulations.
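As a sketch of the kind of validation quoted above, a reduced $\chi^2$ of simulated residual noise against a modelled covariance can be computed as follows; `noise_map` and `cov` are illustrative names and not part of the artDeco interface.

```python
import numpy as np


def reduced_chi2(noise_map, cov):
    """Reduced chi^2 of a zero-mean noise vector n against a modelled
    covariance C: (n^T C^{-1} n) / len(n). A mean value near 1 over many
    simulations says the covariance describes the residual noise well."""
    n = np.asarray(noise_map, dtype=float).ravel()
    chi2 = n @ np.linalg.solve(cov, n)  # solve instead of forming C^{-1}
    return chi2 / n.size
```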
Lisa Harvey-Smith, 2012
The concept of a Square Kilometre Array was developed to ensure that progress in radio astronomy in the early 21st century continues at the same impressive pace as was achieved during the first 50 years. The SKA telescope is designed to pave the road to ever greater sensitivity. So what technical challenges does the project face, and what key innovations will drive the success of the SKA? What will the next radio astronomy mega-science project look like? In this article the author discusses the likely avenues of progress in the coming decades and comments on the status of radio astronomy in 2049, the author's 70th (and presumably her retirement) year.
Advances in astronomy are intimately linked to advances in digital signal processing (DSP). This special issue focuses on advances in DSP within radio astronomy. The trend within that community is to use off-the-shelf digital hardware where possible and to leverage advances in high-performance computing. In particular, graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) are being used in place of application-specific integrated circuits (ASICs), and high-speed Ethernet and InfiniBand are being used for interconnect in place of custom backplanes. Further, to lower the hurdles of digital engineering, communities have designed and released general-purpose FPGA-based DSP systems, such as the CASPER ROACH board, the ASTRON Uniboard and the CSIRO Redback board. In this introductory article, we give a brief historical overview, a summary of recent trends, and an outlook on future directions.