
PySE: Software for Extracting Sources from Radio Images

Published by Dario Carbone
Publication date: 2018
Research field: Physics
Paper language: English





PySE is a Python software package for finding and measuring sources in radio telescope images. The software was designed to detect sources in images from the LOFAR telescope, but can be used with images from other radio telescopes as well. We introduce the LOFAR telescope and the context within which PySE was developed, describe the design of PySE, and explain how it is used. Detailed experiments on the validation and testing of PySE are then presented, along with results of performance testing. We discuss some of the current issues with the algorithms implemented in PySE and their interaction with LOFAR images, concluding with the current status of PySE and its future development.
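The abstract does not spell out PySE's interface, but the general approach used by source finders of this kind can be sketched in a few lines: estimate the image noise, label contiguous islands of pixels above an analysis threshold, keep the islands whose peak exceeds a detection threshold, and measure each one. The function name, thresholds, and injected test source below are illustrative assumptions, not PySE's actual API.

```python
# Illustrative threshold-based source finding; NOT PySE's actual interface.
import numpy as np
from scipy import ndimage

def find_sources(image, detection_sigma=5.0, analysis_sigma=3.0):
    """Detect islands of emission above a noise threshold and measure them."""
    # Robust noise estimate from the median absolute deviation of the pixels.
    noise = 1.4826 * np.median(np.abs(image - np.median(image)))
    # Contiguous pixels above the (lower) analysis threshold form islands.
    islands, n_islands = ndimage.label(image > analysis_sigma * noise)
    sources = []
    for label in range(1, n_islands + 1):
        mask = islands == label
        peak = image[mask].max()
        if peak < detection_sigma * noise:   # reject islands below the detection threshold
            continue
        y, x = ndimage.center_of_mass(image, islands, label)
        sources.append({"x": x, "y": y, "peak": peak, "flux": image[mask].sum()})
    return sources

# Example: a noise-only field with one injected Gaussian source.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (256, 256))
yy, xx = np.mgrid[:256, :256]
img += 50.0 * np.exp(-((xx - 128) ** 2 + (yy - 100) ** 2) / (2 * 3.0 ** 2))
print(find_sources(img))
```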


Read also

With the arrival of a number of wide-field snapshot image-plane radio transient surveys, there will be a huge influx of images in the coming years, making it impossible to manually analyse the datasets. Automated pipelines to process the information stored in the images are being developed, such as the LOFAR Transients Pipeline, outputting light curves and various transient parameters. These pipelines have a number of tuneable parameters that require training to meet the survey requirements. This paper utilises both observed and simulated datasets to demonstrate different machine learning strategies that can be used to train these parameters. The datasets used are from LOFAR observations and we process the data using the LOFAR Transients Pipeline; however, the strategies developed are applicable to any light curve datasets at different frequencies and can be adapted to different automated pipelines. These machine learning strategies are publicly available as Python tools that can be downloaded and adapted to different datasets (https://github.com/AntoniaR/TraP_ML_tools).
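As a toy illustration of this kind of training strategy, and not the code in the linked repository, the sketch below fits a simple classifier to two per-source variability features; the feature values, class labels, and simulated data are all assumptions made for this example.

```python
# Toy sketch: learning a decision boundary on light-curve variability features.
# The simulated features and labels below are illustrative, not pipeline outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Simulated "stable" sources: low values on both (log-scaled) variability features.
stable = rng.normal(loc=[0.0, -1.5], scale=0.3, size=(500, 2))
# Simulated "variable/transient" sources: offset to higher variability.
variable = rng.normal(loc=[1.5, -0.3], scale=0.4, size=(100, 2))

X = np.vstack([stable, variable])
y = np.hstack([np.zeros(len(stable)), np.ones(len(variable))])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# The fitted decision boundary plays the role of the tuned pipeline thresholds.
```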
B. Emonts 2019
CASA, the Common Astronomy Software Applications package, is the primary data processing software for the Atacama Large Millimeter/submillimeter Array (ALMA) and NSF's Karl G. Jansky Very Large Array (VLA), and is also frequently used for other radio telescopes. The CASA software can process data from both single-dish and aperture-synthesis telescopes, and one of its core functionalities is to support the data reduction and imaging pipelines for ALMA, VLA and the VLA Sky Survey (VLASS). CASA has recently undergone several exciting new developments, including an increased flexibility in Python (CASA 6), support of Very Long Baseline Interferometry (VLBI), performance gains through parallel imaging, data visualization with the new Cube Analysis Rendering Tool for Astronomy (CARTA), enhanced reliability and testing, and modernized documentation. These proceedings of the 2019 Astronomical Data Analysis Software & Systems (ADASS) conference give an update of the CASA project, and detail how these new developments will enhance the user experience of CASA.
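A minimal sketch of what the CASA 6 modular Python interface looks like in practice is given below; the pip installation route and the specific task arguments shown (vis, imagename, imsize, cell, niter) are stated as assumptions for illustration, and the CASA documentation remains the authoritative reference.

```python
# Minimal sketch of calling CASA 6 tasks from a standard Python session
# (assumes `pip install casatasks`); the argument names shown are assumptions.
from casatasks import listobs, tclean

listobs(vis="my_observation.ms")      # summarize the measurement set
tclean(vis="my_observation.ms",       # deconvolve and image the visibilities
       imagename="my_image",
       imsize=1024,
       cell="1.0arcsec",
       niter=1000)
```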
K. Eckert 2020
For ground-based optical imaging with current CCD technology, the Poisson fluctuations in source and sky background photon arrivals dominate the noise budget and are readily estimated. Another component of noise, however, is the signal from the undetected population of stars and galaxies. Using injection of artificial galaxies into images, we demonstrate that the measured variance of galaxy moments (used for weak gravitational lensing measurements) in Dark Energy Survey (DES) images is significantly in excess of the Poisson predictions, by up to 30%, and that the background sky levels are overestimated by current software. By cross-correlating distinct images of empty sky regions, we establish that there is a significant image noise contribution from undetected static sources (US), which on average are mildly resolved at DES resolution. Treating these US as a stationary noise source, we compute a correction to the moment covariance matrix expected from Poisson noise. The corrected covariance matrix matches the moment variances measured on the injected DES images to within 5%. Thus we have an empirical method to statistically account for US in weak lensing measurements, rather than requiring extremely deep sky simulations. We also find that local sky determinations can remove the bias in flux measurements, at a small penalty in additional, but quantifiable, noise.
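The cross-correlation step can be illustrated with a short simulation: two exposures of the same empty sky patch share the signal from static undetected sources but carry independent per-exposure noise, so the pixel covariance between them isolates the undetected-source variance. All amplitudes, sizes, and the smoothing scale below are made-up illustrative numbers.

```python
# Sketch of isolating undetected-source (US) noise by cross-correlating
# two exposures of the same empty sky patch; numbers are illustrative only.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
shape = (512, 512)

# Static, mildly resolved US field common to both exposures.
us = 10.0 * ndimage.gaussian_filter(rng.normal(0.0, 1.0, shape), sigma=2.0)

# Two exposures: the same US field plus independent per-exposure noise.
img_a = us + rng.normal(0.0, 3.0, shape)
img_b = us + rng.normal(0.0, 3.0, shape)

# Zero-lag cross-correlation (pixel covariance): the independent noise
# averages away, leaving the variance contributed by the US field.
cross_var = np.mean((img_a - img_a.mean()) * (img_b - img_b.mean()))
print("recovered US variance:", cross_var)
print("true US variance:     ", us.var())
```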
Cross-matching catalogues from radio surveys to catalogues of sources at other wavelengths is extremely hard, because radio sources are often extended, often consist of several spatially separated components, and often no radio component is coincident with the optical/infrared host galaxy. Traditionally, the cross-matching is done by eye, but this does not scale to the millions of radio sources expected from the next generation of radio surveys. We present an innovative automated procedure, using Bayesian hypothesis testing, that models trial radio-source morphologies with putative positions of the host galaxy. This new algorithm differs from an earlier version by allowing more complex radio source morphologies, and performing a simultaneous fit over a large field. We show that this technique performs well in an unsupervised mode.
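A stripped-down version of the positional ingredient of such a test, without the morphology modelling described in the paper, is a Bayes factor comparing a "same object" hypothesis (the offset is drawn from the combined Gaussian positional errors) against a "chance alignment" hypothesis (the candidate is uniformly distributed over a search area); the specific numbers below are illustrative.

```python
# Toy positional Bayes factor for matching a radio component to an optical
# candidate host; this omits the morphology modelling used in the paper.
import numpy as np

def positional_bayes_factor(sep, sigma_radio, sigma_optical, search_area):
    """P(offset | same object) / P(offset | chance alignment); angles in arcsec."""
    sigma2 = sigma_radio**2 + sigma_optical**2           # combined positional variance
    p_same = np.exp(-sep**2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
    p_chance = 1.0 / search_area                          # uniform over the search box
    return p_same / p_chance

# Example: 1" separation, 0.5" and 0.3" positional errors, 1 arcmin^2 search area.
print(positional_bayes_factor(1.0, 0.5, 0.3, 3600.0))
```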
The Cherenkov Telescope Array (CTA) will be the world's leading ground-based gamma-ray observatory, allowing us to study very high energy phenomena in the Universe. CTA will produce huge data sets, of the order of petabytes, and the challenge is to find better alternative data analysis methods to the already existing ones. Machine learning algorithms, like deep learning techniques, give encouraging results in this direction. In particular, convolutional neural network methods on images have proven to be effective in pattern recognition and produce data representations which can achieve satisfactory predictions. We test the use of convolutional neural networks to discriminate signal from background images with high rejection factors and to provide reconstruction parameters from gamma-ray events. The networks are trained and evaluated on artificial data sets of images. The results show that neural networks trained with simulated data can be useful to extract gamma-ray information. Such networks would help us to make the best use of large quantities of real data coming in the next decades.
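For readers unfamiliar with the technique, a minimal self-contained sketch of a convolutional classifier for signal/background image separation is shown below; the architecture, image size, and random toy data are assumptions made for illustration and are unrelated to the networks used in the paper.

```python
# Minimal CNN sketch for signal/background image classification (PyTorch).
# Layer sizes and the synthetic data are illustrative assumptions only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),            # background vs. signal logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy training loop on random 32x32 single-channel images with fake labels.
model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))
for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print("final toy loss:", loss.item())
```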