
Classifying Complex Faraday Spectra with Convolutional Neural Networks

Published by: Shea Brown
Publication date: 2017
Research field: Physics
Paper language: English





Advances in radio spectro-polarimetry offer the possibility to disentangle complex regions where relativistic and thermal plasmas mix in the interstellar and intergalactic media. Recent work has shown that apparently simple Faraday Rotation Measure (RM) spectra can be generated by complex sources. This is true even when the distribution of RMs in the complex source greatly exceeds the errors associated with a single-component fit to the peak of the Faraday spectrum. We present a convolutional neural network (CNN) that can differentiate between simple Faraday thin spectra and those that contain multiple or Faraday thick sources. We demonstrate that this CNN, trained for the upcoming Polarisation Sky Survey of the Universe's Magnetism (POSSUM) early science observations, can identify two-component sources 99% of the time, provided that the sources are separated in Faraday depth by $>$10% of the FWHM of the Faraday Point Spread Function, the polarized flux ratio of the sources is $>$0.1, and the signal-to-noise ratio (S/N) of the primary component is $>$5. With this S/N cut-off, the false positive rate (simple sources mis-classified as complex) is $<$0.3%. Work is ongoing to include Faraday thick sources in the training and testing of the CNN.
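The complex spectra the CNN classifies arise from the Faraday rotation relation $P(\lambda^2) = p\,e^{2i(\chi_0 + \mathrm{RM}\,\lambda^2)}$ summed over components. A minimal sketch of how such two-component training spectra could be simulated is below; the band edges, channel count, and component parameters are illustrative assumptions, not the POSSUM survey specification.

```python
import numpy as np

def faraday_spectrum(freqs_hz, components, noise=0.0, rng=None):
    """Complex polarization P(lambda^2) for a sum of Faraday-thin components.

    components: list of (flux, rm_rad_m2, chi0_rad) tuples.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam2 = (3.0e8 / freqs_hz) ** 2          # wavelength squared [m^2]
    p = np.zeros_like(lam2, dtype=complex)
    for flux, rm, chi0 in components:
        p += flux * np.exp(2j * (chi0 + rm * lam2))
    if noise > 0:
        p += noise * (rng.standard_normal(lam2.size)
                      + 1j * rng.standard_normal(lam2.size))
    return p

# Assumed band and channelization for illustration only
freqs = np.linspace(800e6, 1088e6, 288)
# Primary component plus a secondary at 10% of its polarized flux
spec = faraday_spectrum(freqs, [(1.0, 20.0, 0.0), (0.1, 35.0, 0.5)],
                        noise=0.02)

# Stack real/imag parts as a two-channel 1D input for a CNN
x = np.stack([spec.real, spec.imag])
```

Stacking Stokes Q and U (real and imaginary parts) as separate channels lets a 1D CNN see the full complex behaviour rather than only the polarized amplitude.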


Read also

We present a novel application of partial convolutional neural networks (PCNN) that can inpaint masked images of the cosmic microwave background. The network can reconstruct both the maps and the power spectra to a few percent for circular and irregularly shaped masks covering up to ~10% of the image area. By performing a Kolmogorov-Smirnov test we show that the reconstructed maps and power spectra are indistinguishable from the input maps and power spectra at the 99.9% level. Moreover, we show that PCNNs can inpaint maps with regular and irregular masks to the same accuracy. This should be particularly beneficial to inpaint irregular masks for the CMB that come from astrophysical sources such as galactic foregrounds. The proof of concept application shown in this paper shows that PCNNs can be an important tool in data analysis pipelines in cosmology.
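The distinguishing feature of a partial convolution is that masked pixels are excluded from each window, the output is renormalized by the fraction of valid pixels, and the mask shrinks layer by layer. A single-channel sketch of that rule (naive loops, valid padding, no deep-learning framework) is:

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """One partial-convolution step: only unmasked pixels contribute,
    the response is rescaled by the valid-pixel fraction, and the mask
    is updated to mark windows that saw at least one valid pixel."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            xw = x[i:i + kh, j:j + kw]
            mw = mask[i:i + kh, j:j + kw]
            valid = mw.sum()
            if valid > 0:
                out[i, j] = (kernel * xw * mw).sum() * (kh * kw / valid)
                new_mask[i, j] = 1.0
    return out, new_mask

img = np.random.default_rng(1).standard_normal((8, 8))
mask = np.ones((8, 8))
mask[2:5, 2:5] = 0.0                          # square hole to inpaint
y, m = partial_conv2d(img, mask, np.ones((3, 3)) / 9.0)
```

Stacking such layers lets valid information propagate inward until the hole is filled, which is why the same network handles regular and irregular masks alike.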
In preparation for ESA's Euclid mission and the large amount of data it will produce, we train deep convolutional neural networks on Euclid simulations to classify solar system objects from other astronomical sources. Using transfer learning we are able to achieve good performance despite our tiny dataset of as few as 7512 images. Our best model correctly identifies objects with a top accuracy of 94%, improving to 96% when Euclid's dither information is included. The neural network misses ~50% of the slowest-moving asteroids (v < 10 arcsec/h) but is otherwise able to correctly classify asteroids even down to 26 mag. We show that the same model also performs well at classifying stars, galaxies and cosmic rays, and could potentially be applied to distinguish all types of objects in the Euclid data and other large optical surveys.
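The transfer-learning recipe used for small datasets like this one is to freeze a pretrained feature extractor and train only a small classification head. A toy sketch of that pattern, with a fixed random projection standing in for the frozen backbone (an assumption purely for illustration, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a frozen pretrained backbone: fixed random projection + ReLU.
W_frozen = rng.standard_normal((64, 16))

def backbone(x):
    """Frozen feature extractor: weights are never updated."""
    return np.maximum(x @ W_frozen, 0.0)

# Tiny labelled set, mimicking the small-dataset regime
X = rng.standard_normal((200, 64))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(float)

# Train only a logistic-regression head on the (standardized) frozen features
F = backbone(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.1 * F.T @ (p - y) / len(y)      # gradient of the logistic loss
    b -= 0.1 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
```

Because only the 17 head parameters are fit, a few thousand labelled images can suffice where training a full CNN from scratch would overfit badly.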
We use convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to estimate the parameters of strong gravitational lenses from interferometric observations. We explore multiple strategies and find that the best results are obtained when the effects of the dirty beam are first removed from the images with a deconvolution performed with an RNN-based structure before estimating the parameters. For this purpose, we use the recurrent inference machine (RIM) introduced in Putzky & Welling (2017). This provides a fast and automated alternative to the traditional CLEAN algorithm. We obtain the uncertainties of the estimated parameters using variational inference with Bernoulli distributions. We test the performance of the networks with a simulated test dataset as well as with five ALMA observations of strong lenses. For the observed ALMA data we compare our estimates with values obtained from a maximum-likelihood lens modeling method which operates in the visibility space and find consistent results. We show that we can estimate the lensing parameters with high accuracy using a combination of an RNN structure performing image deconvolution and a CNN performing lensing analysis, with uncertainties less than a factor of two higher than those achieved with maximum-likelihood methods. Including the deconvolution procedure performed by RIM, a single evaluation can be done in about a second on a single GPU, providing a more than six orders of magnitude increase in analysis speed while using about eight orders of magnitude less computational resources compared to maximum-likelihood lens modeling in the uv-plane. We conclude that this is a promising method for the analysis of mm and cm interferometric data from current facilities (e.g., ALMA, JVLA) and future large interferometric observatories (e.g., SKA), where an analysis in the uv-plane could be difficult or unfeasible.
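The "dirty beam" the RIM must remove comes from incomplete uv coverage: an interferometer samples the sky's Fourier transform only along its baseline tracks, so inverting those samples convolves the sky with the transform of the sampling pattern. A minimal sketch of that forward model (a random boolean mask standing in for real uv tracks, an assumption for illustration):

```python
import numpy as np

def dirty_image(sky, uv_mask):
    """Sample the sky's Fourier transform on the uv coverage and invert
    the incomplete transform; the result is the sky convolved with the
    dirty beam (the inverse transform of the coverage mask)."""
    vis = np.fft.fft2(sky) * uv_mask
    return np.fft.ifft2(vis).real

rng = np.random.default_rng(2)
sky = np.zeros((64, 64))
sky[30, 30] = 1.0                          # single point source
uv_mask = rng.random((64, 64)) < 0.3       # ~30% uv coverage (toy)
dirty = dirty_image(sky, uv_mask)
# For a point source, the dirty image is the dirty beam centred on it,
# complete with sidelobes that CLEAN or a RIM must deconvolve.
```

Deconvolution is ill-posed because the unsampled visibilities are simply lost; CLEAN, RIM, and maximum-likelihood uv-plane modeling are three different ways of regularizing that inversion.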
Vetting of exoplanet candidates in transit surveys is a manual process, which suffers from a large number of false positives and a lack of consistency. Previous work has shown that Convolutional Neural Networks (CNN) provide an efficient solution to these problems. Here, we apply a CNN to classify planet candidates from the Next Generation Transit Survey (NGTS). For training datasets we compare both real data with injected planetary transits and fully-simulated data, as well as how their different compositions affect network performance. We show that fewer hand labelled lightcurves can be utilised, while still achieving competitive results. With our best model, we achieve an AUC (area under the curve) score of $(95.6\pm0.2)\%$ and an accuracy of $(88.5\pm0.3)\%$ on our unseen test data, as well as $(76.5\pm0.4)\%$ and $(74.6\pm1.1)\%$ in comparison to our existing manual classifications. The neural network recovers 13 out of 14 confirmed planets observed by NGTS, with high probability. We use simulated data to show that the overall network performance is resilient to mislabelling of the training dataset, a problem that might arise due to unidentified, low signal-to-noise transits. Using a CNN, the time required for vetting can be reduced by half, while still recovering the vast majority of manually flagged candidates. In addition, we identify many new candidates with high probabilities which were not flagged by human vetters.
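The AUC metric quoted above has a useful probabilistic reading: it is the probability that a randomly chosen true candidate receives a higher network score than a randomly chosen false positive. A small sketch computing it directly from that definition (equivalent to the Mann-Whitney U statistic):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve: the probability that a random positive
    outscores a random negative, with ties counted as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy vetting scores: 1 = real transit, 0 = false positive
y = [1, 1, 0, 0, 1, 0]
p = [0.9, 0.8, 0.3, 0.4, 0.6, 0.7]
score = auc(y, p)
```

Unlike plain accuracy, this measure is insensitive to the classification threshold and to class imbalance, which is why transit-vetting papers report both.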
We present a novel method of classifying Type Ia supernovae using convolutional neural networks, a neural network framework typically used for image recognition. Our model is trained on photometric information only, eliminating the need for accurate redshift data. Photometric data is pre-processed via 2D Gaussian process regression into two-dimensional images created from flux values at each location in wavelength-time space. These flux heatmaps of each supernova detection, along with uncertainty heatmaps of the Gaussian process uncertainty, constitute the dataset for our model. This preprocessing step not only smooths over irregular sampling rates between filters but also allows SCONE to be independent of the filter set on which it was trained. Our model has achieved impressive performance without redshift on the in-distribution SNIa classification problem: $99.73 \pm 0.26\%$ test accuracy with no over/underfitting on a subset of supernovae from PLAsTiCC's unblinded test dataset. We have also achieved $98.18 \pm 0.3\%$ test accuracy performing 6-way classification of supernovae by type. The out-of-distribution performance does not fully match the in-distribution results, suggesting that the detailed characteristics of the training sample in comparison to the test sample have a big impact on the performance. We discuss the implications and directions for future work. All of the data processing and model code developed for this paper can be found in the SCONE software package located at github.com/helenqu/scone.
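The Gaussian-process preprocessing step interpolates irregularly sampled fluxes onto a regular grid, producing one axis of the flux heatmap and, for free, the matching uncertainty map from the posterior variance. A one-dimensional sketch of that regression (RBF kernel, numpy only; the kernel hyperparameters and toy light curve are assumptions, and SCONE's actual preprocessing is 2D over wavelength and time):

```python
import numpy as np

def gp_predict(t_obs, f_obs, t_grid, length=5.0, amp=1.0, noise=0.1):
    """RBF-kernel GP regression: posterior mean and variance of the flux
    on a regular time grid, given irregularly sampled observations."""
    def k(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2
                               / length**2)
    K = k(t_obs, t_obs) + noise**2 * np.eye(t_obs.size)
    Ks = k(t_grid, t_obs)
    mean = Ks @ np.linalg.solve(K, f_obs)
    var = amp**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 50, 20))         # irregular observation epochs
f = np.exp(-0.5 * ((t - 25) / 8)**2)        # smooth toy transient
mean, var = gp_predict(t, f, np.linspace(0, 50, 100))
```

Repeating this per filter and stacking the rows yields the flux heatmap, while `var` supplies the companion uncertainty heatmap; because the grid, not the filter set, defines the image shape, the classifier becomes filter-set independent.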