
Analyzing interferometric observations of strong gravitational lenses with recurrent and convolutional neural networks

Posted by Warren Morningstar
Publication date: 2018
Research field: Physics
Paper language: English





We use convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to estimate the parameters of strong gravitational lenses from interferometric observations. We explore multiple strategies and find that the best results are obtained when the effects of the dirty beam are first removed from the images with a deconvolution performed with an RNN-based structure before estimating the parameters. For this purpose, we use the recurrent inference machine (RIM) introduced in Putzky & Welling (2017). This provides a fast and automated alternative to the traditional CLEAN algorithm. We obtain the uncertainties of the estimated parameters using variational inference with Bernoulli distributions. We test the performance of the networks with a simulated test dataset as well as with five ALMA observations of strong lenses. For the observed ALMA data we compare our estimates with values obtained from a maximum-likelihood lens modeling method which operates in the visibility space and find consistent results. We show that we can estimate the lensing parameters with high accuracy using a combination of an RNN structure performing image deconvolution and a CNN performing lensing analysis, with uncertainties less than a factor of two higher than those achieved with maximum-likelihood methods. Including the deconvolution procedure performed by RIM, a single evaluation can be done in about a second on a single GPU, providing a more than six orders of magnitude increase in analysis speed while using about eight orders of magnitude less computational resources compared to maximum-likelihood lens modeling in the uv-plane. We conclude that this is a promising method for the analysis of mm and cm interferometric data from current facilities (e.g., ALMA, JVLA) and future large interferometric observatories (e.g., SKA), where an analysis in the uv-plane could be difficult or infeasible.
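The Bernoulli variational inference mentioned above corresponds to Monte Carlo dropout: dropout is kept active at prediction time, and the spread of repeated stochastic forward passes gives the parameter uncertainty. A minimal sketch of that idea, where the linear "network" and the dropout rate are illustrative stand-ins, not the paper's architecture:

```python
import random

def mc_dropout_predict(forward, x, n_samples=200, rate=0.5, rng=None):
    """Run `forward` repeatedly with Bernoulli-masked (inverted-dropout)
    inputs; return the mean prediction and its standard deviation."""
    rng = rng or random.Random(0)
    preds = []
    for _ in range(n_samples):
        # Each unit is kept with probability (1 - rate) and rescaled.
        mask = [1.0 / (1.0 - rate) if rng.random() > rate else 0.0 for _ in x]
        preds.append(forward([xi * mi for xi, mi in zip(x, mask)]))
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var ** 0.5

# Toy "network": a fixed linear readout standing in for the trained CNN head.
weights = [0.2, -0.1, 0.4]
forward = lambda h: sum(w * a for w, a in zip(weights, h))
mu, sigma = mc_dropout_predict(forward, [1.0, 2.0, 3.0])
```

Because the masks are Bernoulli-distributed, `mu` approaches the deterministic prediction while `sigma` reflects the model's predictive scatter.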



Read also

Convolutional Neural Networks (ConvNets) are one of the most promising methods for identifying strong gravitational lens candidates in survey data. We present two ConvNet lens-finders which we have trained with a dataset composed of real galaxies from the Kilo Degree Survey (KiDS) and simulated lensed sources. One ConvNet is trained with single $r$-band galaxy images, hence basing the classification mostly on morphology, while the other is trained on $g$-$r$-$i$ composite images, relying mostly on colours and morphology. We have tested the ConvNet lens-finders on a sample of 21789 Luminous Red Galaxies (LRGs) selected from KiDS and have analyzed and compared the results with our previous ConvNet lens-finder on the same sample. The new lens-finders achieve a higher accuracy and completeness in identifying gravitational lens candidates, especially the single-band ConvNet. Our analysis indicates that this is mainly due to improved simulations of the lensed sources. In particular, the single-band ConvNet can select a sample of lens candidates with $\sim 40\%$ purity, retrieving 3 out of 4 of the confirmed gravitational lenses in the LRG sample. With this particular setup and limited human intervention, it will be possible to retrieve, in future surveys such as Euclid, a sample of lenses exceeding in size the total number of currently known gravitational lenses.
We present a systematic search for wide-separation (Einstein radius >1.5), galaxy-scale strong lenses in the 30 000 sq.deg of the Pan-STARRS 3pi survey on the Northern sky. With long time delays of a few days to weeks, such systems are particularly well suited for catching strongly lensed supernovae with spatially-resolved multiple images and open new perspectives on early-phase supernova spectroscopy and cosmography. We produce a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies with known redshift and velocity dispersion from SDSS. First of all, we compute the photometry of mock lenses in gri bands and apply a simple catalog-level neural network to identify a sample of 1050207 galaxies with similar colors and magnitudes as the mocks. Secondly, we train a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample and obtain sets of 105760 and 12382 lens candidates with scores pCNN>0.5 and >0.9, respectively. Extensive tests show that CNN performances rely heavily on the design of lens simulations and choice of negative examples for training, but little on the network architecture. Finally, we visually inspect all galaxies with pCNN>0.9 to assemble a final set of 330 high-quality newly-discovered lens candidates while recovering 23 published systems. For a subset, SDSS spectroscopy on the lens central regions proves our method correctly identifies lens LRGs at z~0.1-0.7. Five spectra also show robust signatures of high-redshift background sources and Pan-STARRS imaging confirms one of them as a quadruply-imaged red source at z_s = 1.185 strongly lensed by a foreground LRG at z_d = 0.3155. In the future, we expect that the efficient and automated two-step classification method presented in this paper will be applicable to the deeper gri stacks from the LSST with minor adjustments.
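The two-step scheme above (a cheap catalog-level colour-magnitude pre-selection, then a score threshold on the image classifier) can be sketched as follows. The field names, cuts, and the stand-in scoring function are placeholders for illustration, not the survey's actual selection:

```python
def preselect(catalog, g_r_range=(0.8, 2.0), r_max=21.0):
    """Step 1: catalog-level cut on colour and magnitude, mimicking the
    cheap pre-selection that shrinks the sample before CNN scoring."""
    return [src for src in catalog
            if g_r_range[0] <= src["g"] - src["r"] <= g_r_range[1]
            and src["r"] <= r_max]

def classify(candidates, score_fn, threshold=0.9):
    """Step 2: keep only candidates whose classifier score clears the
    threshold (pCNN > 0.9 for the high-purity set)."""
    return [src for src in candidates if score_fn(src) > threshold]

catalog = [
    {"id": 1, "g": 21.5, "r": 20.2},  # red and bright enough: passes step 1
    {"id": 2, "g": 20.1, "r": 19.9},  # too blue: rejected at step 1
    {"id": 3, "g": 23.4, "r": 21.9},  # too faint: rejected at step 1
]
score_fn = lambda src: 0.95 if src["id"] == 1 else 0.1  # stand-in for the CNN
final = classify(preselect(catalog), score_fn)
```

The point of the design is that step 1 is nearly free per object, so the expensive CNN only ever sees the small pre-selected sample.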
The volume of data that will be produced by new-generation surveys requires automatic classification methods to select and analyze sources. Indeed, this is the case for the search for strong gravitational lenses, where the population of the detectable lensed sources is only a very small fraction of the full source population. We apply for the first time a morphological classification method based on a Convolutional Neural Network (CNN) for recognizing strong gravitational lenses in $255$ square degrees of the Kilo Degree Survey (KiDS), one of the current-generation optical wide surveys. The CNN is currently optimized to recognize lenses with Einstein radii $\gtrsim 1.4$ arcsec, about twice the $r$-band seeing in KiDS. In a sample of $21789$ colour-magnitude selected Luminous Red Galaxies (LRG), of which three are known lenses, the CNN retrieves 761 strong-lens candidates and correctly classifies two out of three of the known lenses. The misclassified lens has an Einstein radius below the range on which the algorithm is trained. We down-select the most reliable 56 candidates by a joint visual inspection. This final sample is presented and discussed. A conservative estimate based on our results shows that with our proposed method it should be possible to find $\sim 100$ massive LRG-galaxy lenses at $z \lesssim 0.4$ in KiDS when completed. In the most optimistic scenario this number can grow considerably (to maximally $\sim$2400 lenses), when widening the colour-magnitude selection and training the CNN to recognize smaller image-separation lens systems.
Future large-scale surveys with high resolution imaging will provide us with a few $10^5$ new strong galaxy-scale lenses. These strong lensing systems however will be contained in large data amounts which are beyond the capacity of human experts to visually classify in an unbiased way. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the Strong Lensing challenge organised by the Bologna Lens Factory. It achieved first and third place respectively on the space-based data-set and the ground-based data-set. The goal was to find a fully automated lens finder for ground-based and space-based surveys which minimizes human inspection. We compare the results of our CNN architecture and three new variations (invariant views and residual) on the simulated data of the challenge. Each method has been trained separately 5 times on 17 000 simulated images, cross-validated using 3 000 images and then applied to a 100 000 image test set. We used two different metrics for evaluation, the area under the receiver operating characteristic curve (AUC) score and the recall with no false positive ($\mathrm{Recall}_{\mathrm{0FP}}$). For ground based data our best method achieved an AUC score of $0.977$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.50$. For space-based data our best method achieved an AUC score of $0.940$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.32$. On space-based data adding dihedral invariance to the CNN architecture diminished the overall score but achieved a higher no contamination recall. We found that using committees of 5 CNNs produces the best recall at zero contamination and consistently scores better AUC than a single CNN. We found that for every variation of our CNN lensfinder, we achieve AUC scores close to $1$ within $6\%$.
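Both evaluation metrics quoted above can be computed directly from classifier scores and ground-truth labels. A self-contained sketch of AUC (via the Mann-Whitney rank statistic) and of the recall at zero false positives, i.e. the fraction of true lenses scored strictly above the highest-scoring non-lens; the scores and labels are toy data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def recall_0fp(scores, labels):
    """Recall with no false positives: fraction of positives whose score
    exceeds every negative's score."""
    top_neg = max(s for s, y in zip(scores, labels) if y == 0)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s > top_neg for s in pos) / len(pos)

scores = [0.95, 0.90, 0.40, 0.80, 0.20, 0.10]
labels = [1,    1,    1,    0,    0,    0]
```

A committee of CNNs plugs into the same functions unchanged: average the members' scores per object and evaluate the averaged scores.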
In this paper we develop a new unsupervised machine learning technique comprised of a feature extractor, a convolutional autoencoder (CAE), and a clustering algorithm consisting of a Bayesian Gaussian mixture model (BGM). We apply this technique to visual band space-based simulated imaging data from the Euclid Space Telescope using data from the Strong Gravitational Lenses Finding Challenge. Our technique promisingly captures a variety of lensing features such as Einstein rings with different radii, distorted arc structures, etc., without using predefined labels. After the clustering process, we obtain several classification clusters separated by different visual features which are seen in the images. Our method successfully picks up $\sim$63 percent of lensing images from all lenses in the training set. With the assumed probability proposed in this study, this technique reaches an accuracy of $77.25 \pm 0.48\%$ in binary classification using the training set. Additionally, our unsupervised clustering process can be used as the preliminary classification for future surveys of lenses to efficiently select targets and to speed up the labelling process. As the starting point of the astronomical application using this technique, we not only explore the application to gravitationally lensed systems, but also discuss the limitations and potential future uses of this technique.
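The clustering stage can be illustrated with the E-step of a plain (non-Bayesian, 1-D) Gaussian mixture as a simplified stand-in for the BGM: each extracted latent feature gets a posterior responsibility for each cluster, and hard labels come from the argmax. The component parameters and feature values below are made up for illustration:

```python
import math

def responsibilities(x, comps):
    """E-step of a 1-D Gaussian mixture: posterior probability of each
    component (weight pi, mean mu, std sigma) given a feature value x."""
    dens = [pi * math.exp(-0.5 * ((x - mu) / sg) ** 2)
            / (sg * math.sqrt(2 * math.pi))
            for pi, mu, sg in comps]
    total = sum(dens)
    return [d / total for d in dens]

# Two illustrative clusters in a 1-D latent feature ("lens-like" vs "other").
comps = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
features = [-0.2, 0.3, 4.8, 5.1]
labels = [max(range(len(comps)), key=lambda k: responsibilities(x, comps)[k])
          for x in features]
```

In the full method the features come from the CAE bottleneck rather than a 1-D toy space, and the Bayesian variant additionally infers the number of effective components.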