Convolutional Neural Networks (ConvNets) are one of the most promising methods for identifying strong gravitational lens candidates in survey data. We present two ConvNet lens-finders which we have trained with a dataset composed of real galaxies from the Kilo Degree Survey (KiDS) and simulated lensed sources. One ConvNet is trained on single \textit{r}-band galaxy images, hence basing the classification mostly on morphology, while the other is trained on \textit{g-r-i} composite images, relying on both colours and morphology. We have tested the ConvNet lens-finders on a sample of 21789 Luminous Red Galaxies (LRGs) selected from KiDS, and we have analyzed and compared the results with our previous ConvNet lens-finder on the same sample. The new lens-finders achieve a higher accuracy and completeness in identifying gravitational lens candidates, especially the single-band ConvNet. Our analysis indicates that this is mainly due to improved simulations of the lensed sources. In particular, the single-band ConvNet can select a sample of lens candidates with $\sim 40\%$ purity, retrieving 3 out of 4 of the confirmed gravitational lenses in the LRG sample. With this particular setup and limited human intervention, it will be possible to retrieve, in future surveys such as Euclid, a sample of lenses exceeding in size the total number of currently known gravitational lenses.
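As an illustration of the two-ConvNet setup described above, the sketch below defines two binary lens/non-lens classifiers that differ only in the number of input channels: one channel for the single \textit{r}-band images and three for the \textit{g-r-i} composites. This is not the authors' architecture; the layer sizes and the 101x101 cutout size are assumptions chosen only for the example.

```python
# Minimal sketch of a single-band vs. multi-band ConvNet lens-finder.
# The architecture is illustrative, not the one used in the paper.
import torch
import torch.nn as nn

def make_lens_finder(in_channels: int) -> nn.Sequential:
    """Binary lens/non-lens classifier for square image cutouts."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, 1), nn.Sigmoid(),            # lens score in [0, 1]
    )

single_band_net = make_lens_finder(in_channels=1)   # r-band: morphology only
multi_band_net = make_lens_finder(in_channels=3)    # g-r-i: colours + morphology

# Forward pass on a batch of eight hypothetical 101x101 cutouts.
scores = multi_band_net(torch.randn(8, 3, 101, 101))  # shape (8, 1)
```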
We search Dark Energy Survey (DES) Year 3 imaging data for galaxy-galaxy strong gravitational lenses using convolutional neural networks. We generate 250,000 simulated lenses at redshifts > 0.8, from which we create a data set for training the neural networks with realistic seeing, sky and shot noise. Using the simulations as a guide, we build a catalogue of 1.1 million DES sources with 1.8 < g - i < 5, 0.6 < g - r < 3, r_mag > 19, g_mag > 20 and i_mag > 18.2. We train two ensembles of neural networks on training sets consisting of simulated lenses, simulated non-lenses, and real sources. We use the neural networks to score images of each of the sources in our catalogue with a value from 0 to 1, and select those with scores greater than a chosen threshold for visual inspection, resulting in a candidate set of 7,301 galaxies. During visual inspection we rate 84 as probable or definite lenses. Four of these are previously known lenses or lens candidates. We inspect a further 9,428 candidates with a different score threshold, and identify four new candidates. We present 84 new strong lens candidates, selected after a few hours of visual inspection by astronomers. This catalogue contains a number of high-redshift lenses comparable to that predicted by simulations. Based on simulations, we estimate our sample to contain most of the discoverable lenses in this imaging and in this redshift range.
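The catalogue-level colour and magnitude cuts quoted above translate directly into a selection function. The following is a minimal sketch assuming a pandas DataFrame with magnitude columns named g, r and i (the column names are an assumption, not the DES schema).

```python
# Apply the colour-magnitude cuts stated in the abstract to a source catalogue.
import pandas as pd

def select_lens_search_sample(cat: pd.DataFrame) -> pd.DataFrame:
    """Return sources satisfying 1.8 < g-i < 5, 0.6 < g-r < 3,
    r > 19, g > 20 and i > 18.2."""
    g, r, i = cat["g"], cat["r"], cat["i"]
    mask = (
        (g - i > 1.8) & (g - i < 5.0) &
        (g - r > 0.6) & (g - r < 3.0) &
        (r > 19.0) & (g > 20.0) & (i > 18.2)
    )
    return cat[mask]

# e.g. search_sample = select_lens_search_sample(des_sources)
```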
The volume of data that will be produced by new-generation surveys requires automatic classification methods to select and analyze sources. Indeed, this is the case for the search for strong gravitational lenses, where the population of detectable lensed sources is only a very small fraction of the full source population. We apply for the first time a morphological classification method based on a Convolutional Neural Network (CNN) for recognizing strong gravitational lenses in $255$ square degrees of the Kilo Degree Survey (KiDS), one of the current-generation optical wide surveys. The CNN is currently optimized to recognize lenses with Einstein radii $\gtrsim 1.4$ arcsec, about twice the $r$-band seeing in KiDS. In a sample of $21789$ colour-magnitude selected Luminous Red Galaxies (LRGs), of which three are known lenses, the CNN retrieves 761 strong-lens candidates and correctly classifies two out of three of the known lenses. The misclassified lens has an Einstein radius below the range on which the algorithm is trained. We down-select the 56 most reliable candidates by a joint visual inspection. This final sample is presented and discussed. A conservative estimate based on our results shows that with our proposed method it should be possible to find $\sim 100$ massive LRG-galaxy lenses at $z \lesssim 0.4$ in KiDS when completed. In the most optimistic scenario this number can grow considerably (to maximally $\sim 2400$ lenses), when widening the colour-magnitude selection and training the CNN to recognize smaller image-separation lens systems.
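The classification step itself amounts to scoring each LRG cutout with the trained CNN and keeping sources above a chosen score threshold for visual inspection. The sketch below shows this inference loop under assumed inputs (a PyTorch model returning scores in [0, 1] and cutouts stored as a NumPy array); the threshold value is a placeholder, not the one used in the paper.

```python
# Hedged sketch: score cutouts with a trained CNN and threshold the scores.
import numpy as np
import torch

@torch.no_grad()
def score_cutouts(model, cutouts: np.ndarray, batch_size: int = 256) -> np.ndarray:
    """cutouts has shape (N, C, H, W); returns N lens scores in [0, 1]."""
    model.eval()
    scores = []
    for start in range(0, len(cutouts), batch_size):
        batch = torch.from_numpy(cutouts[start:start + batch_size]).float()
        scores.append(model(batch).squeeze(1).numpy())
    return np.concatenate(scores)

# Candidates forwarded to visual inspection (0.5 is a placeholder threshold):
# candidate_idx = np.where(score_cutouts(cnn, lrg_cutouts) > 0.5)[0]
```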
We present a sample of 16 likely strong gravitational lenses identified in the VST Optical Imaging of the CDFS and ES1 fields (VOICE survey) using Convolutional Neural Networks (CNNs). We train two different CNNs on composite images produced by superimposing simulated gravitational arcs on real Luminous Red Galaxies observed in VOICE. Specifically, the first CNN is trained on single-band images and more easily identifies systems with large Einstein radii, while the second one, trained on composite RGB images, is more accurate in retrieving systems with smaller Einstein radii. We apply both networks to real data from the VOICE survey, taking advantage of the high limiting magnitude (26.1 in the r-band) and small PSF FWHM (0.8 arcsec in the r-band) of this deep survey. We analyse $\sim 21,200$ images with $mag_r < 21.5$, identifying 257 lens candidates. To retrieve a high-confidence sample and to assess the accuracy of our technique, nine of the authors perform a visual inspection. Roughly 75% of the systems are classified as likely lenses by at least one of the authors. Finally, we assemble the LIVE sample (Lenses In VoicE), composed of the 16 systems passing the chosen grading threshold. Three of these candidates show likely lensing features when observed by the Hubble Space Telescope. This work represents a further confirmation of the ability of CNNs to inspect large samples of galaxies in search of gravitational lenses. These algorithms will be crucial to exploiting the full scientific potential of forthcoming surveys with the Euclid satellite and the Vera Rubin Observatory.
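The training composites described above can be thought of as real LRG cutouts with a simulated arc added on top. A simplified sketch of that superposition is given below; the flux scaling is an assumption for illustration and is not the VOICE simulation pipeline.

```python
# Superimpose a simulated lensed arc on a real LRG cutout (illustrative only).
import numpy as np

def make_composite(lrg_cutout: np.ndarray,
                   simulated_arc: np.ndarray,
                   arc_peak_fraction: float = 0.3) -> np.ndarray:
    """Add the arc to the galaxy image, scaling the arc so that its peak is
    a chosen fraction of the galaxy's peak flux."""
    scale = arc_peak_fraction * lrg_cutout.max() / max(simulated_arc.max(), 1e-12)
    return lrg_cutout + scale * simulated_arc
```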
We present a systematic search for wide-separation (Einstein radius > 1.5 arcsec), galaxy-scale strong lenses in the 30,000 sq. deg of the Pan-STARRS 3$\pi$ survey on the Northern sky. With long time delays of a few days to weeks, such systems are particularly well suited for catching strongly lensed supernovae with spatially resolved multiple images and open new perspectives on early-phase supernova spectroscopy and cosmography. We produce a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies (LRGs) with known redshift and velocity dispersion from SDSS. First, we compute the photometry of mock lenses in gri bands and apply a simple catalog-level neural network to identify a sample of 1,050,207 galaxies with colours and magnitudes similar to the mocks. Second, we train a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample and obtain sets of 105,760 and 12,382 lens candidates with scores pCNN > 0.5 and > 0.9, respectively. Extensive tests show that CNN performance relies heavily on the design of lens simulations and the choice of negative examples for training, but little on the network architecture. Finally, we visually inspect all galaxies with pCNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions proves that our method correctly identifies lens LRGs at z ~ 0.1-0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at z_s = 1.185 strongly lensed by a foreground LRG at z_d = 0.3155. In the future, we expect that the efficient and automated two-step classification method presented in this paper will be applicable to the deeper gri stacks from the LSST with minor adjustments.
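The first, catalogue-level step of this two-stage method can be pictured as a small fully connected network acting on gri photometry alone, cheaply pre-selecting sources whose colours and magnitudes resemble the mocks before any pixel-level CNN is run. The sketch below is only illustrative: the feature set and layer sizes are assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a catalogue-level (photometry-only) pre-classifier.
import torch
import torch.nn as nn

catalog_classifier = nn.Sequential(
    nn.Linear(5, 32), nn.ReLU(),      # assumed features: g, r, i, g-r, g-i
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # probability of being mock-lens-like
)

# features: tensor of shape (N, 5); sources passing this cheap cut are then
# cut out and scored by the image-level CNN (pCNN > 0.9 goes to inspection).
# preselected = catalog_classifier(features).squeeze(1) > 0.5
```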
We perform a semi-automated search for strong gravitational lensing systems in the 9,000 deg$^2$ Dark Energy Camera Legacy Survey (DECaLS), part of the DESI Legacy Imaging Surveys (Dey et al.). The combination of the depth and breadth of these surveys is unparalleled at this time, making them particularly suitable for discovering new strong gravitational lensing systems. We adopt the deep residual neural network architecture (He et al.) developed by Lanusse et al. for the purpose of finding strong lenses in photometric surveys. We compile a training set that consists of known lensing systems in the Legacy Surveys and DES, as well as non-lenses in the footprint of DECaLS. In this paper we show the results of applying our trained neural network to cutout images centered on galaxies typed as ellipticals (Lang et al.) in DECaLS. The images that receive the highest scores (probabilities) are visually inspected and ranked. Here we present 335 candidate strong lensing systems, identified for the first time.
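The residual architecture referenced above builds its depth from blocks in which a stack of convolutions is added to an identity shortcut. The block below is a generic sketch of that idea (He et al.), not the exact network of Lanusse et al.

```python
# Generic residual block: output = ReLU(x + F(x)), where F is two conv layers.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))   # identity shortcut
```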