
Strong lens systems search in the Dark Energy Survey using Convolutional Neural Networks

Posted by: Karina Rojas
Publication date: 2021
Research field: Physics
Paper language: English





We performed a search for galaxy-scale strong-lens systems in the first data release of the Dark Energy Survey (DES), from a color-selected parent sample of 18,745,029 Luminous Red Galaxies (LRGs). Our search was based on a Convolutional Neural Network (CNN) that grades our LRG selection with values between 0 (non-lens) and 1 (lens). Our training set was data-driven: lensed sources were taken from HST COSMOS images, while the light distribution of the lens plane was taken directly from DES images of our LRGs. A total of 76,582 cutouts obtained a score above 0.9. These were visually inspected and resulted in two catalogs. The first contains 405 lens candidates, of which 90 present clear lensing features and counterparts, while the other 315 require more evidence, such as higher-resolution images or spectra, to be conclusive. A total of 186 candidates are newly identified in this search. The second catalog includes 539 ring-galaxy candidates that will be useful to train CNNs against this type of false positive. For the 90 best lens candidates we carried out color-based deblending of the lens and source light without fitting any analytical profile to the data. The method proved very efficient at deblending, even for very compact objects and for objects with very complex morphology. Finally, from the 90 best lens candidates we selected 52 systems with a single deflector to test an automated modeling pipeline, which successfully modeled 79% of the sample within an acceptable amount of computing time.
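To make the selection step concrete, the sketch below shows how CNN scores in [0, 1] could be turned into a visual-inspection sample using the 0.9 cut described above. It is an illustrative example only: the network architecture, cutout size, and array names are assumptions, not the ones used in the paper.

```python
# Minimal sketch (not the authors' code): score cutouts with a small CNN
# and keep those above the 0.9 threshold for visual inspection.
import numpy as np
import tensorflow as tf

def build_scorer(input_shape=(44, 44, 3)):
    """Toy binary CNN returning a lens score in [0, 1].
    Architecture and cutout size are placeholders, not the paper's."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 0 = non-lens, 1 = lens
    ])

model = build_scorer()
# Stand-in for DES gri cutouts of the colour-selected LRGs.
cutouts = np.random.rand(1000, 44, 44, 3).astype("float32")
scores = model.predict(cutouts, batch_size=256).ravel()
to_inspect = np.where(scores > 0.9)[0]  # the 0.9 cut that feeds visual inspection
print(f"{to_inspect.size} cutouts passed the 0.9 threshold")
```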


Read also

Future large-scale surveys with high-resolution imaging will provide us with a few $10^5$ new strong galaxy-scale lenses. These strong-lensing systems, however, will be contained in data volumes beyond the capacity of human experts to visually classify in an unbiased way. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the Strong Lensing Challenge organised by the Bologna Lens Factory, where it achieved first and third place on the space-based and ground-based data sets, respectively. The goal was to find a fully automated lens finder for ground-based and space-based surveys that minimizes human inspection. We compare the results of our CNN architecture and three new variations (invariant views and residual) on the simulated data of the challenge. Each method has been trained separately 5 times on 17,000 simulated images, cross-validated using 3,000 images, and then applied to a 100,000-image test set. We used two different metrics for evaluation: the area under the receiver operating characteristic curve (AUC) and the recall with no false positives ($\mathrm{Recall}_{\mathrm{0FP}}$). For ground-based data our best method achieved an AUC score of $0.977$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.50$. For space-based data our best method achieved an AUC score of $0.940$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.32$. On space-based data, adding dihedral invariance to the CNN architecture diminished the overall score but achieved a higher zero-contamination recall. We found that committees of 5 CNNs produce the best recall at zero contamination and consistently score a better AUC than a single CNN. We also found that, for every variation of our CNN lens finder, we achieve AUC scores close to $1$ within $6\%$.
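For reference, the two metrics quoted above can be computed as in the following sketch, together with the simple score averaging used by a committee of CNNs. The scikit-learn call and the placeholder arrays are assumptions of this note, not the challenge code.

```python
# Sketch: AUC and recall at zero false positives for committee-averaged scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def recall_at_zero_fp(y_true, y_score):
    """Fraction of true lenses recovered when the threshold is set just above
    the highest-scoring non-lens, so that no false positive is selected."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score)
    threshold = np.max(y_score[~y_true])          # strictest non-lens score
    return np.mean(y_score[y_true] > threshold)

# Committee of 5 CNNs: average their individual scores per image.
member_scores = np.random.rand(5, 100_000)        # placeholder scores, one row per CNN
committee_score = member_scores.mean(axis=0)
y_true = np.random.rand(100_000) < 0.5            # placeholder labels

print("AUC       :", roc_auc_score(y_true, committee_score))
print("Recall_0FP:", recall_at_zero_fp(y_true, committee_score))
```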
We present a systematic search for wide-separation (Einstein radius > 1.5 arcsec), galaxy-scale strong lenses in the 30,000 sq. deg of the Pan-STARRS 3pi survey of the Northern sky. With long time delays of a few days to weeks, such systems are particularly well suited for catching strongly lensed supernovae with spatially resolved multiple images, and they open new perspectives on early-phase supernova spectroscopy and cosmography. We produce a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies with known redshift and velocity dispersion from SDSS. First, we compute the photometry of mock lenses in gri bands and apply a simple catalog-level neural network to identify a sample of 1,050,207 galaxies with colors and magnitudes similar to the mocks. Second, we train a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample and obtain sets of 105,760 and 12,382 lens candidates with scores p_CNN > 0.5 and > 0.9, respectively. Extensive tests show that CNN performance relies heavily on the design of lens simulations and the choice of negative examples for training, but little on the network architecture. Finally, we visually inspect all galaxies with p_CNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions proves our method correctly identifies lens LRGs at z ~ 0.1-0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at z_s = 1.185 strongly lensed by a foreground LRG at z_d = 0.3155. In the future, we expect that the efficient and automated two-step classification method presented in this paper will be applicable to the deeper gri stacks from the LSST with minor adjustments.
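The "painting" of lensed sources onto real survey cutouts described above can be illustrated schematically as follows. This is a minimal sketch under simplifying assumptions: a Gaussian blur stands in for PSF matching, the array names are invented, and a real pipeline would also handle pixel scale, zeropoints, and noise.

```python
# Sketch: co-add a simulated lensed source onto a real LRG cutout (per band).
import numpy as np
from scipy.ndimage import gaussian_filter

def paint_mock_lens(lrg_cutout, lensed_source, psf_fwhm_pix, flux_scale=1.0):
    """Return a mock lens image: real LRG light plus a blurred lensed source.
    Arrays are (band, y, x); the Gaussian blur approximates PSF matching."""
    sigma = psf_fwhm_pix / 2.355                       # FWHM -> Gaussian sigma
    blurred = gaussian_filter(lensed_source, sigma=(0, sigma, sigma))
    return lrg_cutout + flux_scale * blurred

# Placeholder arrays standing in for a Pan-STARRS gri cutout of a known LRG
# and a COSMOS-based lensed-source image in the same bands.
lrg = np.random.rand(3, 64, 64)
source = np.zeros((3, 64, 64))
source[:, 28:36, 20:28] = 1.0                          # fake arc-like feature
mock = paint_mock_lens(lrg, source, psf_fwhm_pix=3.0)
```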
We search Dark Energy Survey (DES) Year 3 imaging data for galaxy-galaxy strong gravitational lenses using convolutional neural networks. We generate 250,000 simulated lenses at redshifts > 0.8, from which we create a data set for training the neural networks with realistic seeing, sky, and shot noise. Using the simulations as a guide, we build a catalogue of 1.1 million DES sources with 1.8 < g - i < 5, 0.6 < g - r < 3, r_mag > 19, g_mag > 20, and i_mag > 18.2. We train two ensembles of neural networks on training sets consisting of simulated lenses, simulated non-lenses, and real sources. We use the neural networks to score images of each of the sources in our catalogue with a value from 0 to 1 and select those with scores greater than a chosen threshold for visual inspection, resulting in a candidate set of 7,301 galaxies. During visual inspection we rate 84 as probable or definite lenses. Four of these are previously known lenses or lens candidates. We inspect a further 9,428 candidates with a different score threshold and identify four new candidates. We present 84 new strong lens candidates, selected after a few hours of visual inspection by astronomers. This catalogue contains a number of high-redshift lenses comparable to that predicted by simulations. Based on the simulations, we estimate our sample to contain most of the discoverable lenses in this imaging and at this redshift range.
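The catalogue pre-selection quoted above is a set of simple colour and magnitude cuts, which could be applied as boolean masks as in the sketch below; the arrays hold random placeholder values for illustration, not the DES catalogue itself.

```python
# Sketch: apply the colour-magnitude cuts quoted in the abstract to a catalogue.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                      # placeholder catalogue size
g = rng.uniform(18, 26, n)         # g-band magnitudes (placeholder values)
r = rng.uniform(17, 25, n)         # r-band magnitudes
i = rng.uniform(16, 24, n)         # i-band magnitudes

keep = (
    (g - i > 1.8) & (g - i < 5)
    & (g - r > 0.6) & (g - r < 3)
    & (r > 19) & (g > 20) & (i > 18.2)
)
print(f"{keep.sum()} sources pass the cuts and would be scored by the CNN ensembles")
```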
We present a new sample of galaxy-scale strong gravitational-lens candidates, selected from 904 square degrees of Data Release 4 of the Kilo-Degree Survey (KiDS): the Lenses in the Kilo-Degree Survey (LinKS) sample. We apply two Convolutional Neural Networks (ConvNets) to $\sim$88,000 colour-magnitude selected luminous red galaxies, yielding a list of 3,500 strong-lens candidates. This list is further down-selected via human inspection. The resulting LinKS sample is composed of 1,983 rank-ordered targets classified as potential lens candidates by at least one inspector. Of these, a high-grade subsample of 89 targets is identified as potential strong lenses by all inspectors. Additionally, we present a collection of another 200 strong-lens candidates discovered serendipitously in various previous ConvNet runs. A straightforward application of our procedure to future Euclid or LSST data can select a sample of $\sim$3,000 lens candidates with less than 10 per cent expected false positives and requiring minimal human intervention.
We present a sample of 16 likely strong gravitational lenses identified in the VST Optical Imaging of the CDFS and ES1 fields (VOICE survey) using Convolutional Neural Networks (CNNs). We train two different CNNs on composite images produced by superimposing simulated gravitational arcs on real Luminous Red Galaxies observed in VOICE. Specifically, the first CNN is trained on single-band images and more easily identifies systems with large Einstein radii, while the second, trained on composite RGB images, is more accurate at retrieving systems with smaller Einstein radii. We apply both networks to real data from the VOICE survey, taking advantage of the high limiting magnitude (26.1 in the r-band) and low PSF FWHM (0.8 arcsec in the r-band) of this deep survey. We analyse $\sim$21,200 images with mag_r < 21.5, identifying 257 lens candidates. To retrieve a high-confidence sample and to assess the accuracy of our technique, nine of the authors perform a visual inspection. Roughly 75% of the systems are classified as likely lenses by at least one of the authors. Finally, we assemble the LIVE sample (Lenses In VoicE), composed of the 16 systems passing the chosen grading threshold. Three of these candidates show likely lensing features when observed by the Hubble Space Telescope. This work represents a further confirmation of the ability of CNNs to inspect large samples of galaxies in search of gravitational lenses. These algorithms will be crucial to exploiting the full scientific potential of forthcoming surveys with the Euclid satellite and the Vera Rubin Observatory.
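As an aside, one common way to build colour composites from gri cutouts (whether or not it is exactly the compositing used here) is the Lupton asinh scheme available in astropy, sketched below with placeholder images.

```python
# Sketch: build an RGB composite from single-band cutouts with the Lupton scheme.
import numpy as np
from astropy.visualization import make_lupton_rgb

# Placeholder i, r, g cutouts of one galaxy; in practice these would be
# registered survey images of the same field.
rng = np.random.default_rng(1)
img_i, img_r, img_g = (rng.random((64, 64)) for _ in range(3))

# Lupton et al. (2004) asinh scaling; stretch and Q are illustrative choices.
rgb = make_lupton_rgb(img_i, img_r, img_g, stretch=0.5, Q=8)
print(rgb.shape, rgb.dtype)   # (64, 64, 3) uint8, ready for a CNN or for display
```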