We present an algorithm that uses Principal Component Analysis (PCA) to subtract galaxies from imaging data, together with two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual images. The combined method is optimized to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual images in two different ways. The first applies a curvelet (curved-wavelet) filter to the residual images to enhance any curved or ring-like feature. The resulting image is transformed into polar coordinates, centered on the lens galaxy. In these coordinates a ring becomes a line, allowing us to detect very faint rings by taking advantage of the signal-to-noise integrated along the ring. The second way of analysing the PCA-subtracted images identifies structures in the residuals and assesses whether they are lensed images according to their orientation, multiplicity and elongation. We apply the two methods to a sample of simulated Einstein rings, as they would be observed with the ESA Euclid satellite in the VIS band. The polar-coordinate transform allows us to reach a completeness of 90% and a purity of 86% whenever the signal-to-noise integrated along the ring exceeds 30, almost independently of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed and improved further using machine learning and dictionary learning methods, which would extend the capabilities of the method to more complex and diverse galaxy shapes.
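The polar-coordinate trick at the heart of the first method can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the function name `to_polar`, the grid sizes, and the toy ring image are assumptions made for the example. A ring centered on the lens becomes a line of constant radius in the (r, theta) plane, so summing over the angle axis integrates the ring's signal-to-noise even when individual pixels are at S/N ~ 1.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(residual, center, n_r=64, n_theta=180):
    """Resample a residual image onto a (radius, angle) grid.

    A ring centered on `center` becomes a horizontal line in the
    output, so summing over the angle axis integrates the ring's
    signal into a single radial profile.
    """
    cy, cx = center
    r = np.linspace(0.0, min(residual.shape) / 2.0, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return r, map_coordinates(residual, coords, order=1)

# Toy example: a faint ring of radius 20 px with per-pixel S/N ~ 1,
# buried in unit-variance Gaussian noise.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (101, 101))
yy, xx = np.mgrid[:101, :101]
img[np.abs(np.hypot(yy - 50, xx - 50) - 20.0) < 1.5] += 1.0

r, polar = to_polar(img, center=(50.0, 50.0))
profile = polar.sum(axis=1)      # integrate over angle
inner = r > 5.0                  # small radii oversample a single pixel
peak_r = r[inner][np.argmax(profile[inner])]
```

The radial profile peaks near the true ring radius even though the ring is invisible pixel by pixel, which is exactly the gain the abstract describes.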
We present the results of a new search for galaxy-scale strong lensing systems in CFHTLS Wide. Our lens-finding technique involves a preselection of potential lens galaxies, applying simple cuts in size and magnitude. We then perform a Principal Component Analysis of the galaxy images, ensuring a clean removal of the light profile. Lensed features are searched for in the residual images using the density-based clustering algorithm DBSCAN. We find 1098 lens candidates that we inspect visually, leading to a cleaned sample of 109 new lens candidates. Using realistic image simulations, we estimate the completeness of our sample and show that it is independent of source surface brightness, Einstein ring size (image separation) and lens redshift. We compare the properties of our sample to those of previous lens searches in CFHTLS. Including the present search, the total number of lenses found in CFHTLS amounts to 678, which corresponds to ~4 lenses per square degree down to i=24.8. This is equivalent to ~60,000 lenses in a survey as wide as Euclid, but at the CFHTLS resolution and depth.
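The residual-image clustering step can be sketched with scikit-learn's DBSCAN. The thresholding scheme, the `eps` and `min_samples` values, and the toy residual below are illustrative assumptions, not the parameters of the actual search: bright residual pixels are grouped into spatially connected clumps, each of which is a candidate arc or counter-image whose orientation and elongation can then be assessed.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_lensed_features(residual, threshold=3.0, eps=2.0, min_samples=5):
    """Group bright residual pixels into candidate lensed images.

    Pixels above `threshold` (in noise-sigma units) are clustered with
    DBSCAN; each returned cluster is a candidate arc or counter-image.
    Isolated noise pixels are labelled -1 by DBSCAN and discarded.
    """
    ys, xs = np.nonzero(residual > threshold)
    pts = np.column_stack([ys, xs])
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return [pts[labels == k] for k in set(labels) if k != -1]

# Toy residual: unit noise plus two compact blobs on opposite sides
# of the (subtracted) lens galaxy, mimicking an arc and counter-image.
rng = np.random.default_rng(1)
res = rng.normal(0.0, 1.0, (80, 80))
res[20:24, 38:42] += 10.0   # arc fragment
res[56:60, 38:42] += 10.0   # counter-image
clusters = find_lensed_features(res)
```

Because noise pixels above threshold are sparse and isolated, they fail the `min_samples` density criterion, while the two blobs each come out as one cluster.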
We present a systematic search for wide-separation (Einstein radius > 1.5"), galaxy-scale strong lenses in the 30,000 sq. deg of the Pan-STARRS 3pi survey on the Northern sky. With long time delays of a few days to weeks, such systems are particularly well suited for catching strongly lensed supernovae with spatially resolved multiple images, and they open new perspectives on early-phase supernova spectroscopy and cosmography. We produce a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of luminous red galaxy (LRG) lenses with known redshift and velocity dispersion from SDSS. First, we compute the photometry of mock lenses in the gri bands and apply a simple catalog-level neural network to identify a sample of 1,050,207 galaxies with colors and magnitudes similar to the mocks. Second, we train a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample, obtaining sets of 105,760 and 12,382 lens candidates with scores p_CNN > 0.5 and > 0.9, respectively. Extensive tests show that CNN performance relies heavily on the design of the lens simulations and the choice of negative examples for training, but little on the network architecture. Finally, we visually inspect all galaxies with p_CNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates, while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions confirms that our method correctly identifies lens LRGs at z ~ 0.1-0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at z_s = 1.185 strongly lensed by a foreground LRG at z_d = 0.3155. In the future, we expect that the efficient and automated two-step classification method presented in this paper will be applicable to the deeper gri stacks from LSST with minor adjustments.
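The first, catalog-level step can be illustrated with a small multilayer perceptron acting on gri-derived magnitudes and colors. Everything below is an assumption for the sketch: the mock color distributions, the feature choice (i-band magnitude, g-r and r-i colors), and the network size are invented for illustration; the actual pre-selection was trained on photometry of the painted lens simulations.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Mock catalog features: [i magnitude, g-r color, r-i color].
# In this toy, "lens" systems are brighter and redder than the field
# population, loosely mimicking LRG-dominated mock lenses.
rng = np.random.default_rng(2)
n = 2000
field = np.column_stack([rng.normal(21.0, 1.0, n),
                         rng.normal(0.8, 0.3, n),
                         rng.normal(0.4, 0.2, n)])
mocks = np.column_stack([rng.normal(19.5, 0.8, n),
                         rng.normal(1.6, 0.3, n),
                         rng.normal(0.9, 0.2, n)])
X = np.vstack([field, mocks])
y = np.concatenate([np.zeros(n), np.ones(n)])

# A small catalog-level network: cheap enough to score an entire
# wide-survey catalog before any pixel-level CNN is run.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=500, random_state=0))
clf.fit(X, y)
score = clf.score(X, y)   # training accuracy on well-separated mocks
```

The point of this stage is throughput: a catalog-level cut shrinks the sample by orders of magnitude so that the expensive image-based CNN only sees plausible candidates.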
We present the first galaxy-scale lens catalog from the second Red-Sequence Cluster Survey. The catalog contains 60 lensing system candidates, comprising Luminous Red Galaxy (LRG) lenses at 0.2 < z < 0.5 surrounded by blue arcs or apparent multiple images of background sources. The catalog is a valuable complement to previous galaxy-galaxy lens catalogs, as it samples an intermediate lens redshift range and is composed of bright sources and lenses that allow easy follow-up for detailed analysis. Mass and mass-to-light ratio estimates reveal that the lens galaxies are massive (<M> ~ 5.5x10^11 M_sun/h) and rich in dark matter (<M/L> ~ 14 h M_sun/L_sun,B). Even though a slight increasing trend in the mass-to-light ratio is observed from z=0.2 to z=0.5, current redshift and light profile measurements do not allow stringent constraints on the mass-to-light ratio evolution of LRGs.
We present a spectroscopic survey for strong galaxy-galaxy lenses. Exploiting optimal sight-lines to massive, bulge-dominated galaxies at redshifts $z \sim 0.4$ with wide-field, multifibre spectroscopy, we anticipate the detection of 10-20 lensed Lyman-$\alpha$ emitting galaxies at redshifts $z \gtrsim 3$ from a sample of 2000 deflectors. Initial spectroscopic observations are described, and the prospects for constraining the emission-line luminosity function of the Lyman-$\alpha$ emitting population are outlined.
In this paper we develop a new unsupervised machine learning technique comprising a feature extractor, a convolutional autoencoder (CAE), and a clustering algorithm based on a Bayesian Gaussian mixture model (BGM). We apply this technique to visual-band, space-based simulated imaging data for the Euclid Space Telescope, taken from the Strong Gravitational Lenses Finding Challenge. Our technique shows promise in capturing a variety of lensing features, such as Einstein rings with different radii and distorted arc structures, without using predefined labels. After the clustering process, we obtain several classification clusters separated by the different visual features seen in the images. Our method successfully picks up $\sim$63 percent of lensing images from all lenses in the training set. With the assumed probability proposed in this study, the technique reaches an accuracy of $77.25 \pm 0.48$% in binary classification using the training set. Additionally, our unsupervised clustering process can be used as a preliminary classification for future lens surveys, to select targets efficiently and to speed up the labelling process. As a starting point for astronomical applications of this technique, we not only explore its use on gravitationally lensed systems, but also discuss its limitations and potential future uses.
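The clustering stage of such a pipeline can be sketched with scikit-learn's BayesianGaussianMixture acting on latent features. In the paper those features come from the CAE bottleneck; here two synthetic Gaussian blobs stand in for "lens-like" and "non-lens-like" regions of the latent space, which is purely an assumption for illustration. The Dirichlet-process prior lets the model prune unused components, so the number of clusters does not need to be fixed in advance.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Stand-in 2-D latent features: two well-separated blobs mimicking
# distinct image morphologies in the autoencoder's latent space.
rng = np.random.default_rng(3)
lens_feats = rng.normal([2.0, 2.0], 0.4, (300, 2))
other_feats = rng.normal([-1.0, 0.0], 0.4, (300, 2))
X = np.vstack([lens_feats, other_feats])

# n_components is only an upper bound: the Dirichlet-process prior
# drives the weights of unneeded components toward zero.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = bgm.predict(X)
n_used = len(np.unique(labels))
```

Each resulting cluster can then be inspected once, and its dominant visual character (ring, arc, spiral contaminant, etc.) propagated to all its members, which is what makes the approach useful as a fast preliminary classification.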