We consider a machine learning algorithm to detect and identify strong gravitational lenses in sky images. First, we simulate artificial but highly realistic images of galaxies, stars and strong lenses, using six different methods, i.e. two for each class. Then we deploy a convolutional neural network architecture to classify these simulated images. We show that after training, the neural network achieves about 93 percent accuracy. As a simple test of the efficiency of the convolutional neural network, we apply it to a real image of an Einstein cross. The deployed neural network classifies it as a gravitational lens, thus opening the way for a variety of lens-search applications of the deployed machine learning scheme.
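To make the classification step concrete, the following is a minimal Python/Keras sketch of a three-class image classifier (galaxy, star, lens); the 64x64 single-channel cutout size, the layer sizes and the random placeholder data are illustrative assumptions, not the architecture or training set described above.

import numpy as np
from tensorflow.keras import layers, models

def build_classifier(input_shape=(64, 64, 1), n_classes=3):
    # Small convolutional network for three-class image classification.
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Random arrays standing in for the simulated galaxy/star/lens cutouts.
X = np.random.rand(600, 64, 64, 1).astype("float32")
y = np.random.randint(0, 3, size=600)
model = build_classifier()
model.fit(X, y, epochs=5, validation_split=0.2)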
We present an automated approach to detect and extract information from astronomical datasets on the shapes of objects such as galaxies, star clusters and, especially, elongated ones such as gravitational lenses. First, the Kolmogorov stochasticity parameter is used to retrieve the sub-regions that warrant further attention. Then we turn to image processing and the Principal Component Analysis (PCA) machine learning algorithm to retrieve the sought objects and reveal information on their morphologies. We demonstrate the capability of our automated method to identify distinct objects and to classify them based on the input parameters. A catalog of possible lensing objects is produced as the output of the software; the candidates that survive the applied filters are then inspected individually.
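A rough Python sketch of this two-stage idea is given below: a Kolmogorov-Smirnov-type stochasticity score ranks image sub-regions, and PCA then summarises the morphology of the selected cutouts. The window size, the uniform reference distribution and the number of retained components are illustrative assumptions rather than the actual parameters of the software.

import numpy as np
from scipy.stats import kstest
from sklearn.decomposition import PCA

def stochasticity_scores(image, win=16):
    # Kolmogorov-Smirnov distance of each window's pixel distribution
    # from a uniform reference, used here as a stand-in stochasticity measure.
    h, w = image.shape
    scores = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            patch = image[i:i + win, j:j + win].ravel()
            patch = (patch - patch.min()) / (patch.max() - patch.min() + 1e-12)
            res = kstest(patch, "uniform")
            scores.append(((i, j), res.statistic))
    return scores

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))          # placeholder for a survey image
top = sorted(stochasticity_scores(image), key=lambda s: s[1], reverse=True)[:20]

# PCA on the flattened cutouts of the highest-scoring sub-regions.
cutouts = np.array([image[i:i + 16, j:j + 16].ravel() for (i, j), _ in top])
morphology = PCA(n_components=5).fit_transform(cutouts)
print(morphology.shape)                      # (20, 5) summary per candidate region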
The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey.
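As an illustration of this kind of pipeline, here is a small Python sketch that extracts Gabor-filterbank statistics with scikit-image and feeds them to a scikit-learn SVM; the filter frequencies, orientations and the synthetic lens/non-lens data are assumptions for demonstration, not the filterbank or training sets used in the work above.

import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def gabor_features(image, freqs=(0.1, 0.2, 0.4), thetas=(0.0, np.pi / 4, np.pi / 2)):
    # Mean and variance of the Gabor response magnitude over a small filter bank.
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)

rng = np.random.default_rng(1)
images = rng.normal(size=(40, 32, 32))       # placeholder lens/non-lens cutouts
labels = rng.integers(0, 2, size=40)

X = np.array([gabor_features(im) for im in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:5]))                    # 1 = lens candidate, 0 = non-lens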
Many continuous gravitational wave searches are affected by instrumental spectral lines that could be confused with a continuous astrophysical signal. Several techniques have been developed to limit the effect of these lines by penalising signals that appear in only a single detector. We have developed a general method, using a convolutional neural network, to reduce the impact of instrumental artefacts on searches that use the SOAP algorithm. The method can identify features in corresponding frequency bands of each detector and classify these bands as containing a signal, an instrumental line, or noise. We tested the method against four different data sets: Gaussian noise with time gaps, data from the final run of Initial LIGO (S6) with signals added, the reference S6 mock data challenge data set, and signals injected into data from the second Advanced LIGO observing run (O2). Using the S6 mock data challenge data set and a 1% false alarm probability, we showed that at 95% efficiency a fully automated SOAP search has a sensitivity corresponding to a coherent signal-to-noise ratio of 110, equivalent to a sensitivity depth of 10 Hz$^{-1/2}$, making this automated search competitive with other searches requiring significantly more computing resources and human intervention.
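The quoted numbers combine a false alarm probability with a detection efficiency; the short Python sketch below shows how such a 1% false-alarm threshold and the corresponding efficiency can be estimated from classifier scores, using synthetic score distributions in place of real SOAP/CNN outputs.

import numpy as np

rng = np.random.default_rng(2)
noise_scores = rng.normal(0.0, 1.0, size=10000)   # scores on noise-only bands
signal_scores = rng.normal(3.0, 1.0, size=1000)   # scores on bands with injected signals

# Threshold giving a 1% false alarm probability on the noise-only scores.
threshold = np.quantile(noise_scores, 0.99)

# Detection efficiency: fraction of injections recovered above that threshold.
efficiency = np.mean(signal_scores > threshold)
print(f"threshold = {threshold:.2f}, efficiency = {efficiency:.2%}")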
By now, tens of gravitational-wave (GW) events have been detected by the LIGO and Virgo detectors. These GWs have all been emitted by compact binary coalescences, for which we have excellent predictive models. However, there might be other sources for which we do not have reliable models. Some are expected to exist but to be very rare (e.g., supernovae), while others may be totally unanticipated. So far, no unmodeled sources have been discovered, but the lack of models makes the search for such sources much more difficult and less sensitive. We present here a search for unmodeled GW signals using semi-supervised machine learning. We apply deep learning and outlier detection algorithms to labeled spectrograms of GW strain data, and then search for spectrograms with anomalous patterns in public LIGO data. We searched $\sim 13\%$ of the coincident data from the first two observing runs. No candidates of GW signals were detected in the data analyzed. We evaluate the sensitivity of the search using simulated signals, we show that this search can detect spectrograms containing unusual or unexpected GW patterns, and we report the waveforms and amplitudes for which a $50\%$ detection rate is achieved.
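One common way to implement outlier detection on spectrograms is to train an autoencoder on ordinary data and flag large reconstruction errors; the Python/Keras sketch below illustrates that general idea, with the network size and the random placeholder spectrograms as assumptions, and is not the specific semi-supervised pipeline described above.

import numpy as np
from tensorflow.keras import layers, models

# Small convolutional autoencoder for 64x64 single-channel spectrograms.
inputs = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D()(x)
x = layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same")(encoded)
x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on "ordinary" spectrograms; anomalies reconstruct poorly.
background = np.random.rand(500, 64, 64, 1).astype("float32")
autoencoder.fit(background, background, epochs=3, batch_size=32)

test = np.random.rand(10, 64, 64, 1).astype("float32")
errors = np.mean((autoencoder.predict(test) - test) ** 2, axis=(1, 2, 3))
print(errors)  # larger reconstruction error -> more anomalous spectrogram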
Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. Uncorrected systematic errors limit the practical applicability of this approach to high-amplitude variability and well-behaved data sets. Searching for a new variability detection technique that would be applicable to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties, we propose to treat variability detection as a classification problem that can be approached with machine learning. We compare several classification algorithms: Logistic Regression (LR), Support Vector Machines (SVM), k-Nearest Neighbors (kNN), Neural Nets (NN), Random Forests (RF) and the Stochastic Gradient Boosting classifier (SGB), applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of OGLE-II Large Magellanic Cloud (LMC) photometry (30265 light curves) that was searched for variability using traditional methods (168 known variable objects identified) as the training set, and then apply the NN to a new test set of 31798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables and 13 low-amplitude variables are new discoveries. We find that the considered machine learning classifiers are more efficient (they find more variables and fewer false candidates) than traditional techniques that consider individual variability indices or their linear combination. The NN, SGB, SVM and RF show a higher efficiency than LR and kNN.
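The comparison of classifiers on a fixed feature table maps directly onto scikit-learn; the Python sketch below cross-validates the six algorithm families named above on a random stand-in for the 18 variability indices, with sklearn's GradientBoostingClassifier (subsampled) used as a stand-in for SGB.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 18))                                 # 18 variability indices per light curve
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.5).astype(int)   # rare "variable" class

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=1000),
    "RF": RandomForestClassifier(),
    "SGB": GradientBoostingClassifier(subsample=0.5),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
    print(f"{name}: F1 = {scores.mean():.3f} +/- {scores.std():.3f}")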