
A Machine Learning Approach to the Detection of Ghosting and Scattered Light Artifacts in Dark Energy Survey Images

Posted by Michael H. L. S. Wang
Publication date: 2021
Research field: Physics
Paper language: English





Astronomical images are often plagued by unwanted artifacts that arise from a number of sources, including imperfect optics, faulty image sensors, cosmic-ray hits, and even airplanes and artificial satellites. Spurious reflections (known as ghosts) and the scattering of light off the surfaces of a camera and/or telescope are particularly difficult to avoid. Detecting ghosts and scattered light efficiently in large cosmological surveys that will acquire petabytes of data can be a daunting task. In this paper, we use data from the Dark Energy Survey to develop, train, and validate a machine learning model that detects ghosts and scattered light using convolutional neural networks. The model architecture and training procedure are discussed in detail, and the performance on the training and validation sets is presented. Testing is performed on observational data, and the results are compared with those from a ray-tracing algorithm. As a proof of principle, we show that our method is promising for the Rubin Observatory and beyond.
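The paper's actual detector is a multi-layer convolutional network trained on labeled Dark Energy Survey images; as an illustration only, the toy sketch below shows the basic building block such a classifier rests on: a convolution, a ReLU nonlinearity, global pooling, and a sigmoid score. All names, shapes, and parameters here are invented for the example and are not the authors' architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def ghost_score(image, kernel, weight, bias):
    """One conv layer -> ReLU -> global average pool -> sigmoid probability.
    A real detector stacks many such layers and learns kernel/weight/bias
    from labeled ghost and scattered-light cutouts."""
    feature = np.maximum(conv2d(image, kernel), 0.0)        # ReLU activation
    pooled = feature.mean()                                 # global average pooling
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))  # sigmoid score in (0, 1)
```

In a trained network the output would be thresholded to flag an exposure as containing a ghost or scattered-light artifact.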


Read also

Astronomical images from optical photometric surveys are typically contaminated with transient artifacts such as cosmic rays, satellite trails, and scattered light. We have developed and tested an algorithm that removes these artifacts using a deep, artifact-free, static-sky coadd image built up through the median combination of point spread function (PSF) homogenized, overlapping single-epoch images. Transient artifacts are detected and masked in each single-epoch image through comparison with an artifact-free, PSF-matched simulated image that is constructed using the PSF-corrected, model-fitting catalog from the artifact-free coadd image together with the position-variable PSF model of the single-epoch image. This approach works well not only for cleaning single-epoch images with worse seeing than the PSF-homogenized coadd, but also for the traditionally much more challenging problem of cleaning single-epoch images with better seeing. In addition to masking transient artifacts, we have developed an interpolation approach that uses the local PSF and performs well in removing artifacts whose widths are smaller than the PSF full width at half maximum, including cosmic rays, the peaks of saturated stars, and bleed trails. We have tested this algorithm on Dark Energy Survey Science Verification data and present performance metrics. More generally, our algorithm can be applied to any survey that images the same part of the sky multiple times.
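The core idea of the coadd-based cleaning above can be sketched in a few lines: a pixelwise median rejects features present in only one epoch, and pixels that deviate strongly from the template are masked. This is a minimal toy version, omitting the PSF homogenization and model-fitting steps the abstract describes; the function names and the 5-sigma threshold are illustrative assumptions.

```python
import numpy as np

def median_coadd(epochs):
    """Pixelwise median of overlapping single-epoch images.
    A transient artifact (cosmic ray, satellite trail) appears in only
    one epoch, so the median combination rejects it."""
    return np.median(np.stack(epochs), axis=0)

def transient_mask(epoch, template, sigma, nsig=5.0):
    """Flag pixels where a single epoch deviates from a template image
    by more than nsig standard deviations of the expected noise."""
    return np.abs(epoch - template) > nsig * sigma
```

The production pipeline compares each epoch against a PSF-matched *simulated* image rather than the raw coadd, precisely so that epochs with better seeing than the coadd can also be cleaned.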
We show that multiple machine learning algorithms can match human performance in classifying transient imaging data from the Sloan Digital Sky Survey (SDSS) supernova survey into real objects and artefacts. This is a first step in any transient science pipeline and is currently still done by humans, but future surveys such as the Large Synoptic Survey Telescope (LSST) will necessitate fully machine-enabled solutions. Using features trained from eigenimage analysis (principal component analysis, PCA) of single-epoch g-, r-, and i-band difference images, we can reach a completeness (recall) of 96 per cent, while only incorrectly classifying at most 18 per cent of artefacts as real objects, corresponding to a precision (purity) of 84 per cent. In general, random forests performed best, followed by the k-nearest neighbour and the SkyNet artificial neural net algorithms, compared to other methods such as naive Bayes and kernel support vector machine. Our results show that PCA-based machine learning can match human success levels and can naturally be extended by including multiple epochs of data, transient colours and host galaxy information, which should allow for significant further improvements, especially at low signal-to-noise.
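The PCA-features-plus-random-forest pipeline described above maps directly onto standard library primitives. The sketch below uses synthetic 8x8 cutouts in place of real SDSS difference images ("real" sources get a bright central pixel); the class construction, component count, and accuracy are invented for the demonstration and do not reproduce the paper's numbers.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, npix = 200, 64                                       # 200 cutouts per class, 8x8 flattened
real = rng.normal(size=(n, npix)); real[:, 27] += 5.0   # "real" detections: bright central pixel
junk = rng.normal(size=(n, npix))                       # "artefacts": pure noise
X = np.vstack([real, junk])
y = np.array([1] * n + [0] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Eigenimage features (PCA) feeding a random forest, the best performer in the study.
clf = make_pipeline(PCA(n_components=10),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)
```

Swapping the final estimator lets the same pipeline compare k-nearest neighbours, naive Bayes, or a kernel SVM, as the study does.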
K. Eckert, 2020
For ground-based optical imaging with current CCD technology, the Poisson fluctuations in source and sky background photon arrivals dominate the noise budget and are readily estimated. Another component of noise, however, is the signal from the undetected population of stars and galaxies. Using injection of artificial galaxies into images, we demonstrate that the measured variance of galaxy moments (used for weak gravitational lensing measurements) in Dark Energy Survey (DES) images is significantly in excess of the Poisson predictions, by up to 30%, and that the background sky levels are overestimated by current software. By cross-correlating distinct images of empty sky regions, we establish that there is a significant image noise contribution from undetected static sources (US), which on average are mildly resolved at DES resolution. Treating these US as a stationary noise source, we compute a correction to the moment covariance matrix expected from Poisson noise. The corrected covariance matrix matches the moment variances measured on the injected DES images to within 5%. Thus we have an empirical method to statistically account for US in weak lensing measurements, rather than requiring extremely deep sky simulations. We also find that local sky determinations can remove the bias in flux measurements, at a small penalty in additional, but quantifiable, noise.
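The cross-correlation trick above works because read and Poisson noise are independent between two exposures of the same patch, while light from static undetected sources is common to both; the pixelwise covariance therefore isolates the undetected-source variance. A minimal sketch of the idea, with the field and noise levels chosen arbitrarily for the example:

```python
import numpy as np

def cross_covariance(img1, img2):
    """Mean pixelwise covariance of two exposures of the same sky patch.
    Per-exposure noise is independent and averages away, so a nonzero
    result measures the variance contributed by static sources common
    to both images."""
    return np.mean((img1 - img1.mean()) * (img2 - img2.mean()))

rng = np.random.default_rng(1)
shape = (200, 200)
static = rng.normal(0.0, 1.0, shape)            # faint undetected sources (shared signal)
exp1 = static + rng.normal(0.0, 0.5, shape)     # two exposures with independent noise
exp2 = static + rng.normal(0.0, 0.5, shape)

us_variance = cross_covariance(exp1, exp2)      # recovers ~variance of the static field
noise_only = cross_covariance(rng.normal(size=shape),
                              rng.normal(size=shape))  # consistent with zero
```

In the paper this measured variance then feeds a correction term added to the Poisson moment covariance matrix used in the weak-lensing analysis.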
The sensitivity of searches for astrophysical transients in data from LIGO is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate that accidental coincidence across multiple detectors is non-negligible. Furthermore, non-Gaussian noise artifacts typically dominate over the background contributed from stationary noise. These glitches can easily be confused for transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply Machine Learning Algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. The number of auxiliary-channel parameters describing these disturbances may also be extremely large, an area where MLAs are particularly well-suited. We demonstrate the feasibility and applicability of three very different MLAs: Artificial Neural Networks, Support Vector Machines, and Random Forests. These classifiers identify and remove a substantial fraction of the glitches present in two very different data sets: four weeks of LIGO's fourth science run and one week of LIGO's sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth science run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar limiting performance, suggesting that most of the useful information currently contained in the auxiliary channel parameters we extract is already being used.
We present a machine learning (ML) based method for automated detection of Gamma-Ray Burst (GRB) candidate events in the range 60 keV - 250 keV from the AstroSat Cadmium Zinc Telluride Imager data. We use density-based spatial clustering to detect excess power and carry out an unsupervised hierarchical clustering across all such events to identify the different light curves present in the data. This representation helps us understand the instrument's sensitivity to the various GRB populations and identify the major non-astrophysical noise artefacts present in the data. We use Dynamic Time Warping (DTW) to carry out template matching, which ensures the morphological similarity of the detected events with known typical GRB light curves. DTW alleviates the need for a dense template repository often required in matched-filtering-like searches. The use of a similarity metric facilitates outlier detection suitable for capturing previously unmodelled events. We briefly discuss the characteristics of 35 long GRB candidates detected using the pipeline and show that with minor modifications such as adaptive binning, the method is also sensitive to short GRB events. Augmenting the existing data analysis pipeline with such ML capabilities alleviates the need for extensive manual inspection, enabling quicker response to alerts received from other observatories such as the gravitational-wave detectors.
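The key property of DTW exploited above, that one template matches pulses of varying duration, comes from its elastic time alignment, computed with a simple dynamic program. A minimal sketch of the classic recurrence (the light curves here are made-up toy sequences, not AstroSat data):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D light curves.
    The warping path may stretch or compress time, so a single GRB-like
    template matches pulses of different durations, unlike matched
    filtering, which needs a template per duration."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: match both, repeat a sample of a, repeat a sample of b.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A small DTW distance to any template in a sparse repository then flags a candidate, while large distances to all templates mark outliers worth inspecting as possibly unmodelled events.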