
Studying deep convolutional neural networks with hexagonal lattices for imaging atmospheric Cherenkov telescope event reconstruction

Added by Daniel Nieto
Publication date: 2019
Field: Physics
Language: English





Deep convolutional neural networks (DCNs) are a promising machine learning technique for reconstructing events recorded by imaging atmospheric Cherenkov telescopes (IACTs), but they require optimization to reach full performance. One of the most pressing challenges is processing raw images captured by cameras built from hexagonal lattices of photomultipliers, a layout common among IACT cameras that differs topologically from the square lattices DCN models conventionally expect as input. Strategies for tackling this challenge range from converting the hexagonal lattices to square lattices by means of oversampling or interpolation, to implementing hexagonal convolutional kernels. In this contribution we present a comparison of several of those strategies, using DCN models trained on simulated IACT data.
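To make the lattice-conversion strategies concrete, the sketch below resamples a toy hexagonal camera image onto a square grid with SciPy. This is only an illustrative stand-in for the oversampling/interpolation step, not the pipeline used in the paper; the pixel geometry, grid size, and fake shower image are all invented.

import numpy as np
from scipy.interpolate import griddata

# Toy hexagonal lattice: axial coordinates mapped to Cartesian pixel centres.
# (Invented geometry; a real IACT camera supplies its own pixel positions.)
q, r = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))
x = (q + 0.5 * r).ravel()                # hexagonal row offset along x
y = (np.sqrt(3) / 2 * r).ravel()         # vertical spacing of hex rows
values = np.exp(-(x**2 + y**2) / 20.0)   # fake shower image, one value per PMT

# Resample onto a regular square grid (the interpolation strategy);
# method="nearest" instead gives a crude oversampling-like remapping.
gx, gy = np.mgrid[x.min():x.max():64j, y.min():y.max():64j]
square_img = griddata((x, y), values, (gx, gy), method="linear", fill_value=0.0)

print(square_img.shape)   # (64, 64): ready for a square-kernel DCN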



Related research

The KM3NeT research infrastructure is currently under construction at two locations in the Mediterranean Sea. The KM3NeT/ORCA water-Cherenkov neutrino detector off the French coast will instrument several megatons of seawater with photosensors. Its main objective is the determination of the neutrino mass ordering. This work aims at demonstrating the general applicability of deep convolutional neural networks to neutrino telescopes, using simulated datasets for the KM3NeT/ORCA detector as an example. To this end, the networks are employed to achieve reconstruction and classification tasks that constitute an alternative to the analysis pipeline presented for KM3NeT/ORCA in the KM3NeT Letter of Intent. They are used to infer event reconstruction estimates for the energy, the direction, and the interaction point of incident neutrinos. The spatial distribution of Cherenkov light generated by charged particles induced in neutrino interactions is classified as shower- or track-like, and the main background processes associated with the detection of atmospheric neutrinos are recognized. Performance comparisons to machine-learning classification and maximum-likelihood reconstruction algorithms previously developed for KM3NeT/ORCA are provided. It is shown that this application of deep convolutional neural networks to simulated datasets for a large-volume neutrino telescope yields competitive reconstruction results and performance improvements with respect to classical approaches.
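As an illustration of how such a multi-task setup could look, here is a minimal Keras sketch with invented shapes and layer sizes (not the KM3NeT/ORCA code): a shared convolutional trunk over a binned 3-D hit histogram feeds both a track/shower classification head and an energy-regression head.

import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical input: a 3-D histogram of PMT hits (x, y, z bins, 1 channel).
inp = layers.Input(shape=(16, 16, 18, 1))
h = layers.Conv3D(32, 3, activation="relu", padding="same")(inp)
h = layers.MaxPooling3D(2)(h)
h = layers.Conv3D(64, 3, activation="relu", padding="same")(h)
h = layers.GlobalAveragePooling3D()(h)

# Two task heads sharing the convolutional trunk.
track_vs_shower = layers.Dense(1, activation="sigmoid", name="topology")(h)
log_energy = layers.Dense(1, name="log_energy")(h)

model = tf.keras.Model(inp, [track_vs_shower, log_energy])
model.compile(optimizer="adam",
              loss={"topology": "binary_crossentropy", "log_energy": "mse"})
model.summary()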
R.D. Parsons, S. Ohm (2019)
In this work, we present a new, high-performance algorithm for background rejection in imaging atmospheric Cherenkov telescopes. We build on the machine-learning techniques already popular in gamma-ray astronomy by applying the latest developments, namely recurrent and convolutional neural networks, to the background-rejection problem. These techniques address some of the key challenges encountered in the currently implemented algorithms and significantly increase background-rejection performance at all energies. We apply them to the H.E.S.S. telescope array, first testing their performance on simulated data and then applying the analysis to two well-known gamma-ray sources. With real observational data we find significantly improved performance over the current standard methods, with a 20-25% reduction in the background rate when applying the recurrent-neural-network analysis. Importantly, we also find that the convolutional-neural-network results depend strongly on the sky brightness in the source region, which has important implications for the future implementation of this method in Cherenkov telescope analysis.
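One plausible reading of the combined recurrent-plus-convolutional approach, sketched below with invented shapes (this is not the H.E.S.S. implementation): a shared CNN encodes each telescope image and a recurrent layer aggregates the per-telescope features into a single gamma/hadron score.

import tensorflow as tf
from tensorflow.keras import layers

# Per-telescope image encoder (shapes are invented placeholders).
img = layers.Input(shape=(48, 48, 1))
f = layers.Conv2D(16, 3, activation="relu")(img)
f = layers.MaxPooling2D(2)(f)
f = layers.Conv2D(32, 3, activation="relu")(f)
f = layers.GlobalAveragePooling2D()(f)
encoder = tf.keras.Model(img, f)

# A sequence of 4 telescope images; TimeDistributed applies the shared
# encoder to each, and an LSTM aggregates them into one background score.
event = layers.Input(shape=(4, 48, 48, 1))
seq = layers.TimeDistributed(encoder)(event)
agg = layers.LSTM(64)(seq)
score = layers.Dense(1, activation="sigmoid")(agg)

model = tf.keras.Model(event, score)
model.compile(optimizer="adam", loss="binary_crossentropy")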
We present a sophisticated gamma-ray likelihood reconstruction technique for imaging atmospheric Cherenkov telescopes. The technique is based on comparing the raw Cherenkov camera pixel images of a photon-induced atmospheric particle shower with predictions from a semi-analytical model. The approach was initiated by the CAT experiment in the 1990s and has been further developed through a new fit algorithm based on log-likelihood minimisation using all pixels in the camera, a precise treatment of night-sky background noise, the use of stereoscopy, and the introduction of the first interaction depth as a parameter of the model. The reconstruction technique provides more precise direction and energy reconstruction of the photon-induced shower than other techniques in use, together with better gamma efficiency, especially at low energies, as well as improved background rejection. For data taken with the H.E.S.S. experiment, the technique yielded a factor of ~2 better sensitivity than the H.E.S.S. standard reconstruction techniques based on second moments of the camera images (the Hillas parameter technique).
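For reference, a per-pixel likelihood of the kind used in such model analyses, written here from the general description above (the exact form in the paper may differ), convolves Poisson photoelectron statistics with Gaussian pedestal noise of width \sigma_p and single-photoelectron width \sigma_\gamma:

P(s \mid \mu, \sigma_p, \sigma_\gamma) = \sum_{n=0}^{\infty} \frac{\mu^{n} e^{-\mu}}{n!}\, \frac{1}{\sqrt{2\pi\left(\sigma_p^{2} + n\,\sigma_\gamma^{2}\right)}}\, \exp\!\left[-\frac{(s - n)^{2}}{2\left(\sigma_p^{2} + n\,\sigma_\gamma^{2}\right)}\right]

where s is the measured pixel signal and \mu the expected intensity. The fit then minimises -2 \sum_{\mathrm{pixels}} \ln P over the shower parameters (direction, energy, impact point, first interaction depth).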
Fourier-based wavefront sensors, such as the Pyramid Wavefront Sensor (PWFS), are the current preference for high contrast imaging due to their high sensitivity. However, these wavefront sensors have intrinsic nonlinearities that constrain the range where conventional linear reconstruction methods can be used to accurately estimate the incoming wavefront aberrations. We propose to use Convolutional Neural Networks (CNNs) for the nonlinear reconstruction of the wavefront sensor measurements. It is demonstrated that a CNN can be used to accurately reconstruct the nonlinearities in both simulations and a lab implementation. We show that solely using a CNN for the reconstruction leads to suboptimal closed loop performance under simulated atmospheric turbulence. However, it is demonstrated that using a CNN to estimate the nonlinear error term on top of a linear model results in an improved effective dynamic range of a simulated adaptive optics system. The larger effective dynamic range results in a higher Strehl ratio under conditions where the nonlinear error is relevant. This will allow the current and future generation of large astronomical telescopes to work in a wider range of atmospheric conditions and therefore reduce costly downtime of such facilities.
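The "linear model plus CNN error term" idea amounts to a residual correction on top of a classical matrix reconstructor. Below is a minimal NumPy sketch with invented dimensions and a placeholder standing in for the trained network.

import numpy as np

def nonlinear_correction(frame):
    # Stand-in for the trained CNN: in the scheme described above, a network
    # maps the raw wavefront-sensor measurements to the nonlinear error term.
    # (Placeholder returning zeros; a real network would be trained.)
    return np.zeros(10)

def reconstruct(slopes, R_linear):
    """Hybrid estimate: linear reconstructor output plus CNN-predicted residual."""
    modes_linear = R_linear @ slopes            # classical linear reconstruction
    residual = nonlinear_correction(slopes)     # nonlinear correction term
    return modes_linear + residual

# Toy usage with invented dimensions (10 modes, 100 slope measurements).
R = np.random.randn(10, 100) * 0.01
s = np.random.randn(100)
print(reconstruct(s, R))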
Type-Ia supernovae (SNeIa) play a significant role in exploring the expansion history of the Universe, since they are the best-known standard candles with which we can accurately measure distances to objects. Finding large samples of SNeIa and investigating their detailed characteristics have become important issues in cosmology and astronomy. Existing methods rely on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, this inevitably requires multi-epoch observations and complex luminance measurements. In this work, we present a novel method for classifying SNeIa from single-epoch observation images alone, without any complex measurements, by effectively integrating state-of-the-art computer-vision methodology into the standard photometric approach. Our method first builds a convolutional neural network that estimates the luminance of supernovae from telescope images, and then constructs another neural network for classification, where the estimated luminance and observation dates are used as features. Both neural networks are integrated into a single deep neural network that classifies SNeIa directly from observation images. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with multi-epoch observations.
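A toy sketch of the two-stage architecture described above, with every shape and layer size invented: a luminance-estimating CNN whose output, together with the observation date, feeds a classifier, joined into one differentiable network.

import tensorflow as tf
from tensorflow.keras import layers

# Stage 1: CNN estimates luminance from a small image cutout.
cutout = layers.Input(shape=(32, 32, 1))
h = layers.Conv2D(16, 3, activation="relu")(cutout)
h = layers.MaxPooling2D(2)(h)
h = layers.Flatten()(h)
luminance = layers.Dense(1, name="luminance")(h)

# Stage 2: classifier combines the estimated luminance with the observation
# date, so the whole pipeline trains end to end as a single network.
obs_date = layers.Input(shape=(1,))
merged = layers.Concatenate()([luminance, obs_date])
is_snia = layers.Dense(1, activation="sigmoid", name="is_snia")(merged)

model = tf.keras.Model([cutout, obs_date], is_snia)
model.compile(optimizer="adam", loss="binary_crossentropy")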