
Artificial Neural Network based gamma-hadron segregation methodology for TACTIC telescope

Posted by: Vir Dhar
Publication date: 2013
Research field: Physics
Paper language: English





The sensitivity of a Cherenkov imaging telescope depends strongly on the rejection of cosmic-ray background events. Methods that have been used to segregate source gamma-rays from background cosmic rays include Supercuts/Dynamic Supercuts, the maximum-likelihood classifier, kernel methods, fractals, wavelets and the random forest. While the segregation potential of the neural-network classifier has been investigated in the past with modest results, the main purpose of this paper is to study the gamma/hadron segregation potential of various ANN algorithms, some of which are expected to offer better convergence and lower error than the commonly used backpropagation algorithm. The results obtained suggest that the Levenberg-Marquardt method outperforms all other methods in the ANN domain. Applying this ANN algorithm to $\sim$ 101.44 h of Crab Nebula data collected by the TACTIC telescope during Nov. 10, 2005 - Jan. 30, 2006 yields an excess of $\sim$ (1141 $\pm$ 106) events with a statistical significance of $\sim$ 11.07$\sigma$, as against an excess of $\sim$ (928 $\pm$ 100) events with a statistical significance of $\sim$ 9.40$\sigma$ obtained with the Dynamic Supercuts selection methodology. The main advantage accruing from the ANN methodology is that it is more effective at higher energies, and this has allowed us to re-determine the Crab Nebula energy spectrum in the energy range $\sim$ 1-24 TeV.
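The paper's TACTIC analysis code is not reproduced here. As a minimal sketch of the core idea, the toy below fits a one-hidden-layer network with SciPy's MINPACK Levenberg-Marquardt least-squares solver to separate two synthetic classes; the two-feature samples, network size and class geometry are invented stand-ins for the Hillas image parameters the paper actually uses:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy stand-ins for image-parameter vectors of the two event classes
n = 200
gamma = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(n, 2))   # "gamma-like"
hadron = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(n, 2))    # "hadron-like"
X = np.vstack([gamma, hadron])
y = np.concatenate([np.zeros(n), np.ones(n)])

H = 4  # hidden units

def unpack(w):
    i = 0
    W1 = w[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    return W1, b1, W2, w[i]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def residuals(w):
    # LM minimises the sum of squared residuals (network output - label)
    return forward(w, X) - y

w0 = rng.normal(scale=0.5, size=2 * H + H + H + 1)
fit = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt
acc = np.mean((forward(fit.x, X) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The same second-order trust-region behaviour that makes LM converge faster than plain backpropagation on small networks is what the paper exploits; real analyses would of course validate on held-out simulated events rather than report training accuracy.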


Read also

We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts. The multi-dimensional samples consisting of data corresponding to the statistical and physical quantities from the coherent search pipeline are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability is improved by the artificial neural network in comparison to the conventional detection statistic. Therefore, this algorithm increases the distance at which a gravitational-wave signal could be observed in coincidence with a gamma-ray burst. In order to demonstrate the performance, we also evaluate a few seconds of gravitational-wave data segment using the trained networks and obtain the false alarm probability. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to the short gamma-ray bursts.
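The key metric in this abstract, classification efficiency at a fixed false-alarm probability, can be sketched independently of any particular network. Assuming only that a classifier assigns scores to background and signal samples (the Gaussian score distributions below are invented for illustration), the threshold and efficiency follow directly:

```python
import numpy as np

rng = np.random.default_rng(2)
bg = rng.normal(0.0, 1.0, 10_000)    # classifier scores on background artifacts
sig = rng.normal(3.0, 1.0, 10_000)   # scores on simulated signals (hypothetical separation)

fap = 0.01
# Threshold such that only 1% of background samples score above it
thresh = np.quantile(bg, 1.0 - fap)
# Detection efficiency: fraction of signals that survive the same cut
efficiency = np.mean(sig > thresh)
print(f"threshold = {thresh:.2f}, efficiency at FAP {fap} = {efficiency:.2f}")
```

A better classifier shifts the signal score distribution away from the background one, raising the efficiency at the same false-alarm probability, which is exactly the comparison the abstract draws between the ANN and the conventional detection statistic.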
118 - C. K. Bhat 2012
A preliminary flux estimate of various cosmic-ray constituents, based on the atmospheric Cerenkov light flux of extensive air showers using a fractal and wavelet analysis approach, is proposed. Using a Monte-Carlo simulated database of Cerenkov images recorded by the TACTIC telescope, we show that one of the wavelet parameters (wavelet dimension B6) provides $\sim$ 90% segregation of the simulated events in terms of the primary mass. We use these results to obtain a preliminary estimate of the primary flux for various cosmic-ray primaries above 5 TeV energy. The simulation-based flux estimates of the primary mass as recorded by the TACTIC telescope are in good agreement with the experimentally determined values.
The BL Lac object H1426+428 ($z \equiv 0.129$) is an established source of TeV $\gamma$-rays, and detections of these photons from this object also have important implications for estimating the Extragalactic Background Light (EBL), in addition to the understanding of the particle-acceleration and $\gamma$-ray production mechanisms in AGN jets. We have observed this source for about 244 h in 2004, 2006 and 2007 with the TACTIC $\gamma$-ray telescope located at Mt. Abu, India. Detailed analysis of these data does not indicate the presence of any statistically significant TeV $\gamma$-ray signal from the source direction. Accordingly, we have placed an upper limit of $\leq 1.18\times10^{-12}$ photons cm$^{-2}$ s$^{-1}$ on the integrated $\gamma$-ray flux at the 3$\sigma$ significance level.
The Cherenkov Telescope Array (CTA) will be the world's leading ground-based gamma-ray observatory, allowing us to study very high energy phenomena in the Universe. CTA will produce huge data sets, of the order of petabytes, and the challenge is to find better alternative data analysis methods to the already existing ones. Machine learning algorithms, like deep learning techniques, give encouraging results in this direction. In particular, convolutional neural network methods on images have proven effective in pattern recognition and produce data representations that can achieve satisfactory predictions. We test the use of convolutional neural networks to discriminate signal from background images with high rejection factors and to provide reconstruction parameters from gamma-ray events. The networks are trained and evaluated on artificial data sets of images. The results show that neural networks trained with simulated data can be useful to extract gamma-ray information. Such networks would help us to make the best use of the large quantities of real data coming in the next decades.
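The core operation a convolutional network applies to camera images can be sketched in a few lines. This NumPy toy (the 8×8 "image" and Sobel kernel are illustrative, not CTA data) shows how a single learned kernel turns an image into a feature map that responds to local structure such as the rim of a compact shower blob:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core op of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0                 # compact bright patch standing in for a shower image
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)  # vertical-edge detector
fmap = conv2d(img, sobel_x)
print(fmap.shape)                   # (6, 6): one feature map per kernel
```

In a real CNN, stacks of such kernels are learned from labelled simulations rather than hand-chosen, and the resulting feature maps feed the classification and reconstruction heads the abstract describes.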
321 - Kairan Sun, Xu Wei, Gengtao Jia 2015
Faced with a continuously increasing scale of data, the original back-propagation neural network based machine learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload. This project is mainly focused on the solution to the issues above, combining a deep learning algorithm with a cloud computing platform to deal with large-scale data. A MapReduce-based handwriting character recognizer will be designed in this project to verify the efficiency improvement this mechanism achieves on training and on practical large-scale data. Careful discussion and experiments are developed to illustrate how the deep learning algorithm trains handwritten-digit data, how MapReduce is implemented on a deep learning neural network, and why this combination accelerates computation. Besides performance, scalability and robustness are discussed in this report as well. Our system comes with two demonstration programs that visually illustrate our handwritten digit recognition/encoding application.
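The MapReduce pattern the abstract applies to network training reduces, at its simplest, to computing per-shard gradients in parallel (map) and averaging them (reduce). The single-process sketch below illustrates this on a linear least-squares model; the shard count, learning rate and synthetic data are invented for illustration, and a real deployment would run the map step on separate workers:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(1000, 2))
y = X @ w_true                      # noiseless synthetic targets

def shard_gradient(w, Xs, ys):
    """Map step: one worker's MSE gradient on its data shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

w = np.zeros(2)
shards = np.array_split(np.arange(len(y)), 4)   # 4 simulated workers
for _ in range(200):
    grads = [shard_gradient(w, X[s], y[s]) for s in shards]  # map
    w -= 0.1 * np.mean(grads, axis=0)                        # reduce + update
print(w.round(2))
```

Because each shard gradient is normalised by its own shard size and the shards are equal, the averaged gradient equals the full-batch gradient, so the distributed update converges to the same solution as serial training while the expensive map step parallelises across machines.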