
Classification of bad pixels of the Hawaii-2RG detector of the ASTROnomical NearInfraRed CAMera

Published by: Nicolai Shatsky
Publication date: 2020
Research field: Physics
Paper language: English





ASTRONIRCAM is an infrared camera-spectrograph installed at the 2.5-meter telescope of the CMO SAI. The instrument is equipped with a HAWAII-2RG array. A classification of the bad pixels of the ASTRONIRCAM detector is proposed, based on histograms of the differences between consecutive non-destructive readouts of a flat field. Bad pixels fall into five groups: hot (saturated on the first readout), warm (signal accumulation rate more than 5 standard deviations above the mean), cold (rate more than 5 standard deviations below the mean), dead (no signal accumulation), and inverse (negative signal accumulation in the first readouts). Normal pixels account for 99.6% of the ASTRONIRCAM detector. We investigated how the number of bad pixels depends on the number of cooldown cycles of the instrument. While the hot pixels remain the same, pixels of the other types may migrate between groups, although the number of pixels in each group stays roughly constant. The mean and variance of the number of bad pixels in each group, as well as the transitions between groups, do not differ noticeably between normal and slow cooldowns.
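As an illustration of the classification scheme described above, the sketch below labels each pixel from a cube of non-destructive flat-field readouts. The array names, the saturation level, and the dead-pixel tolerance are assumptions made for illustration; the 5-sigma thresholds follow the criteria quoted in the abstract.

```python
# A minimal sketch of the bad-pixel classification, assuming a cube of
# non-destructive readouts `ramp` with shape (n_readouts, ny, nx) and an
# assumed saturation level; names and tolerances are illustrative only.
import numpy as np

SATURATION = 65000  # ADU, assumed saturation level (illustrative)

def classify_pixels(ramp: np.ndarray) -> np.ndarray:
    """Return an integer map: 0 normal, 1 hot, 2 warm, 3 cold, 4 dead, 5 inverse."""
    diffs = np.diff(ramp, axis=0)        # signal accumulated between readouts
    rate = diffs.mean(axis=0)            # mean accumulation rate per pixel
    mu, sigma = rate.mean(), rate.std()

    labels = np.zeros(ramp.shape[1:], dtype=np.uint8)
    labels[rate > mu + 5 * sigma] = 2               # warm: rate > mean + 5 sigma
    labels[rate < mu - 5 * sigma] = 3               # cold: rate < mean - 5 sigma
    labels[np.isclose(rate, 0.0, atol=1e-3)] = 4    # dead: no accumulation
    labels[diffs[0] < 0] = 5                        # inverse: negative first step
    labels[ramp[0] >= SATURATION] = 1               # hot: saturated on 1st readout
    return labels                                   # later assignments take priority
```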




Read also

A fraction of the XMM-Newton/EPIC FOV is obscured by dysfunctional (i.e. bad) pixels. The fraction varies between different EPIC instruments in a given observation. These complications affect the analysis of extended X-ray sources observed with XMM-Newton/EPIC and the consequent scientific interpretation of the results. For example, the accuracy of the widely used cosmological probe of the gas mass of clusters of galaxies depends on the accuracy of the procedure of removing the obscuration effect from the measured flux. The Science Analysis Software (SAS) includes an option for recovering the lost fraction of the flux measured by a primary instrument by utilising a supplementary image of the same source. The correction may be accurate if the supplementary image is minimally obscured at the locations of the bad pixels of the primary instrument. This can be achieved e.g. by using the observation-based MOS2 image for correcting the pn flux, or by using a synthetic model image. By utilising a sample of 27 galaxy cluster observations we evaluated the accuracy of the recovery method based on observed images, as implemented in SAS 18.0.0. We found that the accuracy of the recovered total flux in the 0.5-7.0 keV band in the full geometric area within the central r = 6 arcmin is better than 0.1% on average, while in some individual cases the recovered flux may be uncertain by ~1%.
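As a rough illustration of the recovery idea (not the actual SAS implementation), the sketch below estimates the flux lost to the primary instrument's bad pixels from a supplementary image of the same source, scaled by the flux ratio measured over pixels exposed in both instruments. The function and argument names are assumptions.

```python
# Schematic flux recovery using a supplementary image; all names and the
# simple scaling scheme are assumptions for illustration only.
import numpy as np

def recover_total_flux(primary, primary_good, supplementary, supp_good):
    """primary/supplementary: 2-D count images; *_good: boolean exposure masks."""
    both_good = primary_good & supp_good
    scale = primary[both_good].sum() / supplementary[both_good].sum()

    # Flux the primary actually measured, plus an estimate for its bad pixels
    # taken from the (scaled) supplementary image where it is exposed.
    lost = ~primary_good & supp_good
    return primary[primary_good].sum() + scale * supplementary[lost].sum()
```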
We present a new method of interpolation for estimating pixel brightness in astronomical images. Our new method is simple and easily implementable. We compare it with the widely used linear interpolation and other interpolation algorithms using one thousand astronomical images obtained from the Sloan Digital Sky Survey. The comparison shows that our method improves bad-pixel brightness estimation, with a mean error four times lower than that of the presently most popular linear interpolation, and performs better than any other examined method. The presented idea is flexible and can also be applied to presently used and future interpolation methods. The proposed method is especially useful for image reduction in large sky surveys but can also be applied to single-image correction.
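The paper's own interpolation scheme is not described here, so as a point of reference the sketch below implements the linear-interpolation baseline it is compared against, in a simple row-wise form; the exact variant used in the comparison is an assumption.

```python
# Row-wise linear interpolation over bad pixels, as a baseline sketch.
import numpy as np

def linear_interpolate_bad_pixels(image: np.ndarray, bad_mask: np.ndarray) -> np.ndarray:
    """Replace masked pixels row by row using 1-D linear interpolation."""
    fixed = image.astype(float).copy()
    for row, mask_row in zip(fixed, bad_mask):
        if mask_row.any() and not mask_row.all():
            good = np.flatnonzero(~mask_row)
            row[mask_row] = np.interp(np.flatnonzero(mask_row), good, row[good])
    return fixed
```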
In the new era of very large telescopes, where data is crucial to expand scientific knowledge, we have witnessed many deep learning applications for the automatic classification of lightcurves. Recurrent neural networks (RNNs) are one of the models used for these applications, and the LSTM unit stands out as an excellent choice for representing long time series. In general, RNNs assume observations at discrete times, which may not suit the irregular sampling of lightcurves. A traditional technique to address irregular sequences consists of adding the sampling time to the network's input, but this is not guaranteed to capture sampling irregularities during training. Alternatively, the Phased LSTM unit was created to address this problem by updating its state using the sampling times explicitly. In this work, we study the effectiveness of LSTM- and Phased LSTM-based architectures for the classification of astronomical lightcurves. We use seven catalogs containing periodic and non-periodic astronomical objects. Our findings show that the LSTM outperformed the Phased LSTM on 6 of the 7 datasets; however, the combination of both units enhances the results on all datasets.
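The "traditional technique" mentioned above, feeding the sampling-time differences to the network alongside the brightness values, can be sketched as follows. PyTorch, a single-band lightcurve, and the layer sizes are all assumptions for illustration.

```python
# Minimal sketch: delta-times concatenated with magnitudes before a standard LSTM.
import torch
import torch.nn as nn

class LightcurveLSTM(nn.Module):
    def __init__(self, hidden: int = 64, n_classes: int = 7):
        super().__init__()
        # input features per step: (magnitude, delta_t since previous point)
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, mag: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        dt = torch.diff(times, dim=1, prepend=times[:, :1])   # (batch, length)
        x = torch.stack([mag, dt], dim=-1)                    # (batch, length, 2)
        _, (h_n, _) = self.lstm(x)                            # last hidden state
        return self.head(h_n[-1])                             # class logits
```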
Next-generation surveys like the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory will generate orders of magnitude more discoveries of transients and variable stars than previous surveys. To prepare for this data deluge, we developed the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC), a competition which aimed to catalyze the development of robust classifiers under LSST-like conditions of a non-representative training set for a large photometric test set of imbalanced classes. Over 1,000 teams participated in PLAsTiCC, which was hosted on the Kaggle data science competition platform between Sep 28, 2018 and Dec 17, 2018, ultimately identifying three winners in February 2019. Participants produced classifiers employing a diverse set of machine learning techniques including hybrid combinations and ensemble averages of a range of approaches, among them boosted decision trees, neural networks, and multi-layer perceptrons. The strong performance of the top three classifiers on Type Ia supernovae and kilonovae represents a major improvement over the current state of the art within astronomy. This paper summarizes the most promising methods and evaluates their results in detail, highlighting future directions both for classifier development and simulation needs for a next-generation PLAsTiCC data set.
The exploitation of present and future synoptic (multi-band and multi-epoch) surveys requires an extensive use of automatic methods for data processing and data interpretation. In this work, using data extracted from the Catalina Real-Time Transient Survey (CRTS), we investigate the classification performance of some well-tested methods: Random Forest, MLPQNA (Multi Layer Perceptron with Quasi Newton Algorithm), and K-Nearest Neighbors, paying special attention to the feature-selection phase. To this end, several classification experiments were performed, namely: identification of cataclysmic variables, separation between galactic and extra-galactic objects, and identification of supernovae.
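A hedged sketch of this kind of experiment, a feature-selection step followed by a Random Forest classifier evaluated by cross-validation, is shown below. The feature matrix, the univariate selection criterion, and the hyperparameters are illustrative assumptions, not the paper's pipeline.

```python
# Feature selection + Random Forest, evaluated with k-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def evaluate(features: np.ndarray, labels: np.ndarray, k_best: int = 10) -> float:
    """Return mean cross-validated accuracy of a select-then-classify pipeline."""
    model = make_pipeline(
        SelectKBest(score_func=f_classif, k=k_best),   # keep the k most informative features
        RandomForestClassifier(n_estimators=300, random_state=0),
    )
    return cross_val_score(model, features, labels, cv=5).mean()
```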