
Transfer Learning in Automated Gamma Spectral Identification

Posted by Eric Moore
Publication date: 2020
Research language: English





The models and weights of previously trained convolutional neural networks (CNNs), created to perform automated isotopic classification of time-sequenced gamma-ray spectra, were used to provide source-domain knowledge for training on new domains of potential interest. The previous results were achieved solely with modeled spectral data. In this work we attempt to transfer the knowledge gained to the new, if similar, domain of solely measured data. The ability to train on modeled data and predict on measured data will be crucial to any successful data-driven approach in this problem space.
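
A minimal sketch of the transfer step described above, assuming a Keras workflow: the weights of a CNN trained on modeled spectra are reloaded, the feature-extraction layers are frozen, and only the final layers are re-trained on measured spectra. The file name, layer split, and data shapes are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: reuse a CNN trained on modeled gamma-ray spectra as the
# starting point for the measured-data domain. Names are placeholders.
from tensorflow import keras

# Load the previously trained source-domain model (placeholder path).
source_model = keras.models.load_model("cnn_modeled_spectra.h5")

# Freeze the convolutional feature extractor learned on modeled spectra;
# only the final classification layers adapt to measured data.
for layer in source_model.layers[:-2]:
    layer.trainable = False

source_model.compile(optimizer=keras.optimizers.Adam(1e-4),
                     loss="categorical_crossentropy",
                     metrics=["accuracy"])

# x_measured: measured time-sequenced spectra; y_measured: one-hot isotope labels.
# source_model.fit(x_measured, y_measured, epochs=10, validation_split=0.2)
```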


Read also

Pions constitute nearly $70\%$ of the final-state particles in ultra-high-energy collisions. They act as a probe to understand the statistical properties of Quantum Chromodynamics (QCD) matter, i.e. the Quark Gluon Plasma (QGP) created in such relativistic heavy-ion collisions (HIC). Apart from this, direct photons are the most versatile tools to study relativistic HIC. They are produced, by various mechanisms, during the entire space-time history of the strongly interacting system. Direct photons provide a measure of jet quenching when compared with other quark or gluon jets. The $\pi^{0}$ decay into two photons makes the identification of uncorrelated photons coming from other processes cumbersome in the electromagnetic calorimeter. We investigate the use of deep learning architectures for the reconstruction and identification of single- as well as multi-particle showers produced in the calorimeter by particles created in high-energy collisions. We utilize electromagnetic-shower data at the calorimeter cell level to train the network and show improvements in identification and characterization. These networks are fast and computationally inexpensive for particle-shower identification and reconstruction in current and future experiments at particle colliders.
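
A minimal sketch of the kind of cell-level shower classifier this abstract describes, assuming calorimeter cell energies arranged as a 2D grid; the grid size and class list (e.g. photon vs. pi0) are illustrative assumptions.

```python
# Hedged sketch: CNN classifying showers from per-cell energy deposits.
from tensorflow import keras
from tensorflow.keras import layers

def build_shower_classifier(grid=(32, 32), n_classes=2):
    inputs = keras.Input(shape=(*grid, 1))  # per-cell energy deposits
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # e.g. photon vs. pi0
    return keras.Model(inputs, outputs)

model = build_shower_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```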
Rapidly applying the effects of detector response to physics objects (e.g. electrons, muons, showers of particles) is essential in high energy physics. Currently available tools for the transformation from truth-level physics objects to reconstructed detector-level physics objects involve manually defining resolution functions. These resolution functions are typically derived in bins of variables that are correlated with the resolution (e.g. pseudorapidity and transverse momentum). This process is time consuming, requires manual updates when detector conditions change, and can miss important correlations. Machine learning offers a way to automate the process of building these truth-to-reconstructed object transformations and can capture complex correlations for any given set of input variables. Such machine learning algorithms, with sufficient optimization, could have a wide range of applications: improving phenomenological studies through a better detector representation, allowing for more efficient production of Geant4 simulations by only simulating events within an interesting part of phase space, and enabling studies of future experimental sensitivity to new physics.
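
A hypothetical sketch of such a learned truth-to-reconstructed transformation: a small network regresses reconstructed kinematics from truth-level ones in place of hand-binned resolution functions. The variable set and architecture are assumptions; a probabilistic output would be needed to model the full resolution spread.

```python
# Hedged sketch: network mapping truth-level to reconstructed kinematics.
from tensorflow import keras
from tensorflow.keras import layers

truth = keras.Input(shape=(4,))   # assumed variables: (pT, eta, phi, E) at truth level
h = layers.Dense(64, activation="relu")(truth)
h = layers.Dense(64, activation="relu")(h)
reco = layers.Dense(4)(h)         # predicted reconstructed (pT, eta, phi, E)

smearing_net = keras.Model(truth, reco)
smearing_net.compile(optimizer="adam", loss="mse")
# smearing_net.fit(truth_array, reco_array, epochs=20)  # paired truth/reco samples
```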
David Le, Minhaj Alam, Cham Yao (2019)
Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy (DR). Methods: A deep learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to re-train the CNN for robust OCTA classification. In order to demonstrate the feasibility of using this method for artificial intelligence (AI) screening of DR in clinical environments, the re-trained CNN was incorporated into a custom-developed GUI platform that can be readily operated by ophthalmic personnel. Results: With the last nine layers re-trained, the CNN architecture achieved the best performance for automated OCTA classification. The overall accuracy of the re-trained classifier for differentiating healthy, NoDR, and NPDR was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusion: With a transfer learning process to reuse the early layers for simple feature analysis and to re-train the upper layers for fine feature analysis, the CNN architecture VGG16 can be used for robust OCTA classification of healthy, NoDR, and NPDR eyes. Translational Relevance: OCTA can capture microvascular changes in early DR. A transfer learning process enables robust implementation of a convolutional neural network (CNN) for automated OCTA classification of DR.
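
A sketch of the stated setup (VGG16 with the last nine layers re-trained), assuming a standard Keras fine-tuning recipe; the input size, pooling head, and optimizer settings are assumptions rather than the paper's exact code.

```python
# Hedged sketch: VGG16 transfer learning with the last nine layers re-trained.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
for layer in base.layers[:-9]:
    layer.trainable = False   # keep early, generic feature layers fixed
for layer in base.layers[-9:]:
    layer.trainable = True    # re-train the last nine layers on OCTA images

x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(3, activation="softmax")(x)  # healthy, NoDR, NPDR
model = keras.Model(base.input, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```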
Gaussian process tomography (GPT) is a method for obtaining real-time tomographic reconstructions of the plasma emissivity profile in a tokamak, given some model for the underlying physical processes involved. Thanks to its Bayesian formalism, GPT can also be used to perform model selection, i.e., comparing different models and choosing the one with maximum evidence. However, the computations involved in this step may become slow for data with high dimensionality, especially when comparing the evidence for many different models. Using measurements collected by the ASDEX Upgrade Soft X-ray (SXR) diagnostic, we train a convolutional neural network (CNN) to map SXR tomographic projections to the corresponding GPT model whose evidence is highest. We then compare the network's results, and the time required to calculate them, with those obtained through the analytical Bayesian formalism. In addition, we use the network's classifications to produce tomographic reconstructions of the plasma emissivity profile, whose quality we evaluate by comparing their projections into measurement space with the existing measurements themselves.
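
A hedged sketch of the classification step: a 1D CNN maps an SXR projection vector to the index of the GPT model with the highest evidence. The channel count and number of candidate models are placeholders, not values from the ASDEX Upgrade diagnostic.

```python
# Hedged sketch: 1D CNN selecting the highest-evidence GPT model per projection.
from tensorflow import keras
from tensorflow.keras import layers

n_channels, n_models = 128, 4   # placeholder detector channels / candidate models
inputs = keras.Input(shape=(n_channels, 1))
x = layers.Conv1D(16, 5, activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(32, 5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(n_models, activation="softmax")(x)  # model-index probabilities

selector = keras.Model(inputs, outputs)
selector.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
```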
Deep learning has emerged as a technique of choice for rapid feature extraction across imaging disciplines, allowing rapid conversion of data streams into spatial or spatiotemporal arrays of features of interest. However, applications of deep learning in experimental domains are often limited by out-of-distribution drift between experiments, where a network trained for one set of imaging conditions becomes sub-optimal for different ones. This limitation is particularly stringent in the quest for automated experiment settings, where retraining or transfer learning becomes impractical due to the need for human intervention and the associated latencies. Here we explore the reproducibility of deep learning for feature extraction in atom-resolved electron microscopy and introduce workflows based on ensemble learning and iterative training to greatly improve feature detection. This approach allows incorporating uncertainty quantification into the deep learning analysis and also enables rapid automated experimental workflows, where retraining of the network to compensate for out-of-distribution drift due to subtle changes in imaging conditions is replaced by human or programmatic selection of networks from the ensemble. This methodology can be further applied to machine learning workflows in other imaging areas, including optical and chemical imaging.
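
A minimal sketch of the ensemble idea, assuming per-pixel feature (atom) detection: several identically shaped networks are trained from different random seeds, and the spread of their predictions serves as an uncertainty map. All shapes and names are placeholders.

```python
# Hedged sketch: ensemble of detectors whose prediction spread quantifies uncertainty.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_member(seed, shape=(64, 64, 1)):
    tf.random.set_seed(seed)   # different initialization per ensemble member
    inputs = keras.Input(shape=shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel atom-presence map
    return keras.Model(inputs, outputs)

ensemble = [build_member(seed) for seed in range(5)]
# After training each member on the same labeled images:
# preds = np.stack([net.predict(images) for net in ensemble])
# mean_map, uncertainty_map = preds.mean(axis=0), preds.std(axis=0)
```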