
Machine Learning Lie Structures & Applications to Physics

Published by Suvajit Majumder
Publication date: 2020
Research field: Informatics Engineering
Language: English





Classical and exceptional Lie algebras and their representations are among the most important tools in the analysis of symmetry in physical systems. In this letter we show that the computation of tensor products and branching rules of irreducible representations is machine-learnable, and can achieve relative speed-ups of orders of magnitude in comparison to non-ML algorithms.
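
As an illustration of this framing, the sketch below treats a tensor-product decomposition as a supervised-learning target for the simplest case, SU(2), where the number of irreducible components of j1 (x) j2 is known exactly to be 2 min(j1, j2) + 1. The choice of algebra, features, and regressor here is an assumption for demonstration only, not the setup used in the letter.

    # Toy supervised-learning setup: predict the number of irreducible
    # components in the SU(2) tensor product j1 (x) j2, whose exact value
    # is 2*min(j1, j2) + 1 by the Clebsch-Gordan series. Illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    spins = rng.integers(0, 21, size=(5000, 2)) / 2.0     # half-integer spins 0..10
    n_irreps = 2 * spins.min(axis=1) + 1                  # exact answer from rep theory

    X_train, X_test, y_train, y_test = train_test_split(
        spins, n_irreps, test_size=0.2, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("mean absolute error:", np.abs(model.predict(X_test) - y_test).mean())

A model trained this way can be queried far faster than recomputing the decomposition directly, which is the kind of speed-up the letter quantifies for higher-rank and exceptional algebras.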


Read also

We describe how simple machine learning methods successfully predict geometric properties from Hilbert series (HS). Regressors predict embedding weights in projective space to $\sim 1$ mean absolute error, whilst classifiers predict dimension and Gorenstein index to $>90\%$ accuracy with $\sim 0.5\%$ standard error. Binary random forest classifiers managed to distinguish whether the underlying HS describes a complete intersection with high accuracies exceeding $95\%$. Neural networks (NNs) exhibited success identifying HS from a Gorenstein ring to the same order of accuracy, whilst generation of fake HS proved trivial for NNs to distinguish from those associated to the three-dimensional Fano varieties considered.
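
A natural first step in such a setup is turning a Hilbert series into a fixed-length feature vector of power-series coefficients. The sketch below does this for one hypothetical complete-intersection-type series; the actual series, truncation order, and downstream models used in the paper are assumptions here.

    # Turn a rational Hilbert series into an ML feature vector by expanding
    # it as a power series and keeping the leading coefficients. The series
    # and cutoff below are hypothetical placeholders.
    import sympy as sp

    t = sp.symbols('t')
    hs = (1 - t**4) / ((1 - t)**3 * (1 - t**2))    # example complete-intersection form

    n_coeffs = 12
    poly = sp.series(hs, t, 0, n_coeffs).removeO()
    features = [int(poly.coeff(t, k)) for k in range(n_coeffs)]
    print(features)    # coefficient vector fed to a regressor or classifier
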
We investigate a new structure for machine learning classifiers applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
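
A minimal sketch of this idea, assuming a single toy observable and a hypothetical signal mass as the physics parameter, is given below: the parameter is simply appended to the input features, so one classifier can be evaluated at mass points it was never trained on.

    # Parameterized classifier sketch: append the theory parameter (a toy
    # "mass") to the measured features so a single network interpolates
    # across parameter values. Data and network choices are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 6000
    mass = rng.choice([500.0, 750.0, 1000.0], size=n)   # training mass points
    label = rng.integers(0, 2, size=n)                  # 1 = signal, 0 = background
    # Toy observable: signal peaks near its generated mass, background is flat
    obs = np.where(label == 1,
                   rng.normal(mass, 50.0),
                   rng.uniform(200.0, 1500.0, size=n))
    X = np.column_stack([obs, mass]) / 1000.0           # crude rescaling of inputs

    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    clf.fit(X, label)

    # Query the same network at an intermediate mass absent from training
    probe = np.column_stack([[870.0, 400.0], [875.0, 875.0]]) / 1000.0
    print(clf.predict_proba(probe)[:, 1])               # signal probabilities
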
Dimitri Bourilkov, 2019
The many ways in which machine and deep learning are transforming the analysis and simulation of data in particle physics are reviewed. The main methods based on boosted decision trees and various types of neural networks are introduced, and cutting-edge applications in the experimental and theoretical/phenomenological domains are highlighted. After describing the challenges in the application of these novel analysis techniques, the review concludes by discussing the interactions between physics and machine learning as a two-way street enriching both disciplines and helping to meet the present and future challenges of data-intensive science at the energy and intensity frontiers.
Daniel Ratner, 2020
This pair of CAS lectures gives an introduction for accelerator physics students to the framework and terminology of machine learning (ML). We start by introducing the language of ML through a simple example of linear regression, including a probabilistic perspective to introduce the concepts of maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation. We then apply the concepts to examples of neural networks and logistic regression. Next we introduce non-parametric models and the kernel method and give a brief introduction to two other machine learning paradigms, unsupervised and reinforcement learning. Finally we close with example applications of ML at a free-electron laser.
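
As a worked example of the MLE/MAP contrast described above, under the usual assumptions of Gaussian noise and, for MAP, a zero-mean Gaussian prior on the weights, the sketch below computes both closed-form solutions for a toy linear-regression problem; the data are synthetic.

    # MLE vs MAP for linear regression: with Gaussian noise the MLE weights
    # are the least-squares solution, and a zero-mean Gaussian prior on the
    # weights turns the MAP estimate into ridge regression.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])   # bias + one feature
    w_true = np.array([0.5, 2.0])
    y = X @ w_true + rng.normal(0.0, 0.3, 50)                    # noisy observations

    # MLE: w = (X^T X)^{-1} X^T y
    w_mle = np.linalg.solve(X.T @ X, X.T @ y)

    # MAP with prior w ~ N(0, tau^2 I): w = (X^T X + lam I)^{-1} X^T y,
    # where lam = sigma^2 / tau^2 plays the role of a ridge penalty
    lam = 1.0
    w_map = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

    print("MLE:", w_mle, " MAP:", w_map)
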
Based on the BioBricks standard, restriction synthesis is a novel catabolic iterative DNA synthesis method that utilizes endonucleases to synthesize a query sequence from a reference sequence. In this work, the reference sequence is built from shorter subsequences by classifying them as applicable or inapplicable for the synthesis method using three different machine learning methods: Support Vector Machines (SVMs), random forest, and Convolutional Neural Networks (CNNs). Before applying these methods to the data, a series of feature selection, curation, and reduction steps are applied to create an accurate and representative feature space. Following these preprocessing steps, three different pipelines are proposed to classify subsequences based on their nucleotide sequence and other relevant features corresponding to the restriction sites of over 200 endonucleases. The sensitivities achieved with SVMs, random forest, and CNNs are 94.9%, 92.7%, and 91.4%, respectively. Each method scores lower in specificity, with SVMs, random forest, and CNNs reaching 77.4%, 85.7%, and 82.4%, respectively. In addition to analyzing these results, the misclassifications made by the SVMs and CNNs are investigated. Across these two models, features with a derived nucleotide specificity visually contribute more to classification than other features. This observation is an important factor when considering new nucleotide sensitivity features for future studies.
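
For reference, the two metrics quoted above reduce to class-wise recall rates computed from a confusion matrix; the sketch below computes them for placeholder predictions, not the paper's data.

    # Sensitivity = recall on the applicable class, specificity = recall on
    # the inapplicable class, both read off a binary confusion matrix.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = applicable subsequence
    y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # e.g. SVM / random forest / CNN output

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")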
