
CutLang: a cut-based HEP analysis description language and runtime interpreter

Published by Sezen Sekmen
Publication date: 2019
Research field: Physics
Language: English

We present CutLang, an analysis description language and runtime interpreter for high energy collider physics data analyses. An analysis description language is a declarative domain specific language that can express all elements of a data analysis in an easy and unambiguous way. A full-fledged human readable analysis description language, incorporating logical and mathematical expressions, would eliminate many programming difficulties and errors, consequently allowing the scientist to focus on the goal rather than on the tool. In this paper, we discuss the guiding principles and scope of the CutLang language, the implementation of the CutLang runtime interpreter and the CutLang framework, and demonstrate an example of top pair reconstruction.
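To make the declarative idea concrete, here is a minimal Python sketch (not CutLang's actual grammar or implementation) in which a cut-based selection is written as plain data and applied by a small runtime "interpreter"; the event record, observable names, and thresholds are invented for illustration.

```python
# Minimal sketch, not CutLang's actual grammar or implementation: the analysis
# description (a list of cuts) is plain data, and a small runtime routine
# interprets it, separating "what is selected" from "how it is executed".

# Hypothetical reconstructed event with a few observables (names are invented).
event = {"nJets": 5, "nBJets": 2, "MET": 62.0, "leptonPt": 31.5}

# Declarative selection: (observable, comparison, threshold) triples.
selection = [
    ("nJets",    ">=", 4),
    ("nBJets",   ">=", 2),
    ("MET",      ">",  40.0),
    ("leptonPt", ">",  25.0),
]

COMPARE = {
    ">":  lambda x, y: x > y,
    ">=": lambda x, y: x >= y,
    "<":  lambda x, y: x < y,
    "<=": lambda x, y: x <= y,
}

def passes(event, selection):
    """Interpret the cut list at runtime: an event passes if every cut holds."""
    return all(COMPARE[op](event[obs], threshold) for obs, op, threshold in selection)

print(passes(event, selection))  # True for the example event above
```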


Read also

256 - Sezen Sekmen, Gokhan Unel 2018
This note introduces CutLang, a domain specific language that aims to provide a clear, human readable way to define analyses in high energy particle physics (HEP), along with an interpretation framework for that language. A proof of principle (PoP) implementation of the CutLang interpreter, achieved using C++ as a layer over the CERN data analysis framework ROOT, is presently available. This PoP implementation permits writing HEP analyses in an unobfuscated manner, as a set of commands in human readable text files, which are interpreted by the framework at runtime. We describe the main features of CutLang and illustrate its usage with two analysis examples. Initial experience with CutLang has shown that just-in-time interpretation of a human readable HEP specific language is a practical alternative to analysis writing using compiled languages such as C++.
213 - Aytul Adiguzel 2020
The fifth edition of the Computing Applications in Particle Physics school was held on 3-7 February 2020 at Istanbul University, Turkey. This particular edition focused on the processing of simulated data from Large Hadron Collider collisions using an Analysis Description Language and its runtime interpreter called CutLang. A total of 24 undergraduate and 6 graduate students were introduced to collider data analysis during the school. After 3 days of lectures and exercises, the students were grouped into teams of 3 or 4, and each team was assigned an analysis publication from the ATLAS or CMS experiments. After 1.5 days of independent study, each team was able to reproduce the assigned analysis using CutLang.
An analysis description language is a domain specific language capable of describing the contents of an LHC analysis in a standard and unambiguous way, independent of any computing framework. It is designed for use by anyone with an interest in, and knowledge of, LHC physics, i.e., experimentalists, phenomenologists and other enthusiasts. Adopting analysis description languages would bring numerous benefits to the LHC experimental and phenomenological communities, ranging from analysis preservation beyond the lifetimes of experiments or analysis software to facilitating the abstraction, design, visualization, validation, combination, reproduction, interpretation and overall communication of analysis contents. Here, we introduce the analysis description language concept and summarize the ongoing efforts to develop such languages and the tools to use them in LHC analyses.
Machine Learning (ML) will play a significant role in the success of the upcoming High-Luminosity LHC (HL-LHC) program at CERN. An unprecedented amount of data at the exascale will be collected by LHC experiments in the next decade, and this effort will require novel approaches to train and use ML models. In this paper, we discuss a Machine Learning as a Service pipeline for HEP (MLaaS4HEP) which provides three independent layers: a data streaming layer to read High-Energy Physics (HEP) data in their native ROOT data format; a data training layer to train ML models using distributed ROOT files; and a data inference layer to serve predictions using pre-trained ML models via the HTTP protocol. Such a modular design opens up the possibility to train on data at large scale by reading ROOT files from remote storage facilities, e.g. the World-Wide LHC Computing Grid (WLCG) infrastructure, and to feed the data to the user's favorite ML framework. The inference layer, implemented as TensorFlow as a Service (TFaaS), may provide easy access to pre-trained ML models in existing infrastructure and applications inside or outside of the HEP domain. In particular, we demonstrate the usage of the MLaaS4HEP architecture for a physics use case, namely the $t\bar{t}$ Higgs analysis in CMS originally performed using custom-made Ntuples. We provide details on the training of the ML model using distributed ROOT files, discuss the performance of the MLaaS and TFaaS approaches for the selected physics analysis, and compare the results with traditional methods.
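The layered flow described above can be pictured with a short Python sketch: stream events from a ROOT file with uproot (one common Python ROOT reader) and send each chunk to an HTTP inference service. The file name, tree name, branch names, and the /predict URL below are placeholders, not the real MLaaS4HEP or TFaaS interfaces.

```python
# Sketch under stated assumptions: a "streaming" step reads a ROOT tree in
# chunks, and an "inference" step POSTs each chunk to an HTTP model server.
import uproot      # pip install uproot
import requests    # pip install requests

BRANCHES = ["met", "ht", "njets"]              # hypothetical event-level features
ENDPOINT = "http://localhost:8083/predict"     # placeholder inference-service URL

with uproot.open("events.root") as rootfile:   # placeholder file name
    tree = rootfile["Events"]                  # assumed TTree name
    # Data streaming layer: iterate over the tree in chunks of 10k events.
    for chunk in tree.iterate(BRANCHES, step_size=10_000, library="np"):
        # Build one feature row per event; a real service defines its own schema.
        rows = [[float(chunk[b][i]) for b in BRANCHES]
                for i in range(len(chunk[BRANCHES[0]]))]
        # Inference layer: POST the chunk and collect the model's predictions.
        response = requests.post(ENDPOINT, json={"inputs": rows}, timeout=30)
        predictions = response.json()
        print(f"scored {len(rows)} events")
```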
Though statistical analyses are centered on research questions and hypotheses, current statistical analysis tools are not. Users must first translate their hypotheses into specific statistical tests and then perform API calls with functions and parameters. Doing so accurately requires statistical expertise. To lower this barrier to valid, replicable statistical analysis, we introduce Tea, a high-level declarative language and runtime system. In Tea, users express their study design, any parametric assumptions, and their hypotheses. Tea compiles these high-level specifications into a constraint satisfaction problem that determines the set of valid statistical tests, and then executes them to test the hypothesis. We evaluate Tea using a suite of statistical analyses drawn from popular tutorials. We show that Tea generally matches the choices of experts while automatically switching to non-parametric tests when parametric assumptions are not met. We simulate the effect of mistakes made by non-expert users and show that Tea automatically avoids both false negatives and false positives that could be produced by the application of incorrect statistical tests.
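As a rough illustration of the kind of decision Tea automates (and not Tea's actual API), the following Python sketch checks a normality assumption with SciPy and falls back from a parametric t-test to a non-parametric Mann-Whitney U test when that assumption fails; the simulated data and the alpha threshold are invented for the example.

```python
# Hand-rolled illustration, not Tea's API: pick a two-sample test based on a
# Shapiro-Wilk normality check, mirroring the switch Tea performs automatically.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.exponential(scale=1.0, size=80)   # deliberately non-normal samples
group_b = rng.exponential(scale=1.5, size=80)

def compare_groups(a, b, alpha=0.05):
    """Choose a test depending on whether both samples look normally distributed."""
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        name, result = "t-test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, result.pvalue

name, pvalue = compare_groups(group_a, group_b)
print(f"{name}: p = {pvalue:.4f}")
```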