
Software Challenges For HL-LHC Data Analysis

Published by Philippe Canal
Publication date: 2020
Research field: Physics
Paper language: English

The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, in order to make best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, which gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations.




Read also

The GERDA and Majorana experiments will search for neutrinoless double-beta decay of germanium-76 using isotopically enriched high-purity germanium detectors. Although the experiments differ in conceptual design, they share many aspects in common, and in particular will employ similar data analysis techniques. The collaborations are jointly developing a C++ software library, MGDO, which contains a set of data objects and interfaces to encapsulate, store and manage physical quantities of interest, such as waveforms and high-purity germanium detector geometries. These data objects define a common format for persistent data, whether it is generated by Monte Carlo simulations or an experimental apparatus, to reduce code duplication and to ease the exchange of information between detector systems. MGDO also includes general-purpose analysis tools that can be used for the processing of measured or simulated digital signals. The MGDO design is based on the object-oriented programming paradigm and is very flexible, allowing for easy extension and customization of the components. The tools provided by the MGDO libraries are used by both GERDA and Majorana.
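As an illustration of the kind of data object the abstract describes, the minimal C++ sketch below shows a waveform container with one simple signal-processing operation; the class and method names are invented for this example and are not the actual MGDO interface.

```cpp
// Minimal sketch of a waveform data object in the spirit of MGDO
// (hypothetical names; not the real MGDO classes).
#include <cstddef>
#include <vector>

class Waveform {
public:
  Waveform(double samplingFrequencyHz, std::size_t nSamples)
      : fSamplingFrequency(samplingFrequencyHz), fSamples(nSamples, 0.0) {}

  // Physical quantities of interest are exposed through explicit accessors,
  // so simulated and measured data can share one persistent format.
  double GetSamplingFrequency() const { return fSamplingFrequency; }
  std::size_t GetLength() const { return fSamples.size(); }
  double& At(std::size_t i) { return fSamples.at(i); }
  double At(std::size_t i) const { return fSamples.at(i); }

  // Example of a general-purpose transform a signal-processing module
  // could apply: remove a constant baseline from every sample.
  void SubtractBaseline(double baseline) {
    for (double& s : fSamples) s -= baseline;
  }

private:
  double fSamplingFrequency;    // sampling frequency in Hz
  std::vector<double> fSamples; // digitized or simulated amplitudes
};

int main() {
  Waveform wf(100.e6, 1024); // 100 MHz sampling, 1024 samples
  wf.At(0) = 12.5;
  wf.SubtractBaseline(2.5);
  return 0;
}
```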
R. A. Ewings, A. Buts, M. D. Le (2016)
The HORACE suite of programs has been developed to work with large multiple-measurement data sets collected from time-of-flight neutron spectrometers equipped with arrays of position-sensitive detectors. The software allows exploratory studies of the four dimensions of reciprocal space and excitation energy to be undertaken, enabling multi-dimensional subsets to be visualized, algebraically manipulated, and models for the scattering to be simulated or fitted to the data. The software is designed to be an extensible framework, thus allowing user-customized operations to be performed on the data. Examples of the use of its features are given for measurements exploring the spin waves of the simple antiferromagnet RbMnF$_{3}$ and ferromagnetic iron, and the phonons in URu$_{2}$Si$_{2}$.
In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. The multivariate classifiers themselves have also evolved significantly in recent years. Statisticians have found new ways to tune and to combine classifiers to further gain in performance. Integrated into the analysis framework ROOT, TMVA is a toolkit which hosts a large variety of multivariate classification algorithms. Training, testing, performance evaluation and application of all available classifiers is carried out simultaneously via user-friendly interfaces. With version 4, TMVA has been extended to multivariate regression of a real-valued target vector. Regression is invoked through the same user interfaces as classification. TMVA 4 also features more flexible data handling allowing one to arbitrarily form combined MVA methods. A generalised boosting method is the first realisation benefiting from the new framework.
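To make the workflow concrete, the sketch below shows a ROOT macro that trains one classifier through the uniform Factory interface. It follows the TMVA API of current ROOT releases (which adds a DataLoader alongside the Factory described for TMVA 4); the file, tree and variable names are placeholders.

```cpp
// Hedged sketch of a TMVA classification training macro (current ROOT API);
// input file, tree names and variables are placeholders.
#include "TCut.h"
#include "TFile.h"
#include "TTree.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

void train_bdt() {
  TFile* input = TFile::Open("input.root");              // placeholder file
  TTree* sig   = static_cast<TTree*>(input->Get("sig")); // placeholder trees
  TTree* bkg   = static_cast<TTree*>(input->Get("bkg"));

  TFile* output = TFile::Open("tmva_output.root", "RECREATE");
  TMVA::Factory factory("TMVAClassification", output,
                        "!V:AnalysisType=Classification");

  TMVA::DataLoader loader("dataset");
  loader.AddVariable("var1", 'F');                        // placeholder variables
  loader.AddVariable("var2", 'F');
  loader.AddSignalTree(sig, 1.0);
  loader.AddBackgroundTree(bkg, 1.0);
  loader.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

  // Book one of the available classifiers; any other method is booked
  // through the same call with a different TMVA::Types enum and options.
  factory.BookMethod(&loader, TMVA::Types::kBDT, "BDT", "NTrees=200:MaxDepth=3");

  // Training, testing and performance evaluation run via one interface.
  factory.TrainAllMethods();
  factory.TestAllMethods();
  factory.EvaluateAllMethods();
  output->Close();
}
```

Regression is booked in the same way, with AnalysisType=Regression and a target variable added on the DataLoader instead of signal and background trees.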
Dimitri Bourilkov (2004)
The Collaborative Analysis Versioning Environment System (CAVES) project concentrates on the interactions between users performing data and/or computing intensive analyses on large data sets, as encountered in many contemporary scientific disciplines. In modern science increasingly larger groups of researchers collaborate on a given topic over extended periods of time. The logging and sharing of knowledge about how analyses are performed or how results are obtained is important throughout the lifetime of a project. Here is where virtual data concepts play a major role. The ability to seamlessly log, exchange and reproduce results and the methods, algorithms and computer programs used in obtaining them enhances in a qualitative way the level of collaboration in a group or between groups in larger organizations. The CAVES project takes a pragmatic approach in assessing the needs of a community of scientists by building a series of prototypes with increasing sophistication. In extending the functionality of existing data analysis packages with virtual data capabilities, these prototypes provide an easy and habitual entry point for researchers to explore virtual data concepts in real life applications and to provide valuable feedback for refining the system design. The architecture is modular, based on Web, Grid and other services which can be plugged in as desired. As a proof of principle we build a first system by extending the very popular data analysis framework ROOT, widely used in high energy physics and other fields, making it virtual data enabled.
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from basic rules of probability, we present the Bayes theorem and its applications. In particular we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or not of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to uniquely determine the spectrum modeling. For these two studies, we implement the program Nested fit to calculate the different probability distributions and other related quantities. Nested fit is a Fortran90/Python code developed over recent years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
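For reference, the two central quantities the abstract relies on take their standard form: Bayes' theorem for the posterior of model parameters \theta given data D under a hypothesis (model) M, and the evidence Z obtained by marginalizing over \theta.

```latex
% Bayes' theorem and the Bayesian evidence (standard definitions).
P(\theta \mid D, M) = \frac{P(D \mid \theta, M)\, P(\theta \mid M)}{P(D \mid M)},
\qquad
Z \equiv P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, \mathrm{d}\theta .
```

The ratio of the evidences of two models, weighted by their prior probabilities, is what allows probabilities to be assigned to competing hypotheses for the same data set, as in the satellite-line and line-profile examples above.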