
Making RooFit Ready for Run 3

Posted by: Stephan Hageboeck
Publication date: 2020
Research field: Informatics Engineering
Language: English





RooFit and RooStats, the toolkits for statistical modelling in ROOT, are used in most searches and measurements at the Large Hadron Collider. The data to be collected in Run 3 will enable measurements with higher precision and models of greater complexity, but will also require faster data processing. In this work, first results on modernising RooFit's collections, restructuring its data flow, and vectorising likelihood fits are discussed. These improvements will enable the LHC experiments to process larger datasets without compromising on model complexity, since fitting times would otherwise increase significantly with the large datasets expected in Run 3.
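As a rough illustration of the kind of fit this work accelerates, the sketch below sets up a minimal unbinned likelihood fit in RooFit. The BatchMode option requesting the vectorised likelihood evaluation is an assumption about the released interface (it appeared around ROOT 6.20 and its exact spelling has changed across releases); the Gaussian model and toy dataset are stand-ins for a real analysis.

#include "RooRealVar.h"
#include "RooGaussian.h"
#include "RooDataSet.h"
#include "RooGlobalFunc.h"
#include <memory>

void fitSketch() {
   // Observable and a simple Gaussian model with two free parameters.
   RooRealVar x("x", "observable", -10, 10);
   RooRealVar mean("mean", "mean", 0, -10, 10);
   RooRealVar sigma("sigma", "width", 2, 0.1, 10);
   RooGaussian model("model", "Gaussian PDF", x, mean, sigma);

   // Toy dataset; a real Run-3 analysis would load recorded events instead.
   std::unique_ptr<RooDataSet> data{model.generate(x, 1000000)};

   // BatchMode(true) requests the batched, SIMD-friendly likelihood evaluation.
   model.fitTo(*data, RooFit::BatchMode(true));
}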




Read also

Stephan Hageboeck (2020)
RooFit and RooStats, the toolkits for statistical modelling in ROOT, are used in most searches and measurements at the Large Hadron Collider as well as at $B$ factories. Larger datasets to be collected, e.g. at the High-Luminosity LHC, will enable measurements with higher precision, but will require faster data processing to keep fitting times stable. In this work, a simplification of RooFit's interfaces and a redesign of its internal dataflow are presented. Interfaces are being extended to look and feel more STL-like, making them more accessible from both C++ and Python and improving interoperability and ease of use, while maintaining compatibility with old code. The redesign of the dataflow improves cache locality and data loading, and can be used to process batches of data with vectorised SIMD computations. This reduces the time for computing unbinned likelihoods by a factor of four to 16, making it possible to fit the larger datasets of the future in the same time as, or faster than, today's fits.
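A small sketch of what the STL-like collection interfaces look like in practice: the begin()/end() iterators on RooFit collections (available from roughly ROOT 6.18) enable plain range-based for loops, where the old interface required manually managed TIterator objects. The variable names here are illustrative.

#include "RooRealVar.h"
#include "RooArgList.h"
#include "RooAbsArg.h"
#include <iostream>

void iterateSketch() {
   RooRealVar a("a", "a", 1.0);
   RooRealVar b("b", "b", 2.0);
   RooArgList params(a, b);

   // Modernised collections iterate like standard containers,
   // yielding RooAbsArg pointers.
   for (RooAbsArg *arg : params) {
      std::cout << arg->GetName() << "\n";
   }
}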
M. Krivda, D. Evans, K.L. Graham (2017)
The ALICE Central Trigger Processor (CTP) is going to be upgraded for LHC Run 3 with completely new hardware and a new Trigger and Timing Control (TTC-PON) system based on a Passive Optical Network (PON). The new trigger system has been designed to be dead-time free and able to transmit trigger data at 9.6 Gbps. A new universal trigger board has been designed which, by changing the FMC card, can function either as the CTP or as an LTU. It is based on the Xilinx Kintex UltraScale FPGA and the upgraded TTC-PON. The new trigger system and the prototype of the trigger board will be presented.
The LHCb (Large Hadron Collider beauty) experiment is designed to study differences between particles and anti-particles as well as very rare decays in the charm and beauty sector at the LHC. The detector will be upgraded in 2019, and a new trigger-less readout system will be implemented in order to significantly increase its efficiency and take advantage of the increased machine luminosity. In the upgraded system, both event building and event filtering will be performed in software for all the data produced in every bunch crossing of the LHC. In order to transport the full data rate of 32 Tb/s, we will use custom FPGA readout boards (PCIe40) and state-of-the-art off-the-shelf network technologies. The full event-building system will require around 500 nodes interconnected together. From a networking point of view, event-building traffic has an all-to-all pattern, so it tends to create high network congestion. In order to maximize link utilization, different techniques can be adopted in areas such as traffic shaping, network topology, and routing optimization. The size of the system makes it very difficult to test at production scale before the actual procurement, so we resort to network simulations as a powerful tool for finding the optimal configuration. We will present an accurate low-level description of an InfiniBand-based network with event-building-like traffic, show comparisons between simulated and real systems, and show how changes in the input parameters affect performance.
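One common traffic-shaping scheme for all-to-all event building, offered here purely as an illustration and not as LHCb's actual configuration, is a barrel-shift schedule: in time slot t, node i sends its fragment to node (i + t) mod N, so each destination receives from exactly one source per slot. A minimal sketch:

#include <cstdio>

int main() {
   const int N = 8; // small stand-in for the roughly 500 event-builder nodes
   for (int t = 0; t < N; ++t) {
      for (int i = 0; i < N; ++i) {
         // Every node sends in every slot, and no destination is
         // hit by two sources at once, avoiding congestion hotspots.
         std::printf("slot %d: node %d -> node %d\n", t, i, (i + t) % N);
      }
   }
   return 0;
}

For scale: at the quoted 32 Tb/s aggregate spread over roughly 500 nodes, each node must sustain on the order of 64 Gb/s of event-building traffic, which is why congestion and link utilization dominate the design.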
Hydra is a header-only, templated, C++11-compliant framework designed to perform the typical bottleneck calculations found in common HEP data analyses on massively parallel platforms. The framework is implemented on top of the C++11 Standard Library and a variadic version of the Thrust library, and is designed to run on Linux systems using OpenMP-, CUDA-, and TBB-enabled devices. This contribution summarizes the main features of Hydra. A basic description of the overall design, functionality, and user interface is provided, along with some code examples and measurements of performance.
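Hydra's own API is not reproduced here; instead, the sketch below uses plain Thrust, the library Hydra builds on, to show the kind of bottleneck calculation such frameworks parallelise: summing a negative log-likelihood over a large sample on whichever backend (OpenMP, CUDA, TBB) Thrust is compiled for. The Gaussian functor and toy data are assumptions for illustration.

#include <thrust/device_vector.h>
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <cmath>

// Per-event -log of a Gaussian density; runs on host or device.
struct NegLogGauss {
   double mean, sigma;
   __host__ __device__ double operator()(double x) const {
      const double z = (x - mean) / sigma;
      return 0.5 * z * z + log(sigma) + 0.5 * log(2.0 * M_PI);
   }
};

int main() {
   // Toy sample; a real analysis would copy measured events here.
   thrust::device_vector<double> data(1000000, 0.5);

   // Map each event to its -log density, then reduce by summation.
   const double nll = thrust::transform_reduce(
       data.begin(), data.end(), NegLogGauss{0.0, 1.0}, 0.0,
       thrust::plus<double>());
   (void)nll;
   return 0;
}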
Our predictions for particle-physics processes are realized in a chain of complex simulators. They allow us to generate high-fidelity simulated data, but they are not well suited for inference on the theory parameters with observed data. We explain why the likelihood function of high-dimensional LHC data cannot be explicitly evaluated, why this matters for data analysis, and reframe what the field has traditionally done to circumvent this problem. We then review new simulation-based inference methods that let us directly analyze high-dimensional data by combining machine-learning techniques with information from the simulator. Initial studies indicate that these techniques have the potential to substantially improve the precision of LHC measurements. Finally, we discuss probabilistic programming, an emerging paradigm that lets us extend inference to the latent process of the simulator.
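The intractability argument can be made concrete in one line: the likelihood of an observed event x given theory parameters \theta marginalises over all latent variables z of the simulation chain (parton-level momenta, shower histories, detector interactions), an integral far too high-dimensional to evaluate explicitly:

p(x \mid \theta) = \int p(x, z \mid \theta)\, \mathrm{d}z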