
Evolution of the energy efficiency of LHCb's real-time processing

Posted by: Rainer Schwemmer
Publication date: 2021
Research field: Physics
Paper language: English





The upgraded LHCb detector, due to start data-taking in 2022, will have to process an average data rate of 4 TB/s in real time. Because LHCb's physics objectives require that the full detector information for every LHC bunch crossing is read out and made available for real-time processing, this bandwidth challenge is equivalent to that of the ATLAS and CMS HL-LHC software read-out, but deliverable five years earlier. Over the past six years, the LHCb collaboration has undertaken a bottom-up rewrite of its software infrastructure, pattern recognition, and selection algorithms to make them better able to efficiently exploit modern highly parallel computing architectures. We review the impact of this reoptimization on the energy efficiency of the real-time processing software and hardware which will be used for the upgrade of the LHCb detector. We also review the impact of the decision to adopt a hybrid computing architecture consisting of GPUs and CPUs for the real-time part of LHCb's future data processing. We discuss the implications of these results on how LHCb's real-time power requirements may evolve in the future, particularly in the context of a planned second upgrade of the detector.
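
As a rough cross-check of the scale quoted above (illustrative arithmetic, not a calculation from the paper), the 4 TB/s aggregate bandwidth, together with the 30 MHz LHC collision rate quoted in one of the related abstracts below, implies an average raw event size of order 130 kB:

# Back-of-envelope check only; both input figures are quoted in the surrounding text.
data_rate_bytes_per_s = 4e12   # 4 TB/s aggregate real-time read-out bandwidth
crossing_rate_hz = 30e6        # full LHC collision rate (30 MHz)

implied_event_size_kb = data_rate_bytes_per_s / crossing_rate_hz / 1e3
print(f"implied average event size ~ {implied_event_size_kb:.0f} kB")  # ~133 kB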




Read also

R. Aaij, S. Amato, L. Anderlini (2016)
Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed high-quality decay information to be calculated by the software trigger, making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than that of the entire raw event that would otherwise need to be persisted. Tesla, named following LHCb's convention of naming software after renowned physicists, is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.

Scientists are drawn to synchrotrons and accelerator-based light sources because of their brightness, coherence and flux. The rate of improvement in brightness and detector technology has outpaced the Moore's-law growth seen for computers, networks, and storage, and is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments, and describe the streamlined processing pipeline of ptychography data analysis. The pipeline provides throughput, compression, and resolution, as well as rapid feedback to the microscope operators.

A method to estimate the efficiency of event start time determination at BESIII is developed. This method estimates the efficiency at the event level by combining the efficiencies of various tracks ($e$, $\mu$, $\pi$, $K$, $p$, $\gamma$) in a Bayesian way. Efficiency results and the differences between data and MC at the track level are presented in this paper. For a given physics channel, the event start time efficiency and its systematic error can be estimated following this method.
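
A minimal sketch of what such an event-level combination could look like, assuming a flat Beta(1, 1) prior for each track-type efficiency, independent tracks, and that the event start time is found whenever at least one track provides it; the counts below are hypothetical and this is not the BESIII procedure itself:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-track-type counts: {type: (n_pass, n_total)}
track_counts = {"e": (980, 1000), "mu": (990, 1000), "pi": (930, 1000),
                "K": (900, 1000), "p": (940, 1000), "gamma": (850, 1000)}

def track_efficiency_samples(n_pass, n_total, n_samples=10000):
    # Posterior samples of one track-type efficiency with a flat Beta(1, 1) prior.
    return rng.beta(1 + n_pass, 1 + (n_total - n_pass), size=n_samples)

def event_efficiency_samples(track_types):
    # Event-level efficiency for an event with the given tracks, assuming
    # independence and that one successful track is enough to fix the start time.
    fail = np.ones(10000)
    for t in track_types:
        fail *= 1.0 - track_efficiency_samples(*track_counts[t])
    return 1.0 - fail

samples = event_efficiency_samples(["pi", "pi", "K"])
print(f"event start time efficiency = {samples.mean():.4f} +/- {samples.std():.4f}")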

Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems in the high-level triggers of different experiments. On the other hand, the use of GPUs in low-level trigger systems, characterized by stringent real-time constraints such as a tight time budget and high throughput, poses several challenges. In this paper we focus on the low-level trigger of the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aims at harvesting the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as the knowledge of the Cherenkov ring parameters allows stringent conditions to be built for data selection at trigger level. The latencies of all components of the trigger chain have been analyzed, pointing out that networking is the most critical one. To keep the latency of the data transfer task under control, we devised NaNet, an FPGA-based PCIe Network Interface Card (NIC) with GPUDirect capabilities. For the processing task, we developed specific multiple-ring trigger algorithms to leverage the parallel architecture of GPUs and increase the processing throughput to keep up with the high event rate. Results obtained during the first months of the 2016 NA62 run are presented and discussed.
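
As an illustration of the kind of primitive such a trigger needs (a minimal sketch, not the NA62 GPU code, which handles multiple rings and is optimized very differently), a single Cherenkov ring can be extracted from hit positions with an algebraic least-squares (Kasa) circle fit:

import numpy as np

def fit_ring(x, y):
    # Fit x^2 + y^2 + D*x + E*y + F = 0 by linear least squares,
    # then convert to (centre_x, centre_y, radius).
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)

# Toy hits on a ring of radius 120 mm centred at (10, -5) mm, with smearing.
rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, 20)
x = 10 + 120 * np.cos(phi) + rng.normal(0, 2, 20)
y = -5 + 120 * np.sin(phi) + rng.normal(0, 2, 20)
print(fit_ring(x, y))  # approximately (10, -5, 120)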

Finding tracks downstream of the magnet at the earliest LHCb trigger level is not part of the baseline plan of the upgrade trigger, on account of the significant CPU time required to execute the search. Many long-lived particles, such as $K^0_S$ and strange baryons, decay after the vertex track detector, so that their reconstruction efficiency is limited. We present a study of the performance of a future innovative real-time tracking system based on FPGAs, developed within an R&D effort in the context of the LHCb Upgrade Ib (LHC Run 4), dedicated to the reconstruction of particles downstream of the magnet in the forward tracking detector (Scintillating Fibre Tracker), which is capable of processing events at the full LHC collision rate of 30 MHz.
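
To make the 30 MHz figure concrete (illustrative arithmetic only; the latency value is a placeholder, not a number from the study): a new event arrives on average every ~33 ns, so any system whose per-event latency exceeds that budget must keep many events in flight concurrently:

event_rate_hz = 30e6                 # full LHC collision rate quoted above
budget_ns = 1e9 / event_rate_hz      # ~33 ns average time budget per event
assumed_latency_us = 5.0             # hypothetical per-event processing latency
events_in_flight = assumed_latency_us * 1e3 / budget_ns
print(f"time budget per event: {budget_ns:.1f} ns")              # 33.3 ns
print(f"events concurrently in flight: {events_in_flight:.0f}")  # ~150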