The LHCb experiment stores around $10^{11}$ collision events per year. A typical physics analysis deals with a final sample of up to $10^7$ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of the user analysis jobs that read these data. The efficiency of this scheme depends heavily on the stream composition: by grouping similar lines together and balancing the stream sizes, the read overhead can be reduced. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (the Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data to be recorded in 2017.
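The grouping idea in this abstract can be pictured as a greedy assignment: a line goes to the stream whose current event set it overlaps most (similar lines stored together are read together), with a size penalty so streams stay balanced. The following Python sketch is illustrative only; `line_events`, the scoring weights, and all names are assumptions, not the optimisation actually used for the Turbo stream.

```python
def compose_streams(line_events, n_streams):
    """Greedy stream composition sketch: line_events maps a line name to
    the set of event IDs it selects (an assumed, simplified representation).
    Lines sharing many events are drawn to the same stream; the size term
    keeps stream sizes balanced."""
    streams = [{"lines": [], "events": set()} for _ in range(n_streams)]
    # Place the largest lines first so the balance penalty has traction.
    for name, events in sorted(line_events.items(), key=lambda kv: -len(kv[1])):
        def score(stream):
            overlap = len(events & stream["events"])   # similarity: shared events
            return overlap - 0.5 * len(stream["events"])  # balance: prefer small streams
        target = max(streams, key=score)
        target["lines"].append(name)
        target["events"] |= events
    return streams

# Toy usage: two overlapping charm lines end up together, away from the rest.
lines = {"D0ToKPi": {1, 2, 3, 4}, "DpToKPiPi": {2, 3, 4, 5}, "Bs2MuMu": {7, 8}}
print(compose_streams(lines, n_streams=2))
```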
The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a beauty hadron.
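As a toy illustration of the two requirements above, a topological-trigger-style selection might be written as follows; the cut variables and all threshold values are placeholders, not the tuned LHCb quantities.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    flight_distance_chi2: float  # significance of displacement from the primary vertex
    corrected_mass: float        # GeV, vertex mass corrected for missing momentum
    pt_sum: float                # GeV, scalar sum of daughter transverse momenta

def topo_line(v, fd_chi2_min=100.0, mcorr_window=(2.0, 10.0), pt_min=2.0):
    """Keep vertices that are (a) significantly detached from the primary
    collision and (b) kinematically compatible with a b-hadron decay.
    All thresholds here are illustrative placeholders."""
    detached = v.flight_distance_chi2 > fd_chi2_min
    b_like = mcorr_window[0] < v.corrected_mass < mcorr_window[1]
    hard = v.pt_sum > pt_min
    return detached and b_like and hard

# Toy usage: a displaced, b-like candidate passes; a prompt one does not.
print(topo_line(Vertex(450.0, 5.3, 6.1)))  # True
print(topo_line(Vertex(3.0, 5.3, 6.1)))    # False
```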
The LHCb experiment will operate at a luminosity of $2\times10^{33}$ cm$^{-2}$s$^{-1}$ during LHC Run 3. At this rate the present readout and hardware Level-0 trigger become a limitation, especially for fully hadronic final states. In order to maintain high trigger efficiencies, the detector will be read out at the full LHC bunch-crossing rate of 40 MHz and the event selection will be performed entirely in software.
Hundreds of millions of network cameras have been installed throughout the world. Each is capable of providing a vast amount of real-time data. Analyzing the massive data generated by these cameras requires significant computational resources.
This paper presents the design of the LHCb trigger and its performance on data taken at the LHC in 2011. A principal goal of LHCb is to perform flavour physics measurements, and the trigger is designed to distinguish charm and beauty decays from the light quark background.
A very compact architecture has been developed for the first-level Muon Trigger of the LHCb experiment, which processes 40 million proton-proton collisions per second. For each collision, it receives 3.2 kBytes of data and finds straight tracks crossing the five muon stations.
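A toy one-dimensional version of such a straight-track search, seeding in the middle station and extrapolating a line pointing back to the interaction region, could look like this; the station geometry, units, and tolerance are invented for illustration and bear no relation to the real detector layout.

```python
def find_straight_tracks(hits_by_station, tolerance=1.0):
    """Toy straight-track search: hits_by_station is a list of five lists
    of 1D hit positions at z = 1..5 (assumed geometry). Seed on a hit in
    the middle station, predict positions in the other stations for a
    line through the origin, and keep candidates with a hit in every
    station within `tolerance`."""
    seeds = hits_by_station[2]  # middle of the five stations as seed plane
    tracks = []
    for seed in seeds:
        candidate = [seed]
        for station, hits in enumerate(hits_by_station):
            if station == 2:
                continue
            # A line from the origin through the seed (at z = 3) predicts
            # a position proportional to the station's z index.
            predicted = seed * (station + 1) / 3.0
            matches = [h for h in hits if abs(h - predicted) < tolerance]
            if not matches:
                break  # a missing station kills the candidate
            candidate.append(min(matches, key=lambda h: abs(h - predicted)))
        else:
            tracks.append(candidate)
    return tracks

# Toy usage: one muon-like track along x = z, plus a noise hit in station 2.
hits = [[1.0], [2.0, 9.0], [3.0], [4.1], [5.0]]
print(find_straight_tracks(hits))
```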