The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a b-hadron. In the LHC Run 1, this trigger, which utilized a custom boosted decision tree algorithm, selected a nearly 100% pure sample of b-hadrons with a typical efficiency of 60-70%; its output was used in about 60% of LHCb papers. This talk presents studies carried out to optimize the topological trigger for LHC Run 2. In particular, we have carried out a detailed comparison of various machine learning classifier algorithms, e.g., AdaBoost, MatrixNet and neural networks. The topological trigger algorithm is designed to select all interesting decays of b-hadrons, but cannot be trained on every such decay. Studies have therefore been performed to determine how to optimize the performance of the classification algorithm on decays not used in the training. Methods studied include cascading, ensembling and blending techniques. Furthermore, novel boosting techniques have been implemented that will help reduce systematic uncertainties in Run 2 measurements. We demonstrate that the reoptimized topological trigger is expected to significantly improve on the Run 1 performance for a wide range of b-hadron decays.
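As a rough illustration of the classifier comparison described above, the following sketch uses generic scikit-learn analogues (MatrixNet is a proprietary Yandex algorithm and is not shown) on synthetic features that merely stand in for the trigger's vertex quantities; it is not the LHCb implementation.

```python
# Minimal sketch, not LHCb code: compare boosted decision trees and a neural
# network on a synthetic signal-vs-background sample, in the spirit of the
# classifier comparison described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for vertex features (e.g. pT sum, flight distance, mass).
X, y = make_classification(n_samples=20000, n_features=8, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)

classifiers = {
    "AdaBoost": AdaBoostClassifier(n_estimators=300, random_state=0),
    "GradBoost": GradientBoostingClassifier(n_estimators=300, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=3, scoring="roc_auc")
    print(f"{name:>10}: ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

In a real trigger study the figure of merit would be the signal efficiency at a fixed output rate rather than the ROC area; that, and the novel boosting techniques mentioned above, are omitted here for brevity.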
The LHCb experiment will operate at a luminosity of $2\times10^{33}$ cm$^{-2}$s$^{-1}$ during LHC Run 3. At this rate the present readout and hardware Level-0 trigger become a limitation, especially for fully hadronic final states. In order to maintain a high signal efficiency the upgraded LHCb detector will deploy two novel concepts: a triggerless readout and a full software trigger.
The LHCb experiment stores around $10^{11}$ collision events per year. A typical physics analysis deals with a final sample of up to $10^7$ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of user analysis jobs that read these data. The efficiency of this scheme depends heavily on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data to be recorded in 2017.
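As an illustration only (not the LHCb tool), the following sketch groups lines greedily into a fixed number of streams so that each analysis, which reads only the streams containing its lines, reads as little unrelated data as possible; the line names, sizes and analysis-to-line mapping are invented placeholders.

```python
# Greedy stream composition sketch (illustrative placeholders, not LHCb data).

def read_cost(streams, analyses, line_size):
    """Total data read: each analysis reads every stream containing at least
    one of the lines it needs, and pays for the full size of that stream."""
    cost = 0
    for needed in analyses.values():
        for stream in streams:
            if needed & stream:
                cost += sum(line_size[l] for l in stream)
    return cost

def greedy_streams(line_size, analyses, n_streams):
    """Assign the largest lines first, each to the stream where it increases
    the total read cost least."""
    streams = [set() for _ in range(n_streams)]
    for line in sorted(line_size, key=line_size.get, reverse=True):
        best = min(range(n_streams),
                   key=lambda i: read_cost(
                       [s | {line} if i == j else s for j, s in enumerate(streams)],
                       analyses, line_size))
        streams[best].add(line)
    return streams

# Toy example: line sizes (arbitrary units) and which lines each analysis reads.
line_size = {"B2JpsiK": 40, "B2DPi": 35, "D2KPi": 60, "D2KKPi": 25, "Bs2PhiPhi": 10}
analyses = {"beauty": {"B2JpsiK", "B2DPi", "Bs2PhiPhi"},
            "charm":  {"D2KPi", "D2KKPi"}}

streams = greedy_streams(line_size, analyses, n_streams=2)
print(streams, read_cost(streams, analyses, line_size))
```

With these placeholder numbers the greedy assignment puts the beauty and charm lines in different streams, so neither analysis reads the other's data; the production method additionally balances stream sizes and uses measured line overlaps rather than a hand-written mapping.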
A very compact architecture has been developed for the first-level Muon Trigger of the LHCb experiment, which processes 40 million proton-proton collisions per second. For each collision it receives 3.2 kB of data and finds straight tracks within a latency of 1.2 microseconds. The trigger implementation is massively parallel, pipelined and fully synchronous with the LHC clock. It relies on 248 high-density Field Programmable Gate Arrays (FPGAs) and on the massive use of multi-gigabit serial-link transceivers embedded in the FPGAs.
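The abstract above describes the firmware architecture rather than the algorithm itself; purely as an illustration of what finding straight tracks across the muon stations involves, the following software sketch matches hits along a line pointing back to the interaction region. The station positions, tolerance and hit coordinates are invented, and the real system is implemented in FPGA firmware, not software.

```python
# Illustrative straight-track matching sketch (invented geometry, not firmware).

STATION_Z = [15.2, 16.4, 17.6, 18.8]  # hypothetical station z positions (m)
WINDOW = 0.05                         # hypothetical matching window (m)

def find_straight_tracks(hits_per_station):
    """hits_per_station: one list of x positions per station.
    Seed on a hit in the first station, draw a straight line through the
    origin (interaction point), and require a matching hit in every other
    station within WINDOW of the extrapolated position."""
    tracks = []
    for x0 in hits_per_station[0]:
        slope = x0 / STATION_Z[0]
        candidate = [x0]
        for z, hits in zip(STATION_Z[1:], hits_per_station[1:]):
            expected = slope * z
            matches = [x for x in hits if abs(x - expected) < WINDOW]
            if not matches:
                break
            candidate.append(min(matches, key=lambda x: abs(x - expected)))
        else:  # only reached if every station had a matching hit
            tracks.append(candidate)
    return tracks

# Toy event: one genuine straight track plus scattered noise hits.
event = [[0.50, 0.10], [0.54, 0.90], [0.58, -0.30], [0.62]]
print(find_straight_tracks(event))
```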
The LHCb experiment is preparing a detector upgrade to fully exploit the flavour physics potential of the LHC. The whole detector will be read out at the full collision rate and the online event selection will be performed by a software trigger. This will increase the event yields by a factor of 10 for muonic final states and by a factor of 20 for hadronic final states. Research towards the upgrade has started, with the target of installing the detector in 2018.
The performance of the LHCb Muon system and its stability across the full 2010 data taking, with the LHC running at $\sqrt{s} = 7$ TeV, are studied. The optimization of the detector settings and the time calibration performed with the first collisions delivered by the LHC are described. Particle rates, measured for the wide range of luminosities and beam operation conditions experienced during the run, are compared with the values expected from simulation. The space and time alignment of the detectors, the chamber efficiency, the time resolution and the cluster size are evaluated. The detector performance is found to be as expected from specifications, or better; notably, the overall efficiency is well above the design requirements.