In order to achieve the data rates proposed for the future Run 3 upgrade of the LHCb detector, new processing models must be developed to deal with the increased throughput. For this reason, we aim to investigate the feasibility of purely data-driven holistic methods that introduce minimal computational overhead and hence use only raw detector information. These filters should be unbiased, having a neutral effect with respect to the studied physics channels. Machine-learning-based methods seem particularly suitable for this task, potentially providing a natural formulation for heuristic-free, unbiased filters whose objective is to optimize the trade-off between throughput and bandwidth.
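As a purely illustrative sketch (not the LHCb implementation), the following Python example trains a lightweight binary classifier directly on raw per-channel hit counts, with no physics-derived features; the channel layout, toy labels, and retention threshold are assumptions made for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assume each event is summarised by raw per-channel hit counts (toy data).
n_events, n_channels = 10_000, 64
X = rng.poisson(lam=5.0, size=(n_events, n_channels)).astype(float)
y = rng.integers(0, 2, size=n_events)   # 1 = keep event, 0 = reject (toy labels)

# A lightweight linear model keeps the per-event computational overhead minimal.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# The decision threshold trades retained bandwidth against rejection (throughput).
scores = clf.predict_proba(X)[:, 1]
threshold = np.quantile(scores, 0.9)    # keep roughly the top 10% of events
print(f"retained fraction: {(scores >= threshold).mean():.2f}")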
The high instantaneous luminosities expected following the upgrade of the Large Hadron Collider (LHC) to the High Luminosity LHC (HL-LHC) pose major experimental challenges for the CMS experiment. A central component for efficient operation under these conditions is the reconstruction of charged particle trajectories and their inclusion in the hardware-based trigger system. There are many challenges involved in achieving this: a large input data rate of about 20--40 Tb/s; processing a new batch of input data every 25 ns, each consisting of about 15,000 precise position measurements and rough transverse momentum measurements of particles (stubs); performing the pattern recognition on these stubs to find the trajectories; and producing the list of trajectory parameters within 4 $\mu$s. This paper describes a proposed solution to this problem; specifically, it presents a novel approach to pattern recognition and charged particle trajectory reconstruction using an all-FPGA solution. The results of an end-to-end demonstrator system, based on Xilinx Virtex-7 FPGAs, that meets the timing and performance requirements are presented, along with a further improved, optimized version of the algorithm and its corresponding expected performance.
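To make the stub-based pattern recognition concrete, here is a toy software sketch of a Hough-transform style road search in the r-phi plane. It is not the FPGA firmware described above; the field value, binning, stub content and thresholds are illustrative assumptions.

import numpy as np

# Each stub (r, phi) votes for track hypotheses (phi0, q/pT) consistent with the
# approximate helix relation phi ~= phi0 - 0.5 * 0.003 * B * r * (q/pT),
# with r in cm, B in tesla and pT in GeV.
B = 3.8
K = 0.5 * 0.003 * B

phi0_bins = np.linspace(-0.1, 0.1, 20)     # assumed phi0 window and binning
invpt_bins = np.linspace(-1/3, 1/3, 18)    # q/pT bins down to |pT| = 3 GeV
accumulator = np.zeros((len(phi0_bins), len(invpt_bins)), dtype=int)

# Toy stubs: six layers from a single pT = 10 GeV track plus random noise stubs.
rng = np.random.default_rng(0)
radii = np.array([25.0, 37.0, 52.0, 69.0, 89.0, 108.0])
true_invpt, true_phi0 = 0.1, 0.02
stubs = [(r, true_phi0 - K * r * true_invpt + rng.normal(0, 5e-4)) for r in radii]
stubs += [(rng.uniform(25, 110), rng.uniform(-0.1, 0.1)) for _ in range(10)]

for r, phi in stubs:
    for j, invpt in enumerate(invpt_bins):
        phi0 = phi + K * r * invpt          # invert the helix relation for phi0
        i = np.searchsorted(phi0_bins, phi0)
        if 0 < i < len(phi0_bins):
            accumulator[i, j] += 1

# A track candidate is the accumulator cell collecting the most stubs.
best = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print(f"best cell: phi0 ~ {phi0_bins[best[0]]:.3f}, q/pT ~ {invpt_bins[best[1]]:.3f}, "
      f"stubs = {accumulator[best]}")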
A study on the use of a machine learning algorithm for the level 1 trigger decision in the JUNO experiment is presented. JUNO is a medium-baseline neutrino experiment under construction in China, with the main goal of determining the neutrino mass hierarchy. A large liquid scintillator (LS) volume will detect the electron antineutrinos emitted by nuclear reactors. The LS detector is instrumented with around 20,000 large photomultiplier tubes (PMTs). The hit information from each PMT will be collected into a central trigger unit for the level 1 trigger decision. The current trigger algorithm used to select a neutrino signal event is based on a fast vertex reconstruction. We propose to study an alternative level 1 (L1) trigger that achieves a performance similar to the vertex fitting trigger but with fewer logic resources, by using a machine learning model implemented in firmware at the L1 trigger level. We treat the trigger decision as a classification problem and train a Multi-Layer Perceptron (MLP) model to distinguish signal events with an energy above a certain threshold from noise events. We use the JUNO software to generate datasets which include 100K physics events with noise and 100K pure noise events originating from PMT dark noise. For events with energy higher than 100 keV, the L1 trigger based on the converged MLP model can achieve an efficiency higher than 99%. After the training performed on simulations, we successfully implemented the trained model in a Kintex-7 FPGA. We present the technical details of the neural network development and training, as well as its implementation in the hardware through FPGA programming. Finally, the performance of the L1 trigger MLP implementation is discussed.
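A minimal Python sketch of the classification idea follows, assuming the trigger inputs can be coarsely summarised as hit counts per PMT group within the trigger window; the grouping, toy data and network size are assumptions for illustration, not the JUNO firmware model.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_groups = 48                         # assumed coarse grouping of the ~20000 PMTs

# Toy data: physics events add correlated scintillation hits on top of dark noise,
# while pure noise events contain only uncorrelated dark-noise hits.
noise = rng.poisson(2.0, size=(5000, n_groups))
signal = rng.poisson(2.0, size=(5000, n_groups)) + rng.poisson(3.0, size=(5000, n_groups))
X = np.vstack([signal, noise]).astype(float)
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# A small fixed-size MLP is used so the trained weights could plausibly fit in FPGA logic.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0).fit(X, y)
print(f"training accuracy: {mlp.score(X, y):.3f}")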
These proceedings describe the XFT stereo upgrade of the CDF Level 2 trigger system, from the stereo finder boards up to the implementation of the XFT stereo track algorithm in the Level 2 PC. The effectiveness of the Level 2 stereo track algorithm at achieving reduced trigger rates with high efficiencies during high-luminosity running is discussed.
The LHCb experiment will operate at a luminosity of $2\times10^{33}$ cm$^{-2}$s$^{-1}$ during LHC Run 3. At this rate the present readout and hardware Level-0 trigger become a limitation, especially for fully hadronic final states. In order to maintain a high signal efficiency the upgraded LHCb detector will deploy two novel concepts: a triggerless readout and a full software trigger.
The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a b-hadron. In LHC Run 1, this trigger, which utilized a custom boosted decision tree algorithm, selected a nearly 100% pure sample of b-hadrons with a typical efficiency of 60--70%; its output was used in about 60% of LHCb papers. This talk presents studies carried out to optimize the topological trigger for LHC Run 2. In particular, we have carried out a detailed comparison of various machine learning classifier algorithms, e.g., AdaBoost, MatrixNet and neural networks. The topological trigger algorithm is designed to select all interesting decays of b-hadrons, but cannot be trained on every such decay. Studies have therefore been performed to determine how to optimize the performance of the classification algorithm on decays not used in the training. Methods studied include cascading, ensembling and blending techniques. Furthermore, novel boosting techniques have been implemented that will help reduce systematic uncertainties in Run 2 measurements. We demonstrate that the reoptimized topological trigger is expected to significantly improve on the Run 1 performance for a wide range of b-hadron decays.
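The classifier comparison can be illustrated with the following Python sketch on toy detached-vertex features; the feature names and distributions are assumptions, not the LHCb training samples, and MatrixNet is replaced here by scikit-learn's gradient boosting purely for availability.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 5000
# Toy features per candidate: flight-distance chi2, sum pT of tracks,
# vertex-fit chi2, corrected mass (all invented distributions).
X_sig = np.column_stack([rng.gamma(9, 2, n), rng.gamma(6, 1.5, n),
                         rng.gamma(2, 1, n), rng.normal(5.0, 0.8, n)])
X_bkg = np.column_stack([rng.gamma(3, 2, n), rng.gamma(4, 1.5, n),
                         rng.gamma(3, 1, n), rng.normal(3.5, 1.5, n)])
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Compare candidate classifiers by cross-validated ROC AUC, as in the optimisation study.
for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=100)),
                  ("GradientBoosting", GradientBoostingClassifier(n_estimators=100))]:
    auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")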