As the Tevatron luminosity increases, sophisticated selections are required to pick out rare events efficiently from a huge background. To cope with this, CDF has pushed calorimeter reconstruction of near-offline quality up to Level 2 and, where possible, even up to Level 1, increasing efficiency while keeping the rates under control. The CDF Run II Level 2 calorimeter trigger is implemented in hardware and is based on the simple algorithm used in Run I. This system worked well for Run II at low luminosity, but as the Tevatron instantaneous luminosity increases, the limitations of this simple algorithm become clear: some of the most important jet and MET (missing transverse energy) related triggers develop large growth terms in cross section at higher luminosity. In this paper we present an upgrade of the Level 2 calorimeter system that makes the calorimeter trigger tower information available directly to a CPU, allowing more sophisticated algorithms to be implemented in software. Both Level 2 jets and MET can be made nearly equivalent to offline quality, significantly improving the performance and flexibility of the jet and MET related triggers. To take full advantage of the new Level 2 triggering capabilities, however, the same MET resolution is also needed at Level 1; the new Level 1 MET is calculated by dedicated hardware. This paper describes the design, the hardware and software implementation, and the performance of the upgraded calorimeter trigger system at both Level 2 and Level 1.
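The core quantity behind these triggers is the vector sum of trigger-tower transverse energies. A minimal Python sketch of that calculation is given below; the tower list, the absence of thresholds, and any corrections are illustrative assumptions and do not reproduce the actual CDF Level-2 software or Level-1 hardware.

```python
import math

def missing_et(towers):
    """Compute missing transverse energy (MET) from a list of
    (et, phi) calorimeter trigger towers.  Illustrative only: tower
    thresholds, eta ranges and corrections used by the real trigger
    are not reproduced here."""
    mex = -sum(et * math.cos(phi) for et, phi in towers)
    mey = -sum(et * math.sin(phi) for et, phi in towers)
    return math.hypot(mex, mey)

# Example: three towers with ET in GeV and phi in radians.
print(missing_et([(25.0, 0.3), (18.5, 2.9), (7.2, 4.4)]))
```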
The CMS Level-1 calorimeter trigger is being upgraded in two stages to maintain performance as the LHC increases pile-up and instantaneous luminosity in its second run. In the first stage, improved algorithms including event-by-event pile-up corrections are used. New algorithms for heavy ion running have also been developed. In the second stage, higher granularity inputs and a time-multiplexed approach allow for improved position and energy resolution. Data processing in both stages of the upgrade is performed with new, Xilinx Virtex-7 based AMC cards.
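As a loose illustration of what an event-by-event pile-up correction can look like, the sketch below estimates a diffuse pile-up level from the median trigger-tower ET of the event and subtracts it from every tower. The actual CMS Level-1 algorithms are more elaborate (for example, position-dependent), so the function name and numbers here are assumptions.

```python
import statistics

def pileup_subtract(tower_et, floor=0.0):
    """Toy event-by-event pile-up correction: take the median tower ET
    of the event as the diffuse pile-up estimate and subtract it from
    every tower, clipping at zero.  Only illustrates the idea of a
    per-event correction; not the CMS Level-1 algorithm."""
    pu = statistics.median(tower_et)
    return [max(et - pu, floor) for et in tower_et]

# One hard tower (45 GeV) on top of a soft pile-up pedestal.
print(pileup_subtract([1.0, 0.5, 12.0, 0.8, 1.2, 45.0, 0.6, 0.9]))
```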
The ALICE experiment at the LHC is equipped with an electromagnetic calorimeter (EMCal) designed to enhance its capabilities for jet measurements. In addition, the EMCal enables triggering on high-energy jets. Based on the previous development made for the Photon Spectrometer (PHOS) level-0 trigger, a specific electronics upgrade was designed to allow fast triggering on high-energy jets (level-1). This development was made possible by using the latest generation of FPGAs, which can handle the instantaneous incoming data rate of 26 Gbit/s and process it in less than 4 μs.
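A jet trigger of this kind is typically a sliding-window sum over calorimeter trigger channels. The Python sketch below shows the idea with an assumed 4x4 patch size and threshold; the patch geometry, overlap and threshold of the actual EMCal Level-1 firmware are not reproduced here.

```python
def l1_jet_trigger(fastor, n=4, threshold=20.0):
    """Toy sliding-window jet trigger: sum every n x n patch of channel
    energies (2D list, GeV) and fire if any patch exceeds the threshold.
    Patch size, overlap and threshold are illustrative assumptions."""
    rows, cols = len(fastor), len(fastor[0])
    for r in range(rows - n + 1):
        for c in range(cols - n + 1):
            patch = sum(fastor[i][j]
                        for i in range(r, r + n)
                        for j in range(c, c + n))
            if patch > threshold:
                return True
    return False

# 12 x 16 grid of channel energies, all low except one jet-like cluster.
grid = [[0.1] * 16 for _ in range(12)]
for i in range(4, 8):
    for j in range(6, 10):
        grid[i][j] = 2.0
print(l1_jet_trigger(grid))   # True: the 4 x 4 cluster sums to 32 GeV
```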
The high instantaneous luminosities expected following the upgrade of the Large Hadron Collider (LHC) to the High Luminosity LHC (HL-LHC) pose major experimental challenges for the CMS experiment. A central component for efficient operation under these conditions is the reconstruction of charged particle trajectories and their inclusion in the hardware-based trigger system. Achieving this involves many challenges: a large input data rate of about 20-40 Tb/s; processing a new batch of input data every 25 ns, each consisting of about 15,000 precise position measurements and rough transverse momentum measurements of particles (stubs); performing the pattern recognition on these stubs to find the trajectories; and producing the list of trajectory parameters within 4 μs. This paper describes a proposed solution to this problem: a novel approach to pattern recognition and charged particle trajectory reconstruction using an all-FPGA solution. Results from an end-to-end demonstrator system based on Xilinx Virtex-7 FPGAs, which meets the timing and performance requirements, are presented, together with a further optimized version of the algorithm and its expected performance.
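To make the connection between stubs and trajectory parameters concrete, the sketch below fits stub (r, phi) positions to a straight line in r-phi and converts the bend into a transverse momentum estimate. The real Level-1 track finder performs tracklet/road-based pattern recognition and a full fit in firmware, so this Python snippet is only a simplified illustration under assumed units and field value.

```python
import numpy as np

def fit_track(stubs, b_field=3.8):
    """Toy r-phi track fit: approximate a high-pT helix as
    phi(r) = phi0 + k*r with k = -1/(2R), then pT ~ 0.3*B*R
    (GeV, T, m).  `stubs` is a list of (r [m], phi [rad]) points.
    Illustrative only; not the CMS L1 track-finding algorithm."""
    r = np.array([s[0] for s in stubs])
    phi = np.array([s[1] for s in stubs])
    k, phi0 = np.polyfit(r, phi, 1)      # straight-line fit in r-phi
    radius = abs(1.0 / (2.0 * k))        # bend radius in metres
    pt = 0.3 * b_field * radius          # transverse momentum in GeV
    return pt, phi0

# Four fake stubs along a gently bending track.
print(fit_track([(0.25, 0.101), (0.50, 0.092), (0.75, 0.083), (1.00, 0.074)]))
```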
The DZERO experiment, located at Fermilab, has recently started Run II with an upgraded detector. The Run II physics program requires the data acquisition system to read out the detector at a rate of 1 kHz. Event fragments, totaling 250 kB, are read out from approximately 60 front-end crates and sent to a particular farm node for Level 3 trigger processing. A scalable system capable of complex event routing has been designed and implemented based on commodity components: VMIC 7750 single-board computers for readout, a Cisco 6509 switch for data flow, and close to 100 Linux-based PCs for high-level event filtering.
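A toy picture of the event-routing task is sketched below: fragments keyed by event number are collected from a fixed set of crates, and once an event is complete it is dispatched to a farm node chosen round-robin. The class name, routing policy and data format are assumptions for illustration, not the DZERO implementation.

```python
from collections import defaultdict
from itertools import cycle

class EventBuilder:
    """Toy event builder: gather fragments per event number from a fixed
    set of readout crates and route each complete event to a farm node
    chosen round-robin.  Illustrative assumptions throughout."""

    def __init__(self, crate_ids, farm_nodes):
        self.crate_ids = set(crate_ids)
        self.pending = defaultdict(dict)   # event_no -> {crate_id: payload}
        self.nodes = cycle(farm_nodes)

    def add_fragment(self, event_no, crate_id, payload):
        self.pending[event_no][crate_id] = payload
        if set(self.pending[event_no]) == self.crate_ids:
            event = self.pending.pop(event_no)
            return next(self.nodes), event   # (destination node, full event)
        return None                          # event not yet complete

builder = EventBuilder(crate_ids=[0, 1, 2], farm_nodes=["node01", "node02"])
for crate in (0, 1, 2):
    routed = builder.add_fragment(event_no=42, crate_id=crate, payload=b"data")
print(routed[0])   # farm node that receives the complete event 42
```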
A study on the use of a machine learning algorithm for the level 1 trigger decision in the JUNO experiment is presented. JUNO is a medium-baseline neutrino experiment under construction in China, whose main goal is to determine the neutrino mass hierarchy. A large liquid scintillator (LS) volume will detect electron antineutrinos issued from nuclear reactors. The LS detector is instrumented with around 20000 large photomultiplier tubes (PMTs). The hit information from each PMT is collected in a central trigger unit for the level 1 trigger decision. The current trigger algorithm used to select a neutrino signal event is based on a fast vertex reconstruction. We propose to study an alternative level 1 (L1) trigger that achieves a performance similar to the vertex-fitting trigger but with fewer logic resources, by implementing a machine learning model in firmware at the L1 trigger level. We treat the trigger decision as a classification problem and train a Multi-Layer Perceptron (MLP) model to distinguish signal events with an energy above a certain threshold from noise events. We use the JUNO software to generate datasets that include 100K physics events with noise and 100K pure noise events coming from PMT dark noise. For events with energy above 100 keV, the L1 trigger based on the converged MLP model achieves an efficiency higher than 99%. After training on simulations, we successfully implemented the trained model in a Kintex-7 FPGA. We present the technical details of the neural network development and training, as well as its implementation in hardware with FPGA programming. Finally, the performance of the L1 trigger MLP implementation is discussed.
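The classification step can be illustrated with a small forward-pass sketch: a one-hidden-layer MLP mapping a vector of grouped PMT hit counts to a signal probability. The architecture, input grouping, random weights and decision threshold below are assumptions; the paper's trained network and its fixed-point FPGA implementation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: ReLU hidden layer, sigmoid output giving
    the probability that the event is signal.  Weights here are random
    placeholders, not a trained model."""
    h = np.maximum(w1 @ x + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))

n_inputs, n_hidden = 32, 16          # e.g. 32 groups of summed PMT hits (assumed)
w1 = rng.normal(size=(n_hidden, n_inputs))
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

hits = rng.poisson(2.0, size=n_inputs).astype(float)   # fake hit pattern
trigger = mlp_forward(hits, w1, b1, w2, b2) > 0.5      # L1 accept decision
print(trigger)
```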