Since the start-up of the LHC at the end of 2009, trigger commissioning has been in full swing. The ATLAS trigger system is divided into three levels: the hardware-based first level trigger, and the software-based second level trigger and Event Filter, collectively referred to as the High Level Trigger (HLT). Initially, events were selected online based on the Level-1 decisions alone, with the HLT algorithms running but not rejecting any events. This was an important step in the commissioning of these triggers, ensuring their correct functioning before the HLT selections were enabled. Given the increasing LHC luminosity and the large QCD cross section, enabling these selections has been vital for selecting leptons from J/$\psi$, bottom, charm, W and Z decays. This presentation gives an overview of the trigger performance of the electron and photon selection. Comparisons of the online selection variables with the offline reconstruction are shown, as well as comparisons of data with the MC simulations on which the current selection tuning is performed.
The ATLAS trigger has been used very successfully to collect collision data during 2009 and 2010 LHC running at centre-of-mass energies of 900 GeV, 2.36 TeV, and 7 TeV. This paper presents the ongoing work to commission the ATLAS trigger with proton collisions, including an overview of the trigger performance based on extensive online running. We describe how the trigger has evolved with increasing LHC luminosity and give a brief overview of plans for forthcoming LHC running.
The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented in custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of the HLT operations with the first circulating LHC beams, before the incident that occurred on 19 September 2008.
The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system that will operate at full Level-1 output rates and provide high quality tracks reconstructed over the entire detector by the start of processing in Level-2. FTK solves the combinatorial challenge inherent to tracking by exploiting the massive parallelism of Associative Memories (AM) that can compare inner detector hits to millions of pre-calculated patterns simultaneously. The tracking problem within matched patterns is further simplified by using pre-computed linearized fitting constants and leveraging fast DSPs in modern commercial FPGAs. Overall, FTK is able to compute the helix parameters for all tracks in an event and apply quality cuts in approximately one millisecond. By employing a pipelined architecture, FTK is able to continuously operate at Level-1 rates without deadtime. The system design is defined and studied using ATLAS full simulation. Reconstruction quality is evaluated for single muon events with zero pileup, as well as WH events at the LHC design luminosity. FTK results are compared with the tracking capability of an offline algorithm.
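The linearized fit described above can be illustrated with a short sketch: within a matched pattern, each helix parameter is approximated as a linear function of the hit coordinates, so evaluating a track costs only one matrix-vector product, which maps naturally onto FPGA DSP blocks. The geometry, dimensions, and training procedure below are purely illustrative assumptions, not the real FTK constants.

```python
import numpy as np

# Illustrative sketch of an FTK-style linearized track fit: inside a
# matched road, helix parameters p are approximated as p = C @ x + q,
# where x are the hit coordinates and (C, q) are pre-computed constants.
# Here the constants are derived from synthetic "training" tracks by
# least squares; all numbers are made up for illustration.

rng = np.random.default_rng(0)

n_layers = 8   # hit coordinates per matched road (assumed)
n_params = 5   # helix parameters, e.g. d0, z0, phi, cot(theta), q/pT

# Synthetic truth: an assumed linear relation plus small residual noise.
true_C = rng.normal(size=(n_params, n_layers))
true_q = rng.normal(size=n_params)

hits = rng.normal(size=(1000, n_layers))
params = hits @ true_C.T + true_q + 0.01 * rng.normal(size=(1000, n_params))

# Fit the constants once, offline: augment hits with a constant column
# so the offset q is determined together with C.
A = np.hstack([hits, np.ones((1000, 1))])
consts, *_ = np.linalg.lstsq(A, params, rcond=None)
C_fit, q_fit = consts[:-1].T, consts[-1]

def linear_fit(hit_coords):
    """Online step: one matrix-vector product per track, no iteration."""
    return C_fit @ hit_coords + q_fit

# Evaluate a new track's helix parameters from its road's hits.
track_params = linear_fit(rng.normal(size=n_layers))
```

The point of the design choice is visible in `linear_fit`: all the expensive work (pattern training, matrix inversion) happens offline, so the online computation is a fixed, pipelineable sequence of multiply-accumulates.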
To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade. As part of this, the Level-1 trigger based on calorimeter data will be upgraded to exploit the fine-granularity readout using a new system of Feature EXtractors (FEX), each of which reconstructs different physics objects for the trigger selection. The jet FEX (jFEX) system is conceived to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of less than 400 ns. It consists of 6 modules. A single jFEX module is an ATCA board with 4 large FPGAs of the Xilinx UltraScale+ family that can digest a total input data rate of ~3.6 Tb/s using up to 120 Multi-Gigabit Transceivers (MGTs); the 24 electrical-optical devices, board control, and power sit on mezzanines to allow flexibility in upgrading control functions and components without affecting the main board. The 24-layer stack-up was carefully designed to preserve signal integrity on this very densely populated high-speed board, with MEGTRON6 selected as the most suitable PCB material. This contribution reports on the design challenges and the test results of the jFEX prototypes. In particular, the fully assembled final prototype has been tested up to 12.8 Gb/s in-house and in integrated tests at CERN. The full jFEX system will be produced by the end of 2018 to allow installation and commissioning to be completed before the LHC restarts in March 2021.
The ATLAS detector at CERN's Large Hadron Collider will be exposed to proton-proton collisions from beams crossing at 40 MHz, which must be reduced to the few hundred Hz allowed by the storage systems. A three-level trigger system has been designed to achieve this goal. We describe the configuration system under construction for the ATLAS trigger chain. It provides the trigger system with all the parameters required for decision making and for recording its history. The same system configures the event reconstruction, Monte Carlo simulation, and data analysis, and provides tools for accessing and manipulating the configuration data in all contexts.