
Verification and Diagnostics Framework in ATLAS Trigger/DAQ

Published by Andrei Kazarov
Publication date: 2003
Research field
Language: English





Trigger and data acquisition (TDAQ) systems for modern HEP experiments are composed of thousands of hardware and software components that depend on each other in very complex ways. Typically, such systems are operated by non-expert shift operators who are not aware of the details of system functionality. It is therefore necessary to help the operator control the system and to minimize system down-time by providing knowledge-based facilities for automatic testing and verification of system components, as well as for error diagnostics and recovery. For this purpose, a verification and diagnostic framework was developed within the scope of ATLAS TDAQ. The verification functionality of the framework allows developers to configure simple low-level tests for any component in a TDAQ configuration. A test can be configured as one or more processes running on different hosts. The framework organizes tests into sequences, using knowledge about the component hierarchy and dependencies, and allows the operator to verify the functionality of any subset of the system. The diagnostics functionality includes the possibility to analyze test results and diagnose detected errors, e.g. by starting additional tests and understanding the reasons for failures. A conclusion about system functionality, an error diagnosis and recovery advice are presented to the operator in a GUI. The current implementation uses the CLIPS expert system shell for knowledge representation and reasoning.
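The abstract describes tests attached to components and organized into sequences that follow the component hierarchy and dependencies, so that any subset of the system can be verified. The actual framework uses the CLIPS expert system for its reasoning; the following is only a minimal Python sketch of the sequencing idea, with all class and function names (Component, Test, verify) invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Test:
    """A low-level test attached to a component (hypothetical structure)."""
    name: str
    run: Callable[[], bool]  # returns True on success

@dataclass
class Component:
    """A node in the TDAQ component hierarchy (hypothetical structure)."""
    name: str
    tests: List[Test] = field(default_factory=list)
    children: List["Component"] = field(default_factory=list)

def verify(component: Component) -> bool:
    """Verify a subtree: dependencies (children) first, then the component itself."""
    child_results = [verify(child) for child in component.children]
    ok = all(child_results)
    for test in component.tests:
        passed = test.run()
        print(f"{component.name}:{test.name} -> {'OK' if passed else 'FAILED'}")
        ok = ok and passed
    return ok

# Example: verify a small subset of the system
ros = Component("ReadOutSystem", tests=[Test("ping", lambda: True)])
hlt = Component("HLTFarm", tests=[Test("process_alive", lambda: False)])
partition = Component("Partition", children=[ros, hlt])
print("Subsystem functional:", verify(partition))
```

In the real framework a failed test would additionally trigger diagnostic rules that launch further tests and present a diagnosis and recovery advice to the operator; the sketch only shows the hierarchical sequencing of the verification step.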




Read also

Many theoretical models, like the Standard Model or SUSY at large tan(beta), predict Higgs bosons or new particles which decay more abundantly to final states including tau leptons than to other leptons. At the energy scale of the LHC, the identification of tau leptons, in particular in the hadronic decay mode, will be a challenging task due to an overwhelming QCD background which gives rise to jets of particles that can be hard to distinguish from hadronic tau decays. Equipped with excellent tracking and calorimetry, the ATLAS experiment has developed tau identification tools capable of working at the trigger level. This contribution presents tau trigger algorithms which exploit the main features of hadronic tau decays and describes the current tau trigger commissioning activities.
Given the extremely high output rate foreseen at the LHC and the general-purpose nature of the ATLAS experiment, an efficient and flexible way to select events in the High Level Trigger is needed. An extremely flexible solution is proposed that allows for early rejection of unwanted events and an easily configurable way to choose algorithms and to specify the criteria for trigger decisions. It is implemented in the standard ATLAS object-oriented software framework, Athena. The early rejection is achieved by breaking the decision process down into sequential steps. The configuration of each step defines sequences of algorithms which should be used to process the data, and trigger menus that define which physics signatures must be satisfied to continue on to the next step, and ultimately to accept the event. A navigation system has been built on top of the standard Athena transient store (StoreGate) to link the event data together in a tree-like structure. This is fundamental to the seeding mechanism, by which data from one step is presented to the next. The design makes it straightforward to utilize existing off-line reconstruction data classes and algorithms when they are suitable.
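The steering model described above processes an event through a chain of steps, each running its configured algorithms on the data seeded from the previous step and rejecting the event as soon as no trigger signature is satisfied. The following is only a schematic Python sketch of that early-rejection loop, not the Athena implementation; the step and signature structures and the function names are invented for illustration.

```python
from typing import Callable, Dict, List

# A step pairs reconstruction algorithms with the signatures they must satisfy.
# These structures are illustrative, not the actual Athena steering classes.
Step = Dict[str, List[Callable]]

def run_hlt_steering(event: dict, steps: List[Step]) -> bool:
    """Process an event step by step, rejecting it as early as possible."""
    seeds = event  # data passed ("seeded") from one step to the next
    for i, step in enumerate(steps):
        # Run only the algorithms configured for this step on the seeded data.
        for algorithm in step["algorithms"]:
            seeds = algorithm(seeds)
        # Reject the event as soon as no trigger signature is satisfied.
        if not any(signature(seeds) for signature in step["signatures"]):
            print(f"Event rejected at step {i}")
            return False
    return True  # all steps passed: accept the event

# Toy example: a calorimeter-based step followed by a tracking-based step
steps = [
    {"algorithms": [lambda ev: {**ev, "cluster_et": ev["raw_et"] * 0.9}],
     "signatures": [lambda ev: ev["cluster_et"] > 20.0]},
    {"algorithms": [lambda ev: {**ev, "track_pt": 25.0}],
     "signatures": [lambda ev: ev["track_pt"] > 20.0]},
]
print("Accepted:", run_hlt_steering({"raw_et": 30.0}, steps))
```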
Catrin Bernius (2017)
The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up during Run-2. At the Level-1 trigger, the improvements resulted in selection efficiencies and event rates that are more robust against pile-up, and in a reduction of fake candidate particles. A new hardware system, designed to analyze event topologies, supports a more refined event selection at Level-1. A hardware-based high-rate track reconstruction is currently being commissioned. Together with a re-design of the high-level trigger to deploy more offline-like reconstruction techniques, these changes improve the performance of the trigger selection turn-on and efficiency to nearly that of the offline reconstruction. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remains an ongoing endeavor. The large number of pile-up events is one of the most prominent challenges. This report gives a short review of the ATLAS trigger system and its performance in 2015/16 before describing the significant improvements in selection sensitivity and pile-up robustness implemented in preparation for the highest luminosities ever expected from the LHC in 2017/18.
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
T. Kono (2008)
The ATLAS trigger system is based on three levels of event selection that select the physics of interest from an initial bunch-crossing rate of 40 MHz. During nominal LHC operations at a luminosity of 10^34 cm^-2 s^-1, decisions must be taken every 25 ns, with each bunch crossing containing about 23 interactions. The selections in the three trigger levels must provide sufficient rejection to reduce the rate down to 200 Hz, compatible with the offline computing power and storage capacity. The LHC is expected to begin operations in summer 2008 with a peak luminosity of 10^31 cm^-2 s^-1 and far fewer bunches than in nominal running, but to ramp up quickly to higher luminosities. Hence, we need to deploy trigger selections that can adapt to the changing beam conditions while preserving the interesting physics and detector requirements that may vary with these conditions. We present the status of the preparation of the trigger menu for early data-taking, showing how we plan to deploy the trigger system from the first collisions to nominal luminosity. We also show expected rates and physics performance obtained from simulated data.
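The rates quoted above imply an overall rejection factor of roughly 2x10^5 across the three trigger levels. As a quick back-of-the-envelope check (a Python sketch using only the 40 MHz input rate, 200 Hz output rate and ~23 interactions per crossing stated in the abstract):

```python
# Rough check of the overall rejection implied by the rates quoted above.
bunch_crossing_rate_hz = 40e6   # 40 MHz bunch-crossing rate
output_rate_hz = 200.0          # target rate to offline storage

rejection_factor = bunch_crossing_rate_hz / output_rate_hz
print(f"Required overall rejection: {rejection_factor:.0e}")  # ~2e+05

# With ~23 interactions per crossing at nominal luminosity, the trigger must
# pick rare signatures out of roughly 23 * 40e6 interactions per second.
interactions_per_second = 23 * bunch_crossing_rate_hz
print(f"Interactions per second: {interactions_per_second:.1e}")  # ~9.2e+08
```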