
Tau trigger at the ATLAS experiment

Published by: Artur Kalinowski
Publication date: 2008
Research language: English





Many theoretical models, like the Standard Model or SUSY at large tan(beta), predict Higgs bosons or new particles which decay more abundantly to final states including tau leptons than to other leptons. At the energy scale of the LHC, the identification of tau leptons, in particular in the hadronic decay mode, will be a challenging task due to an overwhelming QCD background which gives rise to jets of particles that can be hard to distinguish from hadronic tau decays. Equipped with excellent tracking and calorimetry, the ATLAS experiment has developed tau identification tools capable of working at the trigger level. This contribution presents tau trigger algorithms which exploit the main features of hadronic tau decays and describes the current tau trigger commissioning activities.
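To make the selection concrete, here is a minimal, hypothetical sketch of the kind of cut-based hadronic tau identification the abstract alludes to. The TauCandidate fields and every threshold are illustrative assumptions, not the actual ATLAS trigger cuts; they only encode the characteristic features of hadronic tau decays the abstract mentions (one or three charged tracks, and a narrow, isolated calorimeter deposit that distinguishes taus from broader QCD jets).

```python
# Illustrative sketch only: a simplified cut-based hadronic tau selection.
# All field names and thresholds are hypothetical, chosen to mirror the
# discriminating features named in the abstract, not ATLAS's actual cuts.
from dataclasses import dataclass

@dataclass
class TauCandidate:
    n_tracks: int          # charged tracks in the core cone
    em_radius: float       # energy-weighted shower width in the EM calorimeter
    isolation_frac: float  # E_T in the isolation annulus / E_T in the core
    et_gev: float          # transverse energy of the candidate (GeV)

def passes_tau_trigger(c: TauCandidate, et_threshold: float = 25.0) -> bool:
    """Hadronic taus decay to 1 or 3 charged hadrons, leaving a narrow,
    isolated calorimeter cluster; QCD jets are typically broader and busier."""
    if c.et_gev < et_threshold:
        return False
    if c.n_tracks not in (1, 3):        # 1-prong or 3-prong decay
        return False
    if c.em_radius > 0.1:               # narrow shower
        return False
    if c.isolation_frac > 0.15:         # little surrounding activity
        return False
    return True

# Example: a narrow, isolated 1-prong candidate passes.
print(passes_tau_trigger(TauCandidate(1, 0.05, 0.08, 40.0)))  # True
```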




Read also

A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
203 - Romain Madar 2010
The article describes the identification of hadronically decaying tau leptons in ppbar collisions at 1.96 TeV collected by the DZero detector at the Fermilab Tevatron. After a brief description of the motivations and challenges of considering tau leptons in high-energy hadronic collisions, details of the tau reconstruction and identification will be discussed. The challenges associated with tau energy measurements in a hadronic environment will be presented, including approaches to deal with such measurements.
Given the extremely high output rate foreseen at the LHC and the general-purpose nature of the ATLAS experiment, an efficient and flexible way to select events in the High Level Trigger is needed. An extremely flexible solution is proposed that allows for early rejection of unwanted events and an easily configurable way to choose algorithms and to specify the criteria for trigger decisions. It is implemented in the standard ATLAS object-oriented software framework, Athena. The early rejection is achieved by breaking the decision process down into sequential steps. The configuration of each step defines sequences of algorithms which should be used to process the data, and trigger menus that define which physics signatures must be satisfied to continue on to the next step, and ultimately to accept the event. A navigation system has been built on top of the standard Athena transient store (StoreGate) to link the event data together in a tree-like structure. This is fundamental to the seeding mechanism, by which data from one step is presented to the next. The design makes it straightforward to utilize existing offline reconstruction data classes and algorithms when they are suitable.
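The step-wise early-rejection scheme described above can be illustrated with a short sketch. This is not the Athena steering code: the step tuples, the lambda-based algorithms and the two-step tau chain are hypothetical stand-ins for the real algorithm sequences, trigger menus and StoreGate-based seeding mechanism.

```python
# Minimal sketch of step-wise early rejection, with hypothetical step and
# algorithm definitions; the real ATLAS HLT steering is far richer, with
# configurable menus and a navigation tree seeding each step from the last.
def run_chain(event, steps):
    """Each step is (algorithms, signature). Run the algorithms, then test
    the signature; reject the event at the first step whose signature fails."""
    for algorithms, signature in steps:
        results = [alg(event) for alg in algorithms]
        if not signature(results):
            return False        # early rejection: later steps never run
    return True                 # all signatures satisfied -> accept event

# Hypothetical two-step tau chain: a fast calorimeter step seeds a slower
# tracking step, which only runs if the first signature passed.
steps = [
    ([lambda e: e["calo_et"]],        lambda r: r[0] > 20.0),
    ([lambda e: e["n_core_tracks"]],  lambda r: r[0] in (1, 3)),
]
print(run_chain({"calo_et": 35.0, "n_core_tracks": 3}, steps))  # True
print(run_chain({"calo_et": 10.0, "n_core_tracks": 3}, steps))  # False (rejected at step 1)
```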
165 - T. Kono 2008
The ATLAS trigger system is based on three levels of event selection that select the physics of interest from an initial bunch-crossing rate of 40 MHz. During nominal LHC operation at a luminosity of 10^34 cm^-2 s^-1, decisions must be taken every 25 ns, with each bunch crossing containing about 23 interactions. The selections in the three trigger levels must provide sufficient rejection to reduce the rate down to 200 Hz, compatible with the offline computing power and storage capacity. The LHC is expected to begin operation in summer 2008 with a peak luminosity of 10^31 cm^-2 s^-1 and far fewer bunches than in nominal running, but to ramp up quickly to higher luminosities. Hence, we need to deploy trigger selections that can adapt to the changing beam conditions, preserving the interesting physics and detector requirements that may vary with these conditions. We present the status of the preparation of the trigger menu for early data-taking, showing how we plan to deploy the trigger system from the first collisions to nominal luminosity. We also show expected rates and physics performance obtained from simulated data.
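The rate figures in this abstract imply a simple piece of arithmetic worth making explicit: reducing 40 MHz to 200 Hz requires an overall rejection of about 2x10^5. The sketch below works that out and adds a toy prescale calculation; the linear rate-vs-luminosity model, the reference rate and the 10 Hz budget are assumptions for illustration, not actual ATLAS menu parameters.

```python
# Back-of-envelope sketch of the rejection and prescale arithmetic implied by
# the abstract; the input/output rates come from the text, while the prescale
# model (rate scaling linearly with luminosity) is a simplifying assumption.
input_rate_hz = 40e6      # LHC bunch-crossing rate
output_rate_hz = 200.0    # rate the offline system can absorb
print(f"overall rejection needed: {input_rate_hz / output_rate_hz:.0e}")  # 2e+05

def prescale(rate_at_ref_hz, lumi, ref_lumi=1e31, budget_hz=10.0):
    """Prescale factor keeping one trigger within its rate budget, assuming
    its rate grows linearly with instantaneous luminosity."""
    expected_rate = rate_at_ref_hz * lumi / ref_lumi
    return max(1.0, expected_rate / budget_hz)

# A hypothetical tau trigger taking 5 Hz at 10^31 would need a prescale of
# about 500 at the nominal 10^34 to stay within a 10 Hz budget.
print(prescale(5.0, 1e34))  # 500.0
```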
Trigger and data acquisition (TDAQ) systems for modern HEP experiments are composed of thousands of hardware and software components that depend on each other in very complex ways. Typically, such systems are operated by non-expert shift operators who are not aware of the details of system functionality. It is therefore necessary to help the operator control the system and to minimize system downtime by providing knowledge-based facilities for automatic testing and verification of system components, as well as for error diagnosis and recovery. For this purpose, a verification and diagnostic framework was developed within the scope of ATLAS TDAQ. The verification functionality of the framework allows developers to configure simple low-level tests for any component in a TDAQ configuration. A test can be configured as one or more processes running on different hosts. The framework organizes tests into sequences, using knowledge about the component hierarchy and dependencies, and allows the operator to verify the functionality of any subset of the system. The diagnostic functionality includes the possibility to analyze the test results and diagnose detected errors, e.g. by starting additional tests and understanding the reasons for failures. A conclusion about system functionality, an error diagnosis and recovery advice are presented to the operator in a GUI. The current implementation uses the CLIPS expert system shell for knowledge representation and reasoning.
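As a rough illustration of the dependency-ordered verification described above, the sketch below tests a component only after everything it depends on has passed, so a failure is attributed to the lowest level that actually broke. The Component class, the component names and the pass/fail tests are invented for this example; the real framework expresses such knowledge as rules in the CLIPS expert system shell rather than procedural code.

```python
# Sketch of dependency-ordered component testing in the spirit of the TDAQ
# verification framework; all names and the procedural logic are illustrative.
class Component:
    def __init__(self, name, test, depends_on=()):
        self.name, self.test, self.depends_on = name, test, depends_on

def verify(components):
    """Test each component only after its dependencies have passed, so a
    failure is reported at the lowest level that actually broke."""
    status = {}
    def check(c):
        if c.name in status:
            return status[c.name]
        for dep in c.depends_on:
            if not check(dep):
                status[c.name] = False
                return False    # skip the test: a dependency already failed
        status[c.name] = c.test()
        return status[c.name]
    for c in components:
        check(c)
    return status

daq = Component("daq_network", lambda: True)
ros = Component("readout_system", lambda: False, depends_on=(daq,))
hlt = Component("hlt_farm", lambda: True, depends_on=(ros,))
print(verify([hlt]))
# {'daq_network': True, 'readout_system': False, 'hlt_farm': False}
```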