
The Evolution of FTK, a Real-Time Tracker for Hadron Collider Experiments

Posted by Alberto Annovi
Publication date: 2009
Research field: Physics
Language: English
Author: A. Annovi





We describe the architecture evolution of the highly parallel dedicated processor FTK, which is driven by the simulation of LHC events at high luminosity ($10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$). FTK is able to provide precise on-line track reconstruction for future hadron collider experiments. The processor, organized in a two-tiered pipelined architecture, executes very fast algorithms based on the use of a large bank of pre-stored patterns of trajectory points (first tier) in combination with full-resolution track fitting (second tier) to refine the pattern recognition and to determine off-line-quality track parameters. We describe here how the high-luminosity simulation results have produced a new organization of the hardware inside the FTK processor core.
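As an illustration of the two-tiered scheme described above, the sketch below shows the idea in plain Python: a first stage matches coarse-resolution hits ("superstrips") against a pre-stored pattern bank to find candidate roads, and a second stage refines each road with a full-resolution fit, written here as a linearised fit with pre-computed constants in the spirit of FTK-style track fitters. All names, data layouts and constants are illustrative assumptions, not the actual FTK hardware or data formats.

```python
# Illustrative sketch of the two-tier idea; names, data layouts and the
# fit constants are assumptions, not the actual FTK firmware or formats.
import numpy as np

def match_patterns(superstrip_hits, pattern_bank, min_layers=7):
    """First tier: compare coarse-resolution hits with a pre-stored bank.

    superstrip_hits: one set of fired coarse "superstrip" ids per silicon layer.
    pattern_bank:    list of patterns, each holding one superstrip id per layer.
    Returns the indices of the matched patterns ("roads").
    """
    roads = []
    for i, pattern in enumerate(pattern_bank):
        fired = sum(ss in superstrip_hits[layer] for layer, ss in enumerate(pattern))
        if fired >= min_layers:            # e.g. tolerate one missing layer out of eight
            roads.append(i)
    return roads

def linear_track_fit(hit_coordinates, constants):
    """Second tier: full-resolution fit inside a matched road, written here as a
    linearised fit with pre-computed constants, params = C @ x + q."""
    C, q = constants                        # (n_params x n_coords) matrix, offset vector
    return C @ np.asarray(hit_coordinates, dtype=float) + q
```

The point of the split is that the pattern matching is massively parallel and cheap per pattern, while the expensive full-resolution fit runs only on the few hit combinations inside matched roads.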


Read also

Single muon triggers are crucial for the physics programmes at hadron collider experiments. To be sensitive to electroweak processes, single muon triggers with transverse momentum thresholds down to 20 GeV and dimuon triggers with even lower thresholds are required. In order to keep their rates at an acceptable level, these triggers have to be highly selective, i.e. they must have small accidental trigger rates and sharp trigger turn-on curves. The muon systems of the LHC experiments and of experiments at future colliders like the FCC-hh will use two muon chamber systems for the muon trigger: fast trigger chambers like RPCs with coarse spatial resolution, and much slower precision chambers like drift-tube chambers with high spatial resolution. The data of the trigger chambers are used to identify the bunch crossing in which the muon was created and for a rough momentum measurement, while the precise measurements of the muon trajectory by the precision chambers are ideal for an accurate muon momentum measurement. A compact muon track finding algorithm is presented, in which muon track candidates are reconstructed using a binning algorithm based on a 1D Hough transform. The algorithm has been designed and implemented on a System-on-Chip device. A hardware demonstration using Xilinx ZC706 evaluation boards has been set up to prove the concept. The system has demonstrated the feasibility of reconstructing muon tracks with good angular resolution whilst satisfying latency constraints. The demonstrated track-reconstruction system, the chosen architecture, the achievements to date and future options for such a system will be discussed.
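A minimal sketch of the 1D Hough-transform binning mentioned above: each precision-chamber hit votes for the inclination angle of a straight line through a seed point taken from the trigger chambers, and the most populated angle bin defines the track candidate. The seed convention, hit format and binning range below are assumptions for illustration, not the firmware implementation.

```python
# Illustrative 1D Hough-transform binning; the seed from the trigger chambers,
# the hit format and the binning range are assumptions, not the firmware.
import math

def hough_1d(hits, seed, n_bins=256, theta_min=-0.5, theta_max=0.5):
    """Each precision-chamber hit (z, y) votes for the angle of the straight line
    through the trigger-chamber seed; the most populated bin gives the candidate."""
    accumulator = [0] * n_bins
    bin_width = (theta_max - theta_min) / n_bins
    z0, y0 = seed
    for z, y in hits:
        theta = math.atan2(y - y0, z - z0)
        if theta_min <= theta < theta_max:
            accumulator[int((theta - theta_min) / bin_width)] += 1
    best = max(range(n_bins), key=accumulator.__getitem__)
    return theta_min + (best + 0.5) * bin_width, accumulator[best]
```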
In this paper, we show how to adapt and deploy anomaly detection algorithms based on deep autoencoders for the unsupervised detection of new physics signatures in the extremely challenging environment of a real-time event selection system at the Large Hadron Collider (LHC). We demonstrate that new physics signatures can be enhanced by three orders of magnitude, while staying within the strict latency and resource constraints of a typical LHC event filtering system. This would allow for collecting datasets potentially enriched with high-purity contributions from new physics processes. Through per-layer, highly parallel implementations of network layers, support for autoencoder-specific losses on FPGAs and latent-space-based inference, we demonstrate that anomaly detection can be performed in as little as $80\,$ns using less than 3% of the logic resources in the Xilinx Virtex VU9P FPGA. This opens the way to real-life applications of this idea during the next data-taking campaign of the LHC.
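A minimal sketch of the underlying idea, reconstruction-error anomaly scoring with a small dense autoencoder, is shown below in Keras. The layer sizes, features and training data are illustrative assumptions; the work summarised above additionally relies on quantised, per-layer parallel FPGA implementations and latent-space scoring to meet the trigger latency.

```python
# Illustrative reconstruction-error anomaly scoring with a tiny dense
# autoencoder; layer sizes, features and training data are assumptions.
import numpy as np
import tensorflow as tf

n_features = 57                                    # e.g. flattened object kinematics per event
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),   # latent space
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_features),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on background-like events only (random stand-in data here).
background = np.random.rand(10000, n_features).astype("float32")
autoencoder.fit(background, background, epochs=5, batch_size=256, verbose=0)

def anomaly_score(events):
    """Mean squared reconstruction error per event: events the background-trained
    network cannot reconstruct get large scores and are kept by the trigger."""
    reco = autoencoder.predict(events, verbose=0)
    return np.mean((events - reco) ** 2, axis=1)
```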
Zhiyang Yuan (2019)
The discovery of a SM Higgs boson at the LHC brought about a great opportunity to investigate the feasibility of a Circular Electron Positron Collider (CEPC) operating at a center-of-mass energy of $\sim 240$ GeV as a Higgs factory, with a design luminosity of about $2\times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The CEPC provides a much cleaner collision environment than the LHC and is ideally suited for studying the properties of the Higgs boson with greater precision. Another advantage of the CEPC over the LHC is that the Higgs boson can be detected through the recoil mass method by reconstructing only the Z boson decay, without examining the Higgs decays. In the Conceptual Design Report (CDR), the circumference of the CEPC is 100 km, with two interaction points available for exploring different detector design scenarios and technologies. The baseline design of the CEPC detector is an ILD-like concept, with a superconducting solenoid of 3.0 Tesla surrounding the inner silicon detector, a TPC tracker detector and the calorimetry system. Time Projection Chambers (TPCs) have been extensively studied and used in many fields, especially in particle physics experiments, including STAR and ALICE. The TPC detector will operate in continuous mode on the circular machine. To fulfill the physics goals of the future circular collider and meet the requirements of the Higgs/$Z$ runs, a TPC with excellent performance is required. We have proposed and investigated the ion-controlling performance of a detector module with a novel configuration. The aim of this study is to continuously suppress the ion backflow ($IBF$). In this paper, updated results on the feasibility and limitations of this TPC detector technology R&D are given using the hybrid gaseous detector module.
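The recoil mass method mentioned above follows from four-momentum conservation: with the collision energy $\sqrt{s}$ known, $m_{\mathrm{recoil}}^2 = s - 2\sqrt{s}\,E_Z + m_Z^2$, so the Higgs appears as a peak in the recoil mass without reconstructing its decay products. A small illustrative sketch (input conventions assumed):

```python
# Illustrative recoil-mass calculation; input conventions are assumptions.
import math

def recoil_mass(sqrt_s, e_z, m_z):
    """Recoil mass (GeV) against a reconstructed Z of energy e_z and mass m_z,
    e.g. from a mu+mu- pair, at center-of-mass energy sqrt_s."""
    m2 = sqrt_s**2 - 2.0 * sqrt_s * e_z + m_z**2
    return math.sqrt(max(m2, 0.0))

# At sqrt(s) = 240 GeV, a Z with E_Z ~ 104.8 GeV recoils against ~125 GeV:
print(recoil_mass(240.0, 104.8, 91.19))
```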
T. G. White (2011)
Using the simulation framework of the SiD detector to study the Higgs $\to \mu\mu$ decay channel showed that a considerable gain in signal significance could be achieved through an increase in charged-particle momentum resolution. However, more detailed simulations of the $Z \to \mu\mu$ decay channel demonstrated that a significant improvement in the resolution could not be achieved through an increase in tracker granularity. Conversely, detector stability studies of missing/dead vertex layers using longer-lived particles showed an increase in track resolution. The existing 9.15 cm $\times$ 25 $\mu$m silicon strip geometry was replaced with 100 $\times$ 100 $\mu$m silicon pixels, improving the secondary vertex resolution by a factor of 100. A study of highly collimated events through the use of dense jets showed that the momentum resolution can be increased by a factor of 2, greatly improving the signal significance but requiring a reduction in pixel size to 25 $\mu$m. An upgrade of the tracker granularity from the 9.15 cm strips to micrometer-sized pixels requires an increase in the number and complexity of sensor channels yet provides only a small improvement for the majority of linear collider physics.
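For context on the granularity numbers above, a standard rule of thumb for a binary-readout sensor is a single-point resolution of roughly pitch$/\sqrt{12}$ in each measured coordinate. The arithmetic sketch below is illustrative only and is not the vertex- or momentum-resolution result quoted in the abstract, which also depends on lever arm, material and the full fit.

```python
# Standard binary-readout point-resolution estimate, sigma ~ pitch / sqrt(12);
# illustrative arithmetic only, not the full vertex-fit result quoted above.
import math

def binary_resolution_um(pitch_um):
    """Approximate single-point resolution (micrometers) for a cell of the given
    pitch when only hit / no-hit information is read out."""
    return pitch_um / math.sqrt(12.0)

print(binary_resolution_um(25.0))      # 25 um strip pitch    -> ~7.2 um
print(binary_resolution_um(100.0))     # 100 um pixel pitch   -> ~28.9 um
print(binary_resolution_um(91500.0))   # 9.15 cm strip length -> ~2.6 cm along the strip
```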
The upgraded LHCb detector, due to start data-taking in 2022, will have to process an average data rate of 4 TB/s in real time. Because LHCb's physics objectives require that the full detector information for every LHC bunch crossing is read out and made available for real-time processing, this bandwidth challenge is equivalent to that of the ATLAS and CMS HL-LHC software read-out, but deliverable five years earlier. Over the past six years, the LHCb collaboration has undertaken a bottom-up rewrite of its software infrastructure, pattern recognition, and selection algorithms to make them better able to efficiently exploit modern highly parallel computing architectures. We review the impact of this reoptimization on the energy efficiency of the real-time processing software and hardware which will be used for the upgrade of the LHCb detector. We also review the impact of the decision to adopt a hybrid computing architecture consisting of GPUs and CPUs for the real-time part of LHCb's future data processing. We discuss the implications of these results for how LHCb's real-time power requirements may evolve in the future, particularly in the context of a planned second upgrade of the detector.