
A parallel-computing algorithm for high-energy physics particle tracking and decoding using GPU architectures

Published by Dorothea vom Bruch
Publication date: 2020
Research field: Physics
Paper language: English





Real-time data processing is one of the central tasks of particle physics experiments and requires large computing resources. The LHCb (Large Hadron Collider beauty) experiment will be upgraded to cope with a particle bunch collision rate of 30 million times per second, producing $10^9$ particles/s. 40 Tbit/s must be processed in real time to decide which data to store. This poses a computing challenge that requires exploration of modern hardware and software solutions. We present Compass, a particle tracking algorithm and parallel raw-input decoding optimised for GPUs. It is data-oriented, designed for highly parallel architectures, and optimised for fast, localised data access. Our algorithm is configurable, and we explore the trade-off in computing and physics performance of various configurations. A CPU implementation that delivers the same physics performance as our GPU implementation is presented. We discuss the achieved physics performance and validate it with Monte Carlo simulated data. We show a computing performance analysis comparing consumer- and server-grade GPUs and a CPU. We show the feasibility of using a full GPU decoding and particle tracking algorithm for high-throughput particle trajectory reconstruction, where our algorithm improves the throughput by up to 7.4x compared to the LHCb baseline.
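The data-oriented, localised-access design the abstract describes can be illustrated with a small structure-of-arrays sketch. This is a minimal illustration in NumPy rather than CUDA, with invented hit arrays and an invented `extend_tracks` helper; it is not the Compass implementation, only the general idea that each track maps to one parallel lane reading contiguous hit data.

```python
import numpy as np

# Hypothetical structure-of-arrays hit store: one flat array per field,
# so neighbouring lanes/threads read contiguous memory (coalesced access).
rng = np.random.default_rng(0)
n_hits = 1000
hit_x = rng.uniform(-50, 50, n_hits)   # mm, illustrative units
hit_y = rng.uniform(-50, 50, n_hits)
hit_layer = rng.integers(0, 4, n_hits)

def extend_tracks(track_x, track_y, layer):
    """Vectorised search: for every track seed, pick the closest hit on
    a given detector layer. On a GPU each track would map to one thread."""
    mask = hit_layer == layer
    lx, ly = hit_x[mask], hit_y[mask]
    # (n_tracks, n_layer_hits) distance matrix, evaluated in one pass
    d2 = (track_x[:, None] - lx[None, :])**2 + (track_y[:, None] - ly[None, :])**2
    best = d2.argmin(axis=1)
    return lx[best], ly[best]

seeds_x = np.array([0.0, 10.0, -20.0])
seeds_y = np.array([0.0, -5.0, 15.0])
nx, ny = extend_tracks(seeds_x, seeds_y, layer=1)
print(nx.shape)  # one extended coordinate per seed: (3,)
```

The point of the layout is that the distance computation touches each field array once, sequentially, which is what "fast and localised data access" buys on a wide parallel architecture.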




See also

Steven Lantz (2020)
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that are now prevalent in high-performance hardware. In this paper, we discuss the design and performance of the improved tracking algorithm, referred to as mkFit. A key piece of the algorithm is the Matriplex library, containing dedicated code to optimally vectorize operations on small matrices. The physics performance of the mkFit algorithm is comparable to the nominal CMS tracking algorithm when reconstructing tracks from simulated proton-proton collisions within the CMS detector. We study the scaling of the algorithm as a function of the parallel resources utilized and find large speedups both from vectorization and multi-threading. mkFit achieves a speedup of a factor of 6 compared to the nominal algorithm when run in a single-threaded application within the CMS software framework.
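The Matriplex idea, storing many small matrices element-by-element so one SIMD operation processes the same element across all of them, can be sketched as follows. This is an illustrative NumPy sketch of that storage layout, not the actual Matriplex library, and the array shapes and helper name are assumptions.

```python
import numpy as np

# Matriplex-style storage (illustrative): instead of an array of N small
# 3x3 matrices, keep one array per matrix element, so A[i, j, :] holds
# element (i, j) of all N matrices in contiguous memory.
N = 64
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, N))
B = rng.standard_normal((3, 3, N))

def matriplex_matmul(A, B):
    """Multiply N 3x3 matrix pairs at once; each innermost operation is a
    single vector op over all N matrices, which is what SIMD lanes want."""
    C = np.zeros_like(A)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                C[i, j] += A[i, k] * B[k, j]  # one vectorised multiply-add
    return C

C = matriplex_matmul(A, B)
# Cross-check the first matrix against a plain per-matrix product
print(np.allclose(C[:, :, 0], A[:, :, 0] @ B[:, :, 0]))  # True
```

The loop nest has a fixed, tiny trip count (the matrix dimensions), while the vector dimension N sits innermost and contiguous, so the compiler or, here, NumPy, can keep every lane busy.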
Sophie Berkman (2020)
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. In liquid argon time projection chambers (TPCs) the charged particles from neutrino interactions produce ionization electrons which drift in an electric field towards a series of collection wires, and the signal on the wires is used to reconstruct the interaction. The MicroBooNE detector currently collecting data at Fermilab has 8000 wires, and planned future experiments like DUNE will have 100 times more, which means that the time required to reconstruct an event will scale accordingly. Modernization of liquid argon TPC reconstruction code, including vectorization, parallelization and code portability to GPUs, will help to mitigate these challenges. The liquid argon TPC hit finding algorithm within the LArSoft framework used across multiple experiments has been vectorized and parallelized. This increases the speed of the algorithm on the order of ten times within a standalone version on Intel architectures. This new version has been incorporated back into LArSoft so that it can be generally used. These methods will also be applied to other low-level reconstruction algorithms of the wire signals such as the deconvolution. The applications and performance of this modernized liquid argon TPC wire reconstruction will be presented.
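Hit finding on TPC wires amounts to scanning each wire's digitised waveform for pulses above threshold, and it vectorises naturally because every wire can be processed by the same operations. The sketch below is a toy NumPy version with invented waveforms and thresholds, not the LArSoft algorithm.

```python
import numpy as np

# Toy vectorised hit finder over many wire waveforms at once
# (illustrative only). Each row is one wire's digitised signal.
rng = np.random.default_rng(2)
n_wires, n_ticks = 8, 200
waveforms = rng.normal(0.0, 1.0, (n_wires, n_ticks))
waveforms[3, 100:110] += 20.0   # inject one obvious pulse on wire 3

threshold = 5.0
# One boolean pass over all wires: candidate ticks above threshold
above = waveforms > threshold
# Reduce per wire: does the wire have a hit, and where does it peak?
has_hit = above.any(axis=1)
peak_tick = waveforms.argmax(axis=1)

hits = [(w, int(peak_tick[w])) for w in np.flatnonzero(has_hit)]
print(hits)  # wire 3 fires, with its peak inside the injected window
```

Because the threshold comparison and the reductions act on the whole (wires x ticks) array at once, the same code runs over 8 wires or 800,000, which is the scaling property the abstract is after.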
CMOS pixel sensors (CPS) represent a novel technological approach to building charged particle detectors. CMOS processes allow a sensing volume and readout electronics to be integrated in a single silicon die, enabling sensors with a small pixel pitch (~20 μm) and low material budget (~0.2-0.3% $X_0$) per layer. These characteristics make CPS an attractive option for the vertexing and tracking systems of high energy physics experiments. Moreover, thanks to the mass-production industrial CMOS processes used to manufacture CPS, the fabrication cost can be significantly reduced in comparison to more standard semiconductor technologies. However, the attainable performance of CPS in terms of radiation hardness and readout speed is mostly determined by the fabrication parameters of the CMOS processes available on the market rather than by the intrinsic potential of CPS. The continuous evolution of commercial CMOS processes towards smaller feature sizes and high-resistivity epitaxial layers leads to better radiation hardness and allows the implementation of accelerated readout circuits. The TowerJazz 0.18 μm CMOS process, one of the most relevant examples, recently became of interest for several future detector projects. The most imminent of these projects is an upgrade of the Inner Tracking System (ITS) of the ALICE detector at the LHC. It will be followed by the Micro-Vertex Detector (MVD) of the CBM experiment at FAIR. Other experiments such as ILD consider CPS one of the viable options for flavour-tagging and tracking sub-systems.
Xiangyang Ju (2020)
Pattern recognition problems in high energy physics are notably different from traditional machine learning applications in computer vision. Reconstruction algorithms identify and measure the kinematic properties of particles produced in high energy collisions and recorded with complex detector systems. Two critical applications are the reconstruction of charged particle trajectories in tracking detectors and the reconstruction of particle showers in calorimeters. These two problems have unique challenges and characteristics, but both have high dimensionality, high degree of sparsity, and complex geometric layouts. Graph Neural Networks (GNNs) are a relatively new class of deep learning architectures which can deal with such data effectively, allowing scientists to incorporate domain knowledge in a graph structure and learn powerful representations leveraging that structure to identify patterns of interest. In this work we demonstrate the applicability of GNNs to these two diverse particle reconstruction problems.
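The core GNN primitive the abstract leans on, aggregating information along the edges of a sparse hit graph, can be shown in a few lines. This is a minimal NumPy sketch of one message-passing step with an invented toy graph; it is not the paper's network and omits learned weights entirely.

```python
import numpy as np

# One message-passing step on a toy hit graph (illustrative sketch).
# Nodes are hits with feature vectors; edges connect candidate hit pairs.
n_nodes, n_feat = 5, 3
rng = np.random.default_rng(3)
x = rng.standard_normal((n_nodes, n_feat))          # node features
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])  # (src, dst) edge list

def message_pass(x, edges):
    """Sum each node's incoming neighbour features, then concatenate them
    with the node's own features: the basic GNN aggregation step."""
    agg = np.zeros_like(x)
    np.add.at(agg, edges[:, 1], x[edges[:, 0]])  # scatter-add along edges
    return np.concatenate([x, agg], axis=1)

h = message_pass(x, edges)
print(h.shape)  # (5, 6): original features plus aggregated messages
```

The edge list is the graph structure where domain knowledge enters: only geometrically plausible hit pairs are connected, which keeps the representation sparse even for large detectors.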
The Timepix particle tracking telescope has been developed as part of the LHCb VELO Upgrade project, supported by the Medipix Collaboration and the AIDA framework. It is a primary piece of infrastructure for the VELO Upgrade project and is being used for the development of new sensors and front-end technologies for several upcoming LHC trackers and vertexing systems. The telescope is designed around the dual capability of the Timepix ASICs to provide information about either the deposited charge or the timing of tracks traversing the 14 × 14 mm matrix of 55 × 55 μm pixels. The rate of reconstructed tracks is optimised by taking advantage of the shutter-driven readout architecture of the Timepix chip, operated with existing readout systems. Results of tests conducted in the SPS North Area beam facility at CERN show that the telescope typically provides reconstructed track rates during the beam spills of between 3.5 and 7.5 kHz, depending on beam conditions. The tracks are time-stamped with 1 ns resolution with an efficiency above 98% and provide a pointing resolution at the centre of the telescope of 1.6 μm. By dropping the time-stamping requirement the rate can be increased to 15 kHz, at the expense of a small increase in background. The telescope infrastructure provides CO2 cooling and a flexible mechanical interface to the device under test, and has been used for a wide range of measurements during the 2011-2012 data-taking campaigns.