
Implementation of high-performance, sub-microsecond deep neural networks on FPGAs for trigger applications

Added by Christian Schmitt
Publication date: 2019
Field: Physics
Language: English





Artificial neural networks are already widely used for physics analysis, but there are only a few applications within low-level hardware triggers, typically with small networks. Modern high-end FPGAs offer Tera-scale arithmetic performance, and thereby provide a significant number of operations per data set even for MHz-range data rates. We present a bottom-up approach to implementing typical neural network layers, in which we took into account both the special constraints that come from high-performance trigger systems, such as the ATLAS hardware trigger at the LHC, and the need for an efficient implementation. By specifically designing each layer type to match our requirements, we could develop a framework that reaches 90 to 100% processing efficiency for large layers, requires only a few extra resources for data flow and control, and offers latencies in the range of only tens to hundreds of nanoseconds for entire (deep) networks. Additionally, a toolkit was built around these optimized layer implementations, which facilitates the creation of the FPGA implementation of a trained NN model.
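As an illustration of the kind of transformation such a toolkit has to perform, the sketch below quantizes the weights of one trained dense layer to fixed-point integers before an integer multiply-accumulate, which is the form in which such arithmetic is typically mapped onto FPGA DSP blocks. The bit width, layer sizes, and scaling scheme are assumptions made for this example and are not taken from the paper.

```python
# Minimal sketch (not the authors' toolkit): quantize a trained dense layer
# to fixed-point integers, as is typically required before mapping the
# multiply-accumulate operations onto FPGA DSP blocks.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights and bias for a 16-input, 8-output dense layer.
W = rng.normal(scale=0.5, size=(16, 8))
b = rng.normal(scale=0.1, size=8)
x = rng.normal(size=16)            # one input feature vector

BITS = 8                           # assumed signed fixed-point weight width
scale = (2 ** (BITS - 1) - 1) / np.abs(W).max()

# Quantize weights to signed integers; on the FPGA these would sit in BRAM/LUTs.
W_q = np.round(W * scale).astype(np.int32)

# Integer-weight multiply-accumulate followed by a single rescale,
# mimicking what a chain of DSP slices would compute.
y_fixed = x @ W_q / scale + b
y_float = x @ W + b

print("max quantization error:", np.abs(y_fixed - y_float).max())
```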



Related research

P. Gadow, O. Kortner, S. Kortner (2015)
Highly selective first-level triggers are essential to exploit the full physics potential of the ATLAS experiment at High-Luminosity LHC (HL-LHC). The concept for a new muon trigger stage using the precision monitored drift tube (MDT) chambers to significantly improve the selectivity of the first-level muon trigger is presented. It is based on fast track reconstruction in all three layers of the existing MDT chambers, made possible by an extension of the first-level trigger latency to six microseconds and a new MDT read-out electronics required for the higher overall trigger rates at the HL-LHC. Data from $pp$-collisions at $\sqrt{s} = 8\,\mathrm{TeV}$ are used to study the minimal muon transverse momentum resolution that can be obtained using the MDT precision chambers, and to estimate the resolution and efficiency of the MDT-based trigger. A resolution of better than $4.1\%$ is found in all sectors under study. With this resolution, a first-level trigger with a threshold of $18\,\mathrm{GeV}$ becomes fully efficient for muons with a transverse momentum above $24\,\mathrm{GeV}$ in the barrel, and above $20\,\mathrm{GeV}$ in the end-cap region.
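As a rough, hypothetical illustration of a fast segment fit over three measurement layers, the sketch below performs a least-squares straight-line fit through three hit positions; the layer spacing and hit coordinates are invented, and this is not the ATLAS MDT trigger algorithm.

```python
import numpy as np

# Assumed layer positions along the muon path (m) and measured hit coordinates (m).
z = np.array([0.0, 0.5, 1.0])
y = np.array([0.012, 0.031, 0.052])

# Fit y = slope * z + intercept; a 3-point least-squares solve is cheap
# enough to fit within a microsecond-scale trigger latency budget.
A = np.vstack([z, np.ones_like(z)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"segment slope = {slope:.4f}, intercept = {intercept:.4f} m")
```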
Time-of-flight (ToF) techniques are standard techniques in high energy physics to determine particles' propagation directions. Since particle velocities are generally close to c, the speed of light, and typical detector dimensions are at the meter level, state-of-the-art ToF techniques must reach sub-nanosecond timing resolution. Among the various techniques already available, the recently developed ring-oscillator TDCs, implemented in low-cost FPGAs, feature a very interesting figure of merit, since very good timing performance may be achieved with limited processing resources. This is relevant for applications where unmanned sensors should have the lowest possible power consumption. This article describes in detail the application of this kind of ToF technique to muon tomography of geological bodies. Muon tomography aims at measuring density variations and absolute densities through the detection of the attenuation of the atmospheric muon flux due to the presence of matter. When the measured fluxes become very low, an identified source of noise comes from backward-propagating particles hitting the detector in a direction pointing to the geological body. The separation between through-going and backward-going particles on the basis of the ToF information is therefore a key parameter for the tomography analysis and subsequent predictions.
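A minimal sketch of the direction separation described above, assuming two detection planes and using the sign of the measured time-of-flight to distinguish through-going from backward-going candidates; the plane separation and timing resolution are assumed values, not those of the instrument in the article.

```python
import numpy as np

C = 0.299792458          # speed of light in m/ns
PLANE_SEPARATION = 1.2   # assumed distance between the two detection planes (m)
SIGMA_T = 0.5            # assumed per-plane timing resolution (ns, sub-nanosecond)

rng = np.random.default_rng(1)

def classify(t_front, t_rear):
    """Positive ToF (rear plane hit later) -> through-going candidate."""
    return "through-going" if (t_rear - t_front) > 0 else "backward-going"

# Simulate one through-going muon travelling at ~c with Gaussian timing smearing.
true_tof = PLANE_SEPARATION / C
t_front = rng.normal(0.0, SIGMA_T)
t_rear = rng.normal(true_tof, SIGMA_T)

print(classify(t_front, t_rear), f"(measured ToF = {t_rear - t_front:.2f} ns)")
```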
We studied the performance of a Convolutional Neural Network (CNN) for energy regression in a finely 3D-segmented calorimeter simulated by GEANT4. A CNN trained solely on a pure sample of pions achieved substantial improvement in the energy resolution for both single pions and jets over the conventional approaches. It maintained good performance for electron and photon reconstruction. We also used a Graph Neural Network (GNN) with edge convolution to assess the importance of timing information in the shower development for improved energy reconstruction. In this paper, we present the comparison of several reconstruction techniques: a simple energy sum, a dual-readout analog, a CNN, and a GNN with timing information.
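For illustration only, a small 3D convolutional regressor on a voxelized calorimeter image can be set up in Keras as sketched below; the grid size, architecture, and toy training target are assumptions and do not reproduce the networks studied in the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

GRID = (16, 16, 16)                           # assumed calorimeter segmentation

model = tf.keras.Sequential([
    layers.Input(shape=GRID + (1,)),          # one channel: energy per cell
    layers.Conv3D(8, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(2),
    layers.Conv3D(16, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                          # regressed energy
])
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for simulated showers: the target is the plain energy sum.
x = np.random.rand(32, *GRID, 1).astype("float32")
y = x.sum(axis=(1, 2, 3, 4))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```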
A. Hamilton (2010)
The ATLAS trigger has been used very successfully to collect collision data during the 2009 and 2010 LHC running at centre-of-mass energies of 900 GeV, 2.36 TeV, and 7 TeV. This paper presents the ongoing work to commission the ATLAS trigger with proton collisions, including an overview of the trigger performance based on extensive online running. We describe how the trigger has evolved with increasing LHC luminosity and give a brief overview of plans for forthcoming LHC running.
Jeff Heaton (2020)
Deep learning is a group of exciting new technologies for neural networks. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks that can handle tabular data, images, text, and audio as both input and output. Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain. This course will introduce the student to classic neural network structures, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), Generative Adversarial Networks (GAN), and reinforcement learning. Application of these architectures to computer vision, time series, security, natural language processing (NLP), and data generation will be covered. High-Performance Computing (HPC) aspects will demonstrate how deep learning can be leveraged both on graphics processing units (GPUs) and on grids. The focus is primarily on the application of deep learning to problems, with some introduction to mathematical foundations. Readers will use the Python programming language to implement deep learning using Google TensorFlow and Keras. It is not necessary to know Python prior to this book; however, familiarity with at least one programming language is assumed.
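In the spirit of the tools the course introduces, the following minimal Keras example fits a small feed-forward network to synthetic tabular data; it is purely illustrative and not taken from the book.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Synthetic tabular data: 10 features with a noisy linear target.
x = np.random.rand(256, 10).astype("float32")
y = x @ np.arange(1, 11, dtype="float32") + 0.1 * np.random.randn(256).astype("float32")

model = tf.keras.Sequential([
    layers.Input(shape=(10,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, verbose=0)

print(model.predict(x[:2], verbose=0).ravel())
```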