The CMS experiment at the CERN LHC will be upgraded to accommodate the 5-fold increase in the instantaneous luminosity expected at the High-Luminosity LHC (HL-LHC). Concomitant with this increase will be an increase in the number of interactions in each bunch crossing and a significant increase in the total ionising dose and fluence. One part of this upgrade is the replacement of the current endcap calorimeters with a high-granularity sampling calorimeter equipped with silicon sensors, designed to manage the high collision rates. As part of the development of this calorimeter, a series of beam tests have been conducted with different sampling configurations using prototype segmented silicon detectors. In the most recent of these tests, conducted in late 2018 at the CERN SPS, the performance of a prototype calorimeter equipped with $\approx 12\,000$ channels of silicon sensors was studied with beams of high-energy electrons, pions and muons. This paper describes the custom-built, scalable data acquisition system, constructed from readily available FPGA mezzanines and low-cost Raspberry Pi computers.
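To make the scalable readout architecture concrete, the following is a minimal, hypothetical Python sketch in which each Raspberry Pi node streams event fragments read from its FPGA mezzanine to a central event builder over TCP. The fragment layout and all names (`FRAGMENT_HDR`, `read_fragment`, the host and port) are illustrative assumptions, not the actual HGCAL DAQ interfaces.

```python
# Hypothetical sketch of the scalable readout idea: one sender process per
# Raspberry Pi forwards FPGA event fragments to a central event builder.
import socket
import struct

# Assumed fragment header: (event_id, payload_len, board_id), big-endian.
FRAGMENT_HDR = struct.Struct(">IIH")

def read_fragment(event_id: int) -> bytes:
    """Placeholder for the FPGA readout; in reality this would read the mezzanine."""
    return bytes(64)

def run_sender(host: str, port: int, board_id: int, n_events: int) -> None:
    """Stream n_events fragments from one readout board to the event builder."""
    with socket.create_connection((host, port)) as sock:
        for event_id in range(n_events):
            payload = read_fragment(event_id)
            sock.sendall(FRAGMENT_HDR.pack(event_id, len(payload), board_id) + payload)

# Scaling out is then a matter of running one sender per Raspberry Pi, e.g.:
# run_sender("eventbuilder.local", 5000, board_id=3, n_events=1000)
```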
A large prototype of 1.3 m$^3$ was designed and built as a demonstrator of the semi-digital hadronic calorimeter (SDHCAL) concept proposed for the future ILC experiments. The prototype is a sampling hadronic calorimeter of 48 units. Each unit is built of an active layer made of a 1 m$^2$ Glass Resistive Plate Chamber (GRPC) detector placed inside a cassette whose walls are made of stainless steel. The cassette also contains the electronics used to read out the GRPC detector. The lateral granularity of the active layer is provided by the electronics pick-up pads of 1 cm$^2$ each. The cassettes are inserted into a self-supporting mechanical structure, also built of stainless steel plates, which, together with the cassette walls, plays the role of the absorber. The prototype was designed to be very compact, and important efforts were made to minimise the number of service cables in order to optimise the efficiency of the Particle Flow Algorithm techniques to be used in the future ILC experiments. The different components of the SDHCAL prototype were studied individually, and strict criteria were applied for the final selection of these components. Basic calibration procedures were performed after the prototype assembly. The prototype is the first of a series of new-generation detectors equipped with a power-pulsing mode intended to reduce the power consumption of this highly granular detector. A dedicated acquisition system was developed to deal with the output of more than 440,000 electronic channels in both trigger and triggerless modes. After its completion in 2011, the prototype was commissioned using cosmic rays and particle beams at CERN.
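As an illustration of the semi-digital readout idea, the following is a minimal Python sketch in which each pad reports a 2-bit code indicating which charge thresholds it crossed, rather than a full ADC value. The threshold values and the function name are placeholders for illustration, not the prototype's actual settings.

```python
# Minimal sketch of a semi-digital (2-bit, multi-threshold) pad readout:
# each 1 cm^2 pad reports only how many thresholds its charge exceeded.
def encode_pad(charge_pc: float,
               thresholds=(0.1, 5.0, 15.0)) -> int:  # placeholder values, in pC
    """Return the 2-bit semi-digital code for one pad (0 = no hit, 1..3 = hit level)."""
    code = 0
    for threshold in thresholds:
        if charge_pc >= threshold:
            code += 1
    return code

hits = [encode_pad(q) for q in (0.05, 0.8, 7.2, 22.0)]  # -> [0, 1, 2, 3]
```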
In view of a possible extension of the forward CMS muon detector system and future LHC luminosity upgrades, Micro-Pattern Gas Detectors (MPGDs) are an appealing technology. They can simultaneously provide precision tracking and fast trigger information, as well as sufficiently fine segmentation to cope with high particle rates in the high-$\eta$ region at the LHC and its future upgrades. We report on the design and construction of a full-size prototype for the CMS endcap system, the largest Triple-GEM detector built to date. We present details on the 3D modeling of the detector geometry, the implementation of the readout strips and electronics, and the detector assembly procedure.
We report on the performance of a monitoring system for a prototype calorimeter for the BTeV experiment that uses Lead Tungstate crystals coupled with photomultiplier tubes. The tests were carried out at the 70 GeV accelerator complex at Protvino, Russia.
Gas Electron Multipliers (GEM) are an interesting technology under consideration for the future upgrade of the forward region of the CMS muon system, specifically in the $1.6<|\eta|<2.4$ endcap region. With a sufficiently fine segmentation, GEMs can provide precision tracking as well as fast trigger information. The main objective is to contribute to the improvement of the CMS muon trigger. The construction of large-area GEM detectors is challenging from both the technological and production aspects. In view of the CMS upgrade, we have designed and built the largest full-size Triple-GEM muon detector, able to meet the stringent requirements imposed by the hostile environment of the high-luminosity LHC. Measurements were performed during several test beam campaigns at the CERN SPS in 2010 and 2011. The main issues under study are efficiency, spatial resolution and timing performance with different inter-electrode gap configurations and gas mixtures. In this paper, results on the performance of the prototypes in these beam tests are discussed.
We present the 3DGAN for the simulation of a future high-granularity calorimeter output as three-dimensional images. We prove the efficacy of Generative Adversarial Networks (GANs) for generating scientific data while retaining a high level of accuracy for diverse metrics across a large range of input variables. We demonstrate a successful application of the transfer learning concept: we first train the network to simulate electron showers over a reduced range of primary energies, and then train it further over a five-times-larger range (the model could not be trained on the larger range directly). The same concept is extended to generate showers for other particles (photons and neutral pions) depositing most of their energies in electromagnetic interactions. In addition, the generation of charged pion showers is also explored; a more accurate treatment would require additional data from other detectors, which is beyond the scope of the current work. Our further contribution is a demonstration of using GAN-generated data for a practical application: we train a third-party network using GAN-generated data and show that its response is similar to that of a network trained with data from the Monte Carlo simulation. The showers generated by the GAN agree with Monte Carlo to within $10\%$ for a diverse range of physics features, with a three-orders-of-magnitude speedup. The speedup for both the training and inference can be further enhanced by distributed training.
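To make the setup concrete, the following is a hypothetical PyTorch sketch of a generator producing 3D shower images conditioned on the primary-particle energy. The latent dimension, layer sizes, and the $32^3$ output volume are illustrative assumptions, not the published 3DGAN architecture.

```python
# Illustrative conditional generator for 3D calorimeter shower images:
# a noise vector plus an energy label is upsampled to a 32x32x32 volume.
import torch
import torch.nn as nn

class ShowerGenerator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.fc = nn.Linear(latent_dim + 1, 128 * 4 * 4 * 4)  # +1 for the energy label
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),  # 4^3 -> 8^3
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),   # 8^3 -> 16^3
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),    # 16^3 -> 32^3
            nn.ReLU(),  # energy deposits are non-negative
        )

    def forward(self, z: torch.Tensor, energy: torch.Tensor) -> torch.Tensor:
        x = self.fc(torch.cat([z, energy], dim=1))
        return self.deconv(x.view(-1, 128, 4, 4, 4))

z = torch.randn(8, 100)
energy = torch.rand(8, 1)                # primary energy, scaled to [0, 1]
showers = ShowerGenerator()(z, energy)   # shape: (8, 1, 32, 32, 32)
```

Conditioning the generator on the primary energy is what allows the transfer-learning step described above: the same network can be fine-tuned on a wider energy range without changing its architecture.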