In this paper, we show how to adapt and deploy anomaly detection algorithms based on deep autoencoders for the unsupervised detection of new physics signatures in the extremely challenging environment of a real-time event selection system at the Large Hadron Collider (LHC). We demonstrate that new physics signatures can be enhanced by three orders of magnitude, while staying within the strict latency and resource constraints of a typical LHC event filtering system. This would allow for collecting datasets potentially enriched with high-purity contributions from new physics processes. Through highly parallel, per-layer implementations of the network, support for autoencoder-specific losses on FPGAs, and latent-space-based inference, we demonstrate that anomaly detection can be performed in as little as $80\,$ns using less than 3% of the logic resources in the Xilinx Virtex UltraScale+ VU9P FPGA, opening the way to real-life applications of this idea during the next data-taking campaign of the LHC.
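As a rough illustration of the reconstruction-error approach summarized above, the sketch below (assumed layer sizes, input dimension and trigger threshold; this is not the deployed FPGA design) trains a small dense autoencoder with Keras and flags events whose mean-squared reconstruction error exceeds a high quantile of the background score distribution. A latent-space variant would instead score events on the encoder output alone, avoiding the decoder on the FPGA.

    import numpy as np
    import tensorflow as tf

    # Toy "event" features standing in for detector-level trigger inputs (assumed shape).
    rng = np.random.default_rng(0)
    background = rng.normal(0.0, 1.0, size=(10000, 57)).astype("float32")

    # Small dense autoencoder; layer sizes are illustrative, not the published architecture.
    autoencoder = tf.keras.Sequential([
        tf.keras.Input(shape=(57,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(3, activation="relu"),       # compressed latent space
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(57, activation="linear"),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(background, background, epochs=5, batch_size=256, verbose=0)

    def anomaly_score(events):
        """Per-event mean-squared reconstruction error, used as the anomaly score."""
        reconstructed = autoencoder.predict(events, verbose=0)
        return np.mean((events - reconstructed) ** 2, axis=1)

    # Accept only the rarest events, mimicking a trigger working point (quantile assumed).
    threshold = np.quantile(anomaly_score(background), 0.999)
    print("accept event if anomaly score >", threshold)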
Finding tracks downstream of the magnet at the earliest LHCb trigger level is not part of the baseline plan of the upgrade trigger, on account of the significant CPU time required to execute the search. Many long-lived particles, such as $K^0_S$ and strange baryons, decay after the vertex track detector, which limits their reconstruction efficiency. We present a performance study of an innovative real-time tracking system based on FPGAs, developed within an R&D effort in the context of the LHCb Upgrade Ib (LHC Run~4), dedicated to the reconstruction of particles downstream of the magnet in the forward tracking detector (Scintillating Fibre Tracker) and capable of processing events at the full LHC collision rate of 30 MHz.
The Large Hadron Collider presents an unprecedented opportunity to probe the realm of new physics in the TeV region and shed light on some of the core unresolved issues of particle physics. These include the nature of electroweak symmetry breaking, the origin of mass, the possible constituents of cold dark matter, new sources of CP violation needed to explain the baryon excess in the universe, the possible existence of extra gauge groups and extra matter, and, importantly, the path Nature chooses to resolve the hierarchy problem: is it supersymmetry or extra dimensions? Many models of new physics beyond the standard model contain a hidden sector which can be probed at the LHC. Additionally, the LHC will be a top factory, and accurate measurements of the properties of the top quark and its rare decays will provide a window to new physics. Further, the LHC could shed light on the origin of neutrino masses if the new physics associated with their generation lies in the TeV region. Finally, the LHC is also a laboratory to test the hypothesis of TeV-scale strings and D-brane models. An overview of these possibilities is presented in the spirit that it will serve as a companion to the Technical Design Reports (TDRs) by the particle detector groups ATLAS and CMS, to facilitate the test of the new theoretical ideas at the LHC. Which of these ideas stands the test of the LHC data will govern the course of particle physics in the subsequent decades.
We investigate new physics scenarios where systems composed of a single top quark accompanied by missing transverse energy, dubbed monotops, can be produced at the LHC. Following a simplified-model approach, we describe all possible monotop production modes via an effective theory and estimate the sensitivity of the LHC to the observation of a monotop state, assuming 20 fb$^{-1}$ of collisions at a center-of-mass energy of 8 TeV. Considering both leptonic and hadronic top quark decays, we show that large fractions of the parameter space are reachable and that new physics particles with masses ranging up to 1.5 TeV can leave hints within the 2012 LHC dataset, assuming moderate new physics coupling strengths.
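A back-of-the-envelope version of such a sensitivity estimate can be written as a simple counting experiment: expected signal and background yields at a given integrated luminosity are converted into a Gaussian significance. All cross-sections, efficiencies and the mass scan below are invented placeholders, not the values used in the analysis.

    import math

    LUMI_FB = 20.0  # integrated luminosity in fb^-1, as quoted in the abstract

    def expected_events(cross_section_fb, efficiency, lumi_fb=LUMI_FB):
        """Expected yield: cross-section x luminosity x selection efficiency."""
        return cross_section_fb * lumi_fb * efficiency

    def significance(n_signal, n_background):
        """Simple counting significance S / sqrt(S + B)."""
        return n_signal / math.sqrt(n_signal + n_background)

    # Placeholder background and a scan over hypothetical new-state masses.
    background = expected_events(cross_section_fb=5000.0, efficiency=0.01)
    for mass_gev, sigma_fb in [(100, 800.0), (500, 60.0), (1000, 8.0), (1500, 1.5)]:
        signal = expected_events(sigma_fb, efficiency=0.05)
        print(f"m = {mass_gev:4d} GeV: S = {signal:7.1f}, Z = {significance(signal, background):.2f}")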
On-detector digital electronics in High-Energy Physics experiments is increasingly being implemented by means of SRAM-based FPGAs, due to their capabilities of reconfiguration, real-time processing and multi-gigabit data transfer. Radiation-induced single event upsets in the configuration memory hinder correct operation, since they may alter the programmed routing paths and logic functions. In most trigger and data acquisition systems, data from several front-end modules are concentrated into a single board, which then transmits them to back-end electronics for acquisition and triggering. Since the front-end modules are identical, they host identical FPGAs, which are programmed with the same bitstream. In this work, we present a novel scrubber capable of correcting radiation-induced soft errors in the configuration of SRAM-based FPGAs by majority voting across different modules. We show an application of this system to the read-out electronics of the Aerogel Ring Imaging CHerenkov (ARICH) subdetector of the Belle II experiment at SuperKEKB at the KEK laboratory (Tsukuba, Japan). We discuss the architecture of the system and its implementation in a Virtex-5 LX50T FPGA on the concentrator board, correcting the configuration of up to six Spartan-6 LX45 FPGAs on the corresponding front-end modules. We present results from fault-injection and neutron irradiation tests at the TRIGA reactor of the Jozef Stefan Institute (Ljubljana, Slovenia) and compare the performance of our solution to that of the Xilinx Soft Error Mitigation controller.
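The voting idea admits a compact sketch: because the front-end FPGAs run the same bitstream, the same configuration frame can be read back from several modules and corrected to the bitwise majority. The snippet below (hypothetical frame size and readback format, not the actual configuration interface) shows the majority computation for three modules.

    def bitwise_majority(frames):
        """Bitwise majority vote over an odd number of equal-length configuration frames.

        Each frame is a list of 32-bit words read back from one front-end FPGA; a bit is
        set in the voted result if it is set in more than half of the frames.
        """
        n = len(frames)
        assert n % 2 == 1 and len({len(f) for f in frames}) == 1
        voted = []
        for words in zip(*frames):
            result = 0
            for bit in range(32):
                ones = sum((word >> bit) & 1 for word in words)
                if ones * 2 > n:
                    result |= 1 << bit
            voted.append(result)
        return voted

    # Example: one module has a single upset configuration bit; the vote restores it.
    golden = [0xDEADBEEF, 0x0000FFFF]
    upset = [0xDEADBEEF, 0x0001FFFF]  # one flipped bit in the second word
    print([hex(word) for word in bitwise_majority([golden, upset, golden])])

In the actual system the correction would be applied through the FPGA configuration interface; the sketch only illustrates the voting logic across identical modules.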
We present a new calculation of the energy distribution of high-energy neutrinos from the decay of charm and bottom hadrons produced at the Large Hadron Collider (LHC). In the kinematical region of very forward rapidities, heavy-flavor production and decay is a source of tau neutrinos that leads to thousands of charged-current tau neutrino events in a 1 m long, 1 m radius lead neutrino detector at a distance of 480 m from the interaction region. In our computation, next-to-leading order QCD radiative corrections are accounted for in the production cross-sections. Non-perturbative intrinsic-$k_T$ effects are approximated by a simple phenomenological model introducing a Gaussian $k_T$-smearing of the parton distribution functions, which might also mimic perturbative effects due to multiple initial-state soft-gluon emissions. The transition from partonic to hadronic states is described by phenomenological fragmentation functions. To study the effect of various input parameters, theoretical predictions for $D_s^\pm$ production are compared with LHCb data on double-differential cross-sections in transverse momentum and rapidity. The uncertainties related to the choice of the input parameter values, ultimately affecting the predictions of the tau neutrino event distributions, are discussed. We consider a 3+1 neutrino mixing scenario to illustrate the potential for a neutrino experiment to constrain the 3+1 parameter space using tau neutrinos and antineutrinos. We find large theoretical uncertainties in the predictions of the neutrino fluxes in the far-forward region. Untangling the effects of tau neutrino oscillations into sterile neutrinos and distinguishing a 3+1 scenario from the standard scenario with three active neutrino flavours will be challenging due to the large theoretical uncertainties from QCD.
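For orientation, the two-flavour-style survival probability driving such a 3+1 study can be evaluated directly. The mixing angle and mass splitting below are arbitrary illustration values, while $L = 480$ m corresponds to the detector distance quoted in the abstract.

    import math

    def tau_survival_probability(delta_m2_ev2, sin2_2theta, length_m, energy_gev):
        """Two-flavour-style 3+1 survival probability:
        P(nu_tau -> nu_tau) = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
        phase = 1.27 * delta_m2_ev2 * (length_m / 1000.0) / energy_gev
        return 1.0 - sin2_2theta * math.sin(phase) ** 2

    # Illustrative (not fitted) 3+1 parameters evaluated at the 480 m baseline.
    for energy_gev in (20.0, 100.0, 500.0):
        p = tau_survival_probability(delta_m2_ev2=1.0, sin2_2theta=0.1,
                                     length_m=480.0, energy_gev=energy_gev)
        print(f"E = {energy_gev:6.1f} GeV: P(nu_tau survival) = {p:.6f}")

With parameters of this order the oscillation effect is small at such a short baseline and high energies, which illustrates why the QCD-driven flux uncertainties discussed above make the 3+1 discrimination challenging.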