One of the biggest challenges of the High-Luminosity LHC (HL-LHC) era will be the significantly increased volume of data to be recorded and analyzed from the collisions at the ATLAS and CMS experiments. ServiceX is a software R&D project within the Data Organization, Management and Access (DOMA) area of IRIS-HEP, investigating new computational models for the HL-LHC era. ServiceX is an experiment-agnostic service enabling on-demand data delivery, tailored specifically for near-interactive, vectorized analyses. It is capable of retrieving data from grid sites, transforming it on the fly, and delivering the user-selected data in a variety of formats. New features that make the service ready for public use will be presented. An ongoing effort to integrate ServiceX with a popular statistical analysis framework in ATLAS will also be described, with an emphasis on the practical implementation of ServiceX in a physics analysis pipeline.
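As an illustration of such an on-demand delivery request, the sketch below uses the ServiceX Python client; the API shown (ServiceXDataset, get_data_pandas_df) follows the 2.x frontend and may differ between releases, and the dataset identifier and query text are placeholders, not part of the work described above.

    # Hedged sketch of an on-demand ServiceX delivery request.
    from servicex import ServiceXDataset

    # Placeholder Rucio dataset identifier, for illustration only.
    ds = ServiceXDataset("mc16_13TeV:SOME.DATASET.DAOD_PHYSLITE")

    # A func_adl selection serialized to qastle; schematic text that
    # requests one column (jet pT) rather than the full event record.
    query = ("(call ResultTTree (call Select (call EventDataset) "
             "(lambda (list e) (attr e 'JetPt'))) "
             "(list 'JetPt') 'analysis' 'out.root')")

    # ServiceX locates the files on the grid, runs the transformation
    # near the data, and streams back only the selected columns.
    jets = ds.get_data_pandas_df(query)
    print(jets.head())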
Upgrades to the LHCb computing infrastructure during the first long shutdown of the LHC have allowed high-quality decay information to be calculated by the software trigger, making a separate offline event reconstruction unnecessary. Furthermore, the storage footprint of a triggered candidate is an order of magnitude smaller than that of the entire raw event that would otherwise need to be persisted. Tesla, named following LHCb's convention of naming applications after renowned physicists, is an application designed to process the information calculated by the trigger, with the resulting output used to perform physics measurements directly.
The volume of experimental data generated by the EAST device is growing continuously, making it necessary to monitor the MDSplus data storage server on EAST. To facilitate the management of users on the MDSplus server, a real-time log analysis and monitoring system is needed. The data processing framework adopted by this log analysis system is Spark Streaming from the Spark ecosystem, with the real-time data stream derived from MDSplus logs. The system also relies on frameworks such as Flume and Kafka for log collection, aggregation, and distribution, which gives it the capacity to process MDSplus log data at scale. The system can process tens of millions of raw MDSplus log entries with second-level latency, then model the log information and display it on the web. This paper introduces the design and implementation of the overall architecture of the Spark-based real-time data-access log analysis system. Experimental results show that the system performs stably and reliably, and that it is of significant value for the management of fusion experiment data. The system has been designed and will be adopted in the next experimental campaign; further details are given in the paper.
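The kind of pipeline described above can be sketched in a few lines of PySpark using the DStream-era Kafka integration; the topic name, broker address, and log-line layout below are assumptions made purely for illustration.

    # Minimal sketch: Spark Streaming consumes MDSplus log lines that
    # Flume has shipped into Kafka, and counts accesses per user.
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext(appName="MDSplusLogAnalysis")
    ssc = StreamingContext(sc, batchDuration=1)  # 1 s micro-batches

    stream = KafkaUtils.createDirectStream(
        ssc, topics=["mdsplus-logs"],            # assumed topic name
        kafkaParams={"metadata.broker.list": "broker:9092"})

    # Assume the user name is the first whitespace-separated field
    # of each log line; count accesses per user in every batch.
    counts = (stream.map(lambda kv: kv[1].split()[0])
                    .map(lambda user: (user, 1))
                    .reduceByKey(lambda a, b: a + b))
    counts.pprint()  # a web front end would consume these results

    ssc.start()
    ssc.awaitTermination()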
The JSNS$^{2}$ (J-PARC Sterile Neutrino Search at J-PARC Spallation Neutron Source) experiment aims to search for neutrino oscillations over a 24 m short baseline at J-PARC. The JSNS$^{2}$ inner detector is filled with 17 tons of gadolinium(Gd)-loaded liquid scintillator (LS), with an additional 31 tons of unloaded LS in the intermediate $\gamma$-catcher and optically separated outer veto volumes. A total of 120 10-inch photomultiplier tubes observe the scintillation photons, and each analog waveform is recorded with flash analog-to-digital converters. We present details of the data acquisition, processing, and data quality monitoring system. We also present the two trigger logics developed for the beam trigger and the self-trigger.
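For illustration only, a toy version of a self-trigger decision (not the experiment's actual logic): fire when the summed PMT waveform crosses a threshold. The array shapes and the threshold value are assumptions.

    import numpy as np

    def self_trigger(waveforms, threshold):
        # waveforms: (n_pmts, n_samples) baseline-subtracted FADC
        # samples; trigger if the summed waveform exceeds threshold
        # in any sample (a software analogue of a level discriminator).
        summed = waveforms.sum(axis=0)
        return bool((summed > threshold).any())

    rng = np.random.default_rng(0)
    wf = rng.normal(0.0, 1.0, size=(120, 500))  # 120 PMTs, noise only
    wf[:, 250] += 5.0                           # add a coincident pulse
    print(self_trigger(wf, threshold=200.0))    # True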
The ATLAS experiment at the Large Hadron Collider has implemented a new system for recording information on detector status and data quality, and for transmitting this information to users performing physics analysis. This system revolves around the concept of defects, which are well-defined, fine-grained, unambiguous occurrences affecting the quality of recorded data. The motivation, implementation, and operation of this system are described.
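A minimal sketch of the defect idea, with invented names (these are not ATLAS's actual defect identifiers): primary defects are boolean flags attached to luminosity blocks, and a luminosity block is usable for analysis only if no vetoed defect covers it.

    from dataclasses import dataclass

    @dataclass
    class Defect:
        name: str          # e.g. "LAR_NOISE_BURST" (hypothetical name)
        lumi_blocks: set   # luminosity blocks the defect applies to

    def good_for_analysis(defects, vetoed_names, lumi_block):
        # Usable only if no vetoed defect covers this lumi block.
        return not any(d.name in vetoed_names and lumi_block in d.lumi_blocks
                       for d in defects)

    defects = [Defect("PIXEL_DISABLED_MODULES", {10, 11}),
               Defect("LAR_NOISE_BURST", {42})]
    print(good_for_analysis(defects, {"LAR_NOISE_BURST"}, 42))  # False
    print(good_for_analysis(defects, {"LAR_NOISE_BURST"}, 11))  # True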
The Large Hadron Collider beauty (LHCb) detector is designed to detect decays of b- and c-hadrons for the study of CP violation and rare decays. At the end of LHC Run 2, many of the LHCb measurements remained dominated by their statistical uncertainties. In order to increase the trigger yield for purely hadronic channels, the hardware trigger will be removed and the detector will be read out at 40 MHz. This, in combination with the five-fold increase in luminosity, requires radical changes to LHCb's electronics and, in some cases, the replacement of entire sub-detectors with state-of-the-art detector technologies. The Vertex Locator (VELO) surrounding the interaction region is used to reconstruct the collision points (primary vertices) and the decay vertices of long-lived particles (secondary vertices). The upgraded VELO will be composed of 52 modules placed along the beam axis, divided into two retractable halves. Each module will be equipped with 4 silicon hybrid pixel tiles, each read out by 3 VeloPix ASICs. The total output data rate anticipated for the whole detector will be around 1.6 Tbit/s. The highest-occupancy ASICs will see pixel hit rates of approximately 900 Mhit/s, with a corresponding output data rate of 15 Gbit/s. The LHCb upgrade detector will be the first at the LHC to be read out at the full collision rate of 40 MHz. The VELO upgrade will utilize the latest detector technologies to read out at this rate while maintaining the required radiation hardness and minimizing the detector material.
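The quoted figures are self-consistent, as a quick back-of-the-envelope check shows: 624 ASICs in total, an average of roughly 2.6 Gbit/s per ASIC against the 15 Gbit/s peak, and about 17 bits per hit on the hottest ASIC.

    # Cross-check of the rates quoted in the text.
    modules, tiles_per_module, asics_per_tile = 52, 4, 3
    n_asics = modules * tiles_per_module * asics_per_tile   # 624 ASICs

    total_rate_gbit = 1.6e3                                 # 1.6 Tbit/s
    avg_per_asic_gbit = total_rate_gbit / n_asics           # ~2.6 Gbit/s

    peak_rate_gbit, peak_hits_mhit = 15.0, 900.0
    bits_per_hit = peak_rate_gbit * 1e3 / peak_hits_mhit    # ~16.7 bits

    print(n_asics, round(avg_per_asic_gbit, 1), round(bits_per_hit, 1))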