
The DZERO DAQ/Online Monitoring System and Applications, Including an Active Auto-recovery Tool

Added by: Gordon Watts
Publication date: 2003
Field: Physics
Language: English





The DZERO experiment, located at the Fermi National Accelerator Laboratory, has recently started the Run 2 physics program. The detector upgrade included a new Data Acquisition/Level 3 Trigger system. Part of the design for the DAQ/Trigger system was a new monitoring infrastructure. The monitoring was designed to satisfy real-time requirements with 1-second resolution as well as to serve non-real-time data. It was also designed to handle a large number of displays without putting undue load on the sources of monitoring information. The resulting protocol is based on XML, is easily extensible, and has spawned a large number of displays, clients, and other applications. It is also one of the few sources of detector performance information available outside the Online Systems security wall. A tool based on this system has been designed to provide auto-recovery from DAQ errors. This talk will include a description of the DZERO DAQ/Online monitor server, based on the ACE framework, the protocol, the auto-recovery tool, and several of the unique displays, which include an ORACLE-based archiver and numerous GUIs.
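
As an illustration of what an extensible XML item exchange can look like (the actual DZERO message schema is not reproduced here, and every element and attribute name below is hypothetical), the following Python sketch builds a client request for a few monitoring items and parses a reply of the same general shape:

    # Hypothetical sketch of an XML monitoring exchange; element and attribute
    # names are invented for illustration and are not the DZERO schema.
    import xml.etree.ElementTree as ET

    def build_request(client, items):
        """Build an XML request asking the monitor server for named items."""
        root = ET.Element("monitor-request", attrib={"client": client})
        for name in items:
            ET.SubElement(root, "item", attrib={"name": name})
        return ET.tostring(root, encoding="unicode")

    def parse_reply(xml_text):
        """Extract (name, value, timestamp) tuples from a server reply."""
        root = ET.fromstring(xml_text)
        return [(e.get("name"), e.get("value"), e.get("t"))
                for e in root.findall("item")]

    print(build_request("example-display", ["l3_rate", "daq_deadtime"]))
    reply = '<monitor-reply><item name="l3_rate" value="998.7" t="1057812345"/></monitor-reply>'
    print(parse_reply(reply))

Because every item is simply another element, new quantities can be published without breaking existing displays, which is the kind of extensibility the abstract refers to.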



Related research

The DZERO experiment located at Fermilab has recently started Run II with an upgraded detector. The Run II physics program requires the data acquisition system to read out the detector at a rate of 1 kHz. Event fragments, totaling 250 kB, are read out from approximately 60 front-end crates and sent to a particular farm node for Level 3 Trigger processing. A scalable system, capable of complex event routing, has been designed and implemented based on commodity components: VMIC 7750 Single Board Computers for readout, a Cisco 6509 switch for data flow, and close to 100 Linux-based PCs for high-level event filtering.
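
Taking the numbers quoted above at face value, a quick back-of-the-envelope calculation (a sketch only, not taken from the paper) gives the aggregate bandwidth the switch must sustain and the average fragment size per crate:

    # Rough rates implied by the figures quoted in the abstract above.
    event_rate_hz = 1_000      # Level 3 input rate: 1 kHz
    event_size_kb = 250        # total event size: 250 kB
    n_crates = 60              # approximate number of front-end crates

    aggregate_mb_s = event_rate_hz * event_size_kb / 1_000  # ~250 MB/s through the switch
    fragment_kb = event_size_kb / n_crates                   # ~4 kB per crate fragment

    print(f"aggregate throughput ~ {aggregate_mb_s:.0f} MB/s")
    print(f"average fragment size ~ {fragment_kb:.1f} kB")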
The FragmentatiOn Of Target (FOOT) experiment aims to provide precise nuclear cross-section measurements for two different fields: hadrontherapy and radio-protection in space. The main reason is the important role the nuclear fragmentation process plays in both fields, where the health risks caused by radiation are very similar and mainly attributable to fragmentation. The FOOT experimental setup has been developed so that it is easily movable and fits within the space limitations of the experimental and treatment rooms available in hadrontherapy treatment centers, where most of the data taking is carried out. The Trigger and Data Acquisition system must follow the same criteria and work in different laboratories and under different conditions. It has been designed to acquire the largest possible sample with high accuracy in a controlled and online-monitored environment. The data collected are processed in real time for quality assessment and are available to the DAQ crew and detector experts during data taking.
A new $\mu$TCA DAQ system was introduced in the CANDLES experiment, with a SpaceWire-to-GigabitEthernet (SpaceWire-GigabitEthernet) network for data readout and Flash Analog-to-Digital Converters (FADCs). With SpaceWire-GigabitEthernet, we can construct a flexible DAQ network with multi-path access to the FADCs using off-the-shelf computers. The FADCs are equipped with 8 event buffers, which act as de-randomizers to detect sequential decays from the background. SpaceWire-GigabitEthernet has high latency (about 100 $\mu$s) due to its long turnaround time, while GigabitEthernet has high throughput. To reduce dead time, we developed the DAQ system with 4 crate-parallel reading threads (modules in different crates are read in parallel). As a result, the readout time is reduced by a factor of 4, from 40 ms to 10 ms. With this improved performance, higher background suppression is expected for the CANDLES experiment. Moreover, for energy calibration, an event-parallel reading process (events are read in parallel) was also introduced to reduce measurement time: with 2 event-parallel reading processes, the data rate is doubled.
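
The gain from crate-parallel reading comes from overlapping the latency-dominated transfers. The minimal sketch below (the read_crate function and its timing are stand-ins, not the actual SpaceWire-GigabitEthernet access code) shows how reading four crates in four threads collapses a ~40 ms serialized readout into roughly the ~10 ms needed for a single crate:

    # Sketch of crate-parallel readout: each crate gets its own reading thread,
    # so the ~100 us per-transaction turnaround overlaps across crates instead
    # of being serialized. read_crate() is a placeholder, not real readout code.
    import time
    from concurrent.futures import ThreadPoolExecutor

    N_CRATES = 4
    LATENCY_S = 100e-6            # ~100 microsecond turnaround per transaction

    def read_crate(crate_id, n_transactions=100):
        """Pretend to read one crate; the time is dominated by latency."""
        for _ in range(n_transactions):
            time.sleep(LATENCY_S)  # stand-in for one request/response round trip
        return crate_id

    start = time.time()
    with ThreadPoolExecutor(max_workers=N_CRATES) as pool:
        list(pool.map(read_crate, range(N_CRATES)))
    parallel_s = time.time() - start
    print(f"crate-parallel: {parallel_s:.3f} s, "
          f"serialized estimate: {N_CRATES * 100 * LATENCY_S:.3f} s")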
The status of the CMS RPC Gas Gain Monitoring (GGM) system developed at the Frascati Laboratory of INFN (Istituto Nazionale di Fisica Nucleare) is reported. The GGM system is a cosmic ray telescope based on small RPC detectors operated with the same gas mixture used by the CMS RPC system. The GGM gain and efficiency are continuously monitored online, providing a fast and accurate determination of any shift in working-point conditions. The construction details and the first results of the GGM commissioning are described.
B. Acar, G. Adamov, C. Adloff, 2020
The CMS experiment at the CERN LHC will be upgraded to accommodate the 5-fold increase in the instantaneous luminosity expected at the High-Luminosity LHC (HL-LHC). Concomitant with this increase will be an increase in the number of interactions in each bunch crossing and a significant increase in the total ionising dose and fluence. One part of this upgrade is the replacement of the current endcap calorimeters with a high-granularity sampling calorimeter equipped with silicon sensors, designed to manage the high collision rates. As part of the development of this calorimeter, a series of beam tests have been conducted with different sampling configurations using prototype segmented silicon detectors. In the most recent of these tests, conducted in late 2018 at the CERN SPS, the performance of a prototype calorimeter equipped with $\approx 12,000$ channels of silicon sensors was studied with beams of high-energy electrons, pions and muons. This paper describes the custom-built scalable data acquisition system that was built with readily available FPGA mezzanines and low-cost Raspberry Pi computers.
