
Off-line data quality monitoring for the GERDA experiment

Posted by: Paolo Zavarise
Publication date: 2011
Research field: Physics
Paper language: English





GERDA is an experiment searching for the neutrinoless double-beta (ββ) decay of Ge-76. The experiment uses an array of high-purity germanium detectors, enriched in Ge-76, directly immersed in liquid argon. GERDA recently started physics data taking with eight enriched coaxial detectors. The status of the experiment has to be closely monitored in order to promptly identify possible instabilities or problems. The on-line slow-control system is therefore complemented by a regular off-line monitoring of data quality. This ensures that the data are qualified for use in the physics analysis and makes it possible to reject data sets which do not meet the minimum quality standards. The off-line data monitoring is performed entirely within the GELATIO software framework. In addition, a relational database, complemented by a web-based interface, was developed to support the off-line monitoring and to automatically provide the information needed to assess data quality on a daily basis. The concept and the performance of the off-line monitoring tools were tested and validated during the one-year commissioning phase.
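As a rough illustration of such an off-line monitoring chain (a minimal sketch only, not the actual GELATIO/GERDA implementation; the function names, thresholds and database schema below are assumptions), a per-run quality summary could be computed and written to a relational database that a web interface then queries:

```python
import sqlite3
from statistics import mean

def summarize_run(run_id, baseline_values, trigger_rate_hz, expected_rate_hz=0.05):
    """Compute simple, illustrative quality indicators for one run."""
    flags = []
    if abs(trigger_rate_hz - expected_rate_hz) > 0.5 * expected_rate_hz:
        flags.append("trigger_rate_outlier")
    if max(baseline_values) - min(baseline_values) > 5.0:  # assumed drift threshold (ADC counts)
        flags.append("baseline_drift")
    return {
        "run_id": run_id,
        "mean_baseline": mean(baseline_values),
        "trigger_rate_hz": trigger_rate_hz,
        "quality_ok": not flags,
        "flags": ",".join(flags),
    }

def store_summary(db_path, summary):
    """Insert the run summary into a relational database for a web front end to read."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS run_quality (
               run_id INTEGER PRIMARY KEY,
               mean_baseline REAL,
               trigger_rate_hz REAL,
               quality_ok INTEGER,
               flags TEXT)"""
    )
    con.execute(
        "INSERT OR REPLACE INTO run_quality VALUES (?, ?, ?, ?, ?)",
        (summary["run_id"], summary["mean_baseline"], summary["trigger_rate_hz"],
         int(summary["quality_ok"]), summary["flags"]),
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    s = summarize_run(run_id=42, baseline_values=[100.1, 100.3, 99.8], trigger_rate_hz=0.06)
    store_summary("quality.db", s)
    print(s)
```

In the real system the quality indicators come out of the GELATIO analysis chain and are exposed through a dedicated web interface; the SQLite table here merely stands in for that database.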



Read also

N. J. Ayres, G. Ban, G. Bison (2019)
Psychological bias towards, or away from, a prior measurement or a theory prediction is an intrinsic threat to any data analysis. While various methods can be used to avoid the bias, e.g. actively not looking at the result, only data blinding is a traceable and thus trustworthy method to circumvent the bias and to convince a public audience that there is not even an accidental psychological bias. Data blinding is nowadays a standard practice in particle physics, but it is particularly difficult for experiments searching for the neutron electric dipole moment, as several cross measurements, in particular of the magnetic field, create a self-consistent network into which it is hard to inject a fake signal. We present an algorithm that modifies the data without influencing the experiment. Results of an automated analysis of the data are used to change the recorded spin state of a few neutrons of each measurement cycle. The flexible algorithm is applied twice to the data, to provide different data to various analysis teams. This gives us the option to sequentially apply various blinding offsets for separate analysis steps with independent teams. The subtle modification of the data allows us to modify the algorithm and to produce a re-blinded data set without revealing the blinding secret. The method was designed for the 2015/2016 measurement campaign of the nEDM experiment at the Paul Scherrer Institute. However, it can be re-used with minor modification for the follow-up experiment n2EDM, and may be suitable for comparable efforts.
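The blinding idea can be sketched as follows. This is a hedged illustration, not the published nEDM algorithm: the event selection, flip fraction, key handling and all names below are assumptions, whereas the real method uses the results of an automated analysis to choose which events to modify. A deterministic secret flips the recorded spin state of a small subset of neutrons per cycle, so the same secret can later undo or re-apply the blinding:

```python
import hmac
import hashlib
import random

def blind_cycle(spin_states, cycle_id, secret_key, flip_fraction=0.001):
    """Return a blinded copy of the per-neutron spin states (0/1) for one cycle.

    A deterministic pseudo-random subset of events is flipped, so knowing the
    secret reproduces (or reverses) the blinding. Illustrative sketch only.
    """
    # Derive a per-cycle seed from the secret so the selection is reproducible.
    seed = hmac.new(secret_key, str(cycle_id).encode(), hashlib.sha256).digest()
    rng = random.Random(seed)
    n_flip = max(1, int(round(flip_fraction * len(spin_states))))
    indices = rng.sample(range(len(spin_states)), n_flip)
    blinded = list(spin_states)
    for i in indices:
        blinded[i] = 1 - blinded[i]  # flip spin-up (1) <-> spin-down (0)
    return blinded

# Two different secrets give two differently blinded data sets, mirroring the
# idea of applying the algorithm twice for independent analysis teams.
cycle = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 100
blind_a = blind_cycle(cycle, cycle_id=7, secret_key=b"team-A-secret")
blind_b = blind_cycle(cycle, cycle_id=7, secret_key=b"team-B-secret")
```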
The quality of the incoming experimental data is of significant importance both for analysis and for running the experiment. The main task of the Baikal-GVD DQM system is to monitor the status of the detector and of the collected data with a run-by-run analysis. It must be fast enough to provide analysis results to the detector shifters and to allow participation in the global multi-messenger system.
The main purpose of the Baikal-GVD Data Quality Monitoring (DQM) system is to monitor the status of the detector and the collected data. The system estimates the quality of the recorded signals and performs data validation. The DQM system is integrated with Baikal-GVD's unified software framework (BARS) and operates in a quasi-online manner. This allows us to react promptly and effectively to changes in the telescope conditions.
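A quasi-online, run-by-run validation loop of the kind described above might be sketched as follows; this is not the BARS/DQM implementation, and the file layout, thresholds and function names are purely illustrative assumptions:

```python
import time
from pathlib import Path

def validate_run(hit_rates_hz, dead_channel_fraction, max_dead=0.1, rate_window=(10.0, 200.0)):
    """Toy run-by-run validation; thresholds are placeholders, not Baikal-GVD values."""
    low, high = rate_window
    rate_ok = all(low <= r <= high for r in hit_rates_hz)
    return rate_ok and dead_channel_fraction <= max_dead

def watch_runs(run_dir, poll_seconds=60):
    """Quasi-online loop: look for newly finished run summaries and validate them."""
    seen = set()
    while True:
        for run_file in Path(run_dir).glob("run_*.summary"):
            if run_file in seen:
                continue
            seen.add(run_file)
            # In a real system the summary would come from the reconstruction
            # chain; here we pretend it contains one hit rate per line.
            rates = [float(x) for x in run_file.read_text().split()]
            ok = validate_run(rates, dead_channel_fraction=0.0)
            print(f"{run_file.name}: {'GOOD' if ok else 'BAD'}")
        time.sleep(poll_seconds)
```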
The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach to the CMS computing infrastructure.
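As a toy illustration of a popularity study of this kind (the records, dataset names and ranking below are invented, not the actual CMS meta-data sources or placement policy), one can count accesses per dataset and rank them as candidates for dynamic data placement:

```python
from collections import Counter

# Hypothetical access-log records; real CMS meta-data has a richer schema.
accesses = [
    {"dataset": "/ZMM/Summer11/AODSIM", "user": "alice"},
    {"dataset": "/ZMM/Summer11/AODSIM", "user": "bob"},
    {"dataset": "/TTJets/Fall11/AODSIM", "user": "alice"},
]

def rank_datasets(records):
    """Rank datasets by access count as a simple popularity proxy."""
    return Counter(r["dataset"] for r in records).most_common()

def placement_candidates(records, top_n=1):
    """Datasets a dynamic data-placement policy might choose to replicate first."""
    return [name for name, _ in rank_datasets(records)[:top_n]]

print(rank_datasets(accesses))
print(placement_candidates(accesses))
```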
A. Rimoldi, A. Dell'Acqua (2003)
The simulation of the ATLAS detector is a major challenge, given the complexity of the detector and the demanding environment of the LHC. The apparatus, one of the biggest and most complex ever designed, requires a detailed, flexible and, if possible, fast simulation, which is needed already today to deal with questions related to design optimization, with issues raised by staging scenarios, and of course to enable the detailed physics studies that lay the basis for the first physics discoveries. Scalability and robustness stand out as the most critical issues to be faced in the implementation of such a simulation. In this paper we present the status of the present simulation and the adopted solutions in terms of speed optimization, centralization of services, framework facilities and persistency solutions. Emphasis is put on the global performance when the different detector components are brought together in a full and detailed simulation. The reference tool adopted is Geant4.