
Unified System for Processing Real and Simulated Data in the ATLAS Experiment

Published by Alexandre Vaniachine
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





The physics goals of the next Large Hadron Collider run include high-precision tests of the Standard Model and searches for new physics. These goals require detailed comparison of data with computational models simulating the expected data behavior. To highlight the role that modeling and simulation play in future scientific discovery, we report on use cases and experience with a unified system built to process both real and simulated data of growing volume and variety.




Read also

A.V. Vaniachine (2013)
The ever-increasing volumes of scientific data present new challenges for distributed computing and Grid technologies. The emerging Big Data revolution drives exploration in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprising many jobs) has become the unit of petascale data processing on the Grid. Splitting a large data processing task into jobs enables fine-granularity checkpointing, analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries, achieving reliable six-sigma production quality in petascale data processing on the Grid. The computing experience of the ATLAS and CMS experiments provides a foundation for the reliability engineering needed to scale Grid technologies for data processing beyond the petascale.
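The split-and-retry pattern described above can be sketched in a few lines. The sketch below is purely illustrative: the function names, retry budget, and failure model are assumptions for demonstration, not the actual ATLAS/CMS production machinery.

```python
# Illustrative sketch: a dataset-level task is split into many jobs, and
# transient job failures are recovered by automatic re-tries, analogous to
# re-sending dropped TCP/IP packets during a large file transfer.
import random

MAX_RETRIES = 3  # assumed retry budget for transient failures

def run_job(files):
    """Stand-in for a Grid job; fails transiently with a small probability."""
    if random.random() < 0.05:
        raise RuntimeError("transient failure (e.g. worker node lost)")
    return [f + ".processed" for f in files]

def process_task(dataset, files_per_job=10):
    """Split a task over a dataset into jobs and re-try each failed job."""
    jobs = [dataset[i:i + files_per_job]
            for i in range(0, len(dataset), files_per_job)]
    outputs = []
    for job in jobs:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                outputs.extend(run_job(job))
                break
            except RuntimeError:
                if attempt == MAX_RETRIES:
                    raise  # persistent failure: escalate instead of retrying forever
    return outputs

if __name__ == "__main__":
    dataset = [f"file_{i:04d}.root" for i in range(100)]
    print(len(process_task(dataset)), "files processed")
```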
T. Golling (2011)
The ATLAS experiment at the Large Hadron Collider has implemented a new system for recording information on detector status and data quality, and for transmitting this information to users performing physics analysis. This system revolves around the concept of defects, which are well-defined, fine-grained, unambiguous occurrences affecting the quality of recorded data. The motivation, implementation, and operation of this system are described.
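As a rough illustration of the defect concept, a minimal sketch follows; the record fields, the example defect name, and the luminosity-block logic are hypothetical and do not reflect the actual ATLAS data-quality schema.

```python
# Hypothetical sketch of a "defect" record and a good-for-physics check;
# field names and the example defect label are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Defect:
    name: str          # e.g. "TOY_SUBDETECTOR_OFF" (hypothetical label)
    run: int           # run number the defect applies to
    lb_start: int      # first affected luminosity block
    lb_end: int        # last affected luminosity block
    tolerable: bool    # whether analyses may still use the affected data
    comment: str = ""

def good_for_physics(run, lb, defects):
    """A luminosity block is usable if no intolerable defect covers it."""
    return not any(
        d.run == run and d.lb_start <= lb <= d.lb_end and not d.tolerable
        for d in defects
    )

defects = [Defect("TOY_SUBDETECTOR_OFF", run=200000, lb_start=10, lb_end=25,
                  tolerable=False, comment="toy example")]
print(good_for_physics(200000, 12, defects))  # False: block lies inside the defect
print(good_for_physics(200000, 40, defects))  # True: block is unaffected
```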
The main purpose of the Baikal-GVD Data Quality Monitoring (DQM) system is to monitor the status of the detector and the collected data. The system estimates the quality of the recorded signals and performs data validation. The DQM system is integrated with Baikal-GVD's unified software framework (BARS) and operates in a quasi-online manner. This allows us to react promptly and effectively to changes in the telescope conditions.
This paper describes the design, implementation, and verification of a test-bed for determining the noise temperature of radio antennas operating between 400-800 MHz. The requirements for this test-bed were driven by the HIRAX experiment, which uses antennas with embedded amplification, making system noise characterization difficult in the laboratory. The test-bed consists of two large cylindrical cavities, each containing radio-frequency (RF) absorber held at a different temperature (300 K and 77 K), allowing a measurement of system noise temperature through the well-known Y-factor method. The apparatus has been constructed at Yale, and over the course of the past year has undergone detailed verification measurements. To date, three preliminary noise temperature measurement sets have been conducted using the system, putting us on track to make the first noise temperature measurements of the HIRAX feed and perform the first analysis of feed repeatability.
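For reference, the Y-factor arithmetic behind this measurement can be written down directly. The snippet below is a generic textbook calculation assuming hot and cold load temperatures of 300 K and 77 K; it is not HIRAX analysis code, and the example power ratio is made up.

```python
# Standard Y-factor method: with two loads at known temperatures, the receiver
# noise temperature follows from the ratio of measured powers.
def receiver_noise_temperature(p_hot, p_cold, t_hot=300.0, t_cold=77.0):
    """T_rx = (T_hot - Y * T_cold) / (Y - 1), with Y = P_hot / P_cold."""
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

# Example: a power ratio of ~1.78 (about 2.5 dB) gives T_rx of roughly 209 K.
print(receiver_noise_temperature(p_hot=1.78, p_cold=1.0))
```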
A robust post-processing technique is mandatory to analyse coronagraphic high-contrast imaging data. Angular Differential Imaging (ADI) and Principal Component Analysis (PCA) are the most widely used approaches to suppress the quasi-static structure in the Point Spread Function (PSF) in order to reveal planets at different separations from the host star. The focus of this work is to apply these two data reduction techniques to obtain the best detection limit for each coronagraphic setting that has been simulated for SHARK-NIR, a coronagraphic camera that will be implemented at the Large Binocular Telescope (LBT). We investigated different seeing conditions ($0.4-1$) for stellar magnitudes ranging from R=6 to R=14, with particular care in finding the best compromise between quasi-static speckle subtraction and planet detection.
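As a rough sketch of the PCA step, the toy example below projects each frame of an image cube onto its leading principal components and subtracts the resulting low-rank speckle model. The array shapes, component count, and random input data are assumptions for illustration; this is not the SHARK-NIR reduction pipeline.

```python
# Toy PCA-based speckle subtraction on an ADI frame cube
# (assumed shape: n_frames x n_pixels, frames already flattened).
import numpy as np

def pca_subtract(cube, n_components=5):
    """Subtract the projection of each frame onto the cube's leading
    principal components, leaving speckle-cleaned residual frames."""
    mean = cube.mean(axis=0)
    centered = cube - mean
    # Principal components of the frame set via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]              # (n_components, n_pixels)
    model = centered @ basis.T @ basis     # low-rank speckle model
    return centered - model                # residuals; derotate and stack for ADI

# Example with random data standing in for coronagraphic frames
cube = np.random.rand(50, 64 * 64)
residuals = pca_subtract(cube, n_components=10)
print(residuals.shape)  # (50, 4096)
```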
