
The full detector simulation for the ATLAS experiment: status and outlook

Added by: Rimoldi Adele
Publication date: 2003
Field: Physics
Language: English





The simulation of the ATLAS detector is a major challenge, given the complexity of the detector and the demanding environment of the LHC. The apparatus, one of the biggest and most complex ever designed, requires a detailed, flexible and, where possible, fast simulation, which is already needed today to address questions of design optimization, issues raised by staging scenarios, and, of course, the detailed physics studies that lay the basis for the first physics discoveries. Scalability and robustness stand out as the most critical issues to be faced in implementing such a simulation. In this paper we present the status of the present simulation and the solutions adopted in terms of speed optimization, centralization of services, framework facilities and persistency. Emphasis is put on the global performance when the different detector components are brought together in a full, detailed simulation. The reference tool adopted is Geant4.

Related research

D.S. Parno (2013)
The KATRIN experiment, presently under construction in Karlsruhe, Germany, will improve on previous laboratory limits on the neutrino mass by a factor of ten. KATRIN will use a high-activity, gaseous T2 source and a very high-resolution spectrometer to measure the shape of the high-energy tail of the tritium-decay beta spectrum. The shape measurement will also be sensitive to new physics, including sterile neutrinos and Lorentz violation. This report summarizes recent progress in the experiment.
The EDELWEISS Dark Matter search uses low-temperature Ge detectors with heat and ionisation read-out to identify nuclear recoils induced by elastic collisions with WIMPs from the galactic halo. Results from the operation of 70 g and 320 g Ge detectors in the low-background environment of the Modane Underground Laboratory (LSM) are presented.
Rapidly applying the effects of detector response to physics objects (e.g. electrons, muons, showers of particles) is essential in high energy physics. Currently available tools for the transformation from truth-level physics objects to reconstructed detector-level physics objects involve manually defining resolution functions. These resolution functions are typically derived in bins of variables that are correlated with the resolution (e.g. pseudorapidity and transverse momentum). This process is time consuming, requires manual updates when detector conditions change, and can miss important correlations. Machine learning offers a way to automate the process of building these truth-to-reconstructed object transformations and can capture complex correlations for any given set of input variables. Such machine learning algorithms, with sufficient optimization, could have a wide range of applications: improving phenomenological studies by using a better detector representation, allowing for more efficient production of Geant4 simulations by only simulating events within an interesting part of phase space, and studies on future experimental sensitivity to new physics.
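The manually defined, binned resolution functions described above can be sketched in a few lines. This is a minimal illustration only: the eta bin edges and resolution values below are invented for the example, not real detector parameters.

```python
import numpy as np

# Illustrative sketch of a binned resolution function: a relative pt
# resolution derived in bins of |eta|, applied as Gaussian smearing of
# the truth-level transverse momentum. All numbers are assumed.
ETA_EDGES = np.array([0.0, 1.5, 2.5])    # |eta| bin edges (assumed)
PT_RESOLUTION = np.array([0.02, 0.05])   # relative sigma per bin (assumed)

def smear_pt(truth_pt, eta, rng=np.random.default_rng(0)):
    """Map truth-level pt to detector-level pt via binned Gaussian smearing."""
    # Find the |eta| bin of each object and look up its relative resolution.
    idx = np.clip(np.digitize(np.abs(eta), ETA_EDGES) - 1,
                  0, len(PT_RESOLUTION) - 1)
    sigma = PT_RESOLUTION[idx] * truth_pt
    return rng.normal(truth_pt, sigma)

# Same truth pt, different eta: the forward object is smeared more.
reco_pt = smear_pt(np.array([50.0, 50.0]), np.array([0.5, 2.0]))
```

An ML-based approach would replace the hand-chosen bins and Gaussian shape with a learned transformation conditioned on the same (or more) input variables.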
The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure.
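The popularity signal described above reduces, at its simplest, to counting dataset accesses in the meta-data. The following sketch uses invented log records, not the actual CMS meta-data schema or tooling:

```python
from collections import Counter

# Toy access-log records (invented), standing in for computing meta-data.
access_log = [
    {"dataset": "/DatasetA/RECO", "user": "u1"},
    {"dataset": "/DatasetA/RECO", "user": "u2"},
    {"dataset": "/DatasetB/AOD",  "user": "u1"},
]

# Rank datasets by number of accesses; a dynamic data placement policy
# could replicate the most popular datasets to more sites.
popularity = Counter(rec["dataset"] for rec in access_log)
most_popular = popularity.most_common(1)[0][0]
```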
N. J. Ayres, G. Ban, G. Bison (2019)
Psychological bias towards, or away from, a prior measurement or a theory prediction is an intrinsic threat to any data analysis. While various methods can be used to avoid the bias, e.g. actively not looking at the result, only data blinding is a traceable and thus trustworthy method to circumvent the bias and to convince a public audience that there is not even an accidental psychological bias. Data blinding is nowadays a standard practice in particle physics, but it is particularly difficult for experiments searching for the neutron electric dipole moment, as several cross measurements, in particular of the magnetic field, create a self-consistent network into which it is hard to inject a fake signal. We present an algorithm that modifies the data without influencing the experiment. Results of an automated analysis of the data are used to change the recorded spin state of a few neutrons of each measurement cycle. The flexible algorithm is applied twice to the data, to provide different data to various analysis teams. This gives us the option to sequentially apply various blinding offsets for separate analysis steps with independent teams. The subtle modification of the data allows us to modify the algorithm and to produce a re-blinded data set without revealing the blinding secret. The method was designed for the 2015/2016 measurement campaign of the nEDM experiment at the Paul Scherrer Institute. However, it can be re-used with minor modification for the follow-up experiment n2EDM, and may be suitable for comparable efforts.
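The core blinding step described above, flipping the recorded spin state of a few neutrons per cycle under a secret key, can be sketched schematically. This is not the nEDM collaboration's code; the flip fraction, seed handling and data layout are invented for illustration.

```python
import random

def blind_cycle(spin_states, flip_fraction, seed):
    """Return a copy of one measurement cycle with a secret subset of
    recorded spin states flipped (0 <-> 1). The seed acts as the blinding
    secret: without it, the flipped entries cannot be identified."""
    rng = random.Random(seed)
    blinded = list(spin_states)
    n_flip = int(flip_fraction * len(blinded))
    for i in rng.sample(range(len(blinded)), n_flip):
        blinded[i] = 1 - blinded[i]
    return blinded

# One toy cycle of recorded spin states (invented data).
cycle = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
blinded = blind_cycle(cycle, flip_fraction=0.2, seed=42)
```

Applying the function twice with different seeds mirrors the paper's scheme of handing differently blinded data to independent analysis teams.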
