
Atlas Data-Challenge 1 on NorduGrid

Published by Jakob Nielsen
Publication date: 2003
Research field: Physics
Paper language: English





The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise starting in July 2002 and continuing through the rest of 2002 and the first part of 2003 using solely the NorduGrid environment. This made it possible to distribute input data over a wide area and to rely on the NorduGrid resource discovery mechanism to find an optimal cluster for job submission. During the whole of Data-Challenge 1, more than 2 TB of input data was processed and more than 2.5 TB of output data was produced by more than 4750 Grid jobs.
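The brokering idea can be illustrated with a short sketch. The Python snippet below is purely hypothetical (the cluster names, attributes and selection rule are invented and are not the actual NorduGrid broker): given the resource information published by the clusters, a job is steered towards a cluster that already holds its input dataset, falling back to the least-loaded free cluster otherwise.

```python
# Hypothetical sketch of data-locality-aware brokering; not NorduGrid code.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    free_cpus: int
    local_datasets: set = field(default_factory=set)

def select_cluster(clusters, input_dataset):
    """Prefer clusters already holding the input data; break ties by free CPUs."""
    local = [c for c in clusters if input_dataset in c.local_datasets and c.free_cpus > 0]
    candidates = local or [c for c in clusters if c.free_cpus > 0]
    if not candidates:
        raise RuntimeError("no suitable cluster discovered")
    return max(candidates, key=lambda c: c.free_cpus)

clusters = [
    Cluster("lund", free_cpus=12, local_datasets={"dc1.lumi02.partition.0042"}),
    Cluster("oslo", free_cpus=40),
    Cluster("copenhagen", free_cpus=3, local_datasets={"dc1.lumi02.partition.0042"}),
]
print(select_cluster(clusters, "dc1.lumi02.partition.0042").name)  # -> "lund"
```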




Read also

For efficiency of the large production tasks distributed worldwide, it is essential to provide shared production management tools comprised of integratable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.) the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation invocation. To provide for local-remote transparency during DC1 production, the VDC database server delivered in a controlled way both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions.
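The cookbook idea can be sketched as a lookup from a transformation step to validated parameters plus a command template. The snippet below is a hypothetical Python illustration only: the step name, parameters, command and template are invented, not the actual VDC schema.

```python
# Hypothetical illustration of a cookbook-style catalogue; not the real VDC.
VDC_RECIPES = {
    "detector-simulation": {
        "parameters": {"geometry_tag": "dc1_layout", "events_per_job": 100},
        "template": ("run_simulation --geometry {geometry_tag} "
                     "--nevents {events_per_job} --input {input} --output {output}"),
    },
}

def build_job_command(step, **job_specific):
    """Merge validated defaults with per-job values and render the command."""
    recipe = VDC_RECIPES[step]
    values = {**recipe["parameters"], **job_specific}
    return recipe["template"].format(**values)

print(build_job_command("detector-simulation",
                        input="evgen.0001.dat", output="simul.0001.dat"))
```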
P. Eerola, T. Ekelof, M. Ellert (2003)
The NorduGrid project designed a Grid architecture with the primary goal to meet the requirements of production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit as the foundation for various components developed by the project. While introducing new services, the NorduGrid does not modify the Globus tools, such that the two can eventually co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus a light-weight, non-invasive and dynamic one, while robust and scalable, capable of meeting the most challenging tasks of High Energy Physics.
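The decentralised discovery principle can be shown with a small, purely illustrative Python sketch (the endpoints and per-site query function are invented, not the NorduGrid information system): each site publishes its own information endpoint, a client queries them all and skips any that are down, so the failure of one index only hides that site.

```python
# Illustrative sketch of decentralised resource discovery; endpoints and the
# per-site query function are invented, not the NorduGrid information system.
def discover_resources(fetch_info, endpoints):
    """Collect cluster descriptions from whichever site endpoints answer."""
    resources = []
    for url in endpoints:
        try:
            resources.extend(fetch_info(url))   # one independent query per site
        except OSError:
            continue                            # a dead index hides only its own site
    return resources

# Toy stand-in for per-site queries so the sketch runs on its own.
SITE_INFO = {"ldap://grid.lund.example": [{"cluster": "lund", "cpus": 64}],
             "ldap://grid.oslo.example": [{"cluster": "oslo", "cpus": 128}]}

def toy_fetch(url):
    if url not in SITE_INFO:
        raise OSError("information index not reachable")
    return SITE_INFO[url]

print(discover_resources(toy_fetch, list(SITE_INFO) + ["ldap://grid.down.example"]))
```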
A. Bonaldi, T. An, M. Bruggen (2020)
As the largest radio telescope in the world, the Square Kilometre Array (SKA) will lead the next generation of radio astronomy. The feats of engineering required to construct the telescope array will be matched only by the techniques developed to exploit the rich scientific value of the data. To drive forward the development of efficient and accurate analysis methods, we are designing a series of data challenges that will provide the scientific community with high-quality datasets for testing and evaluating new techniques. In this paper we present a description and results from the first such Science Data Challenge (SDC1). Based on SKA MID continuum simulated observations and covering three frequencies (560 MHz, 1400 MHz and 9200 MHz) at three depths (8 h, 100 h and 1000 h), SDC1 asked participants to apply source detection, characterization and classification methods to simulated data. The challenge opened in November 2018, with nine teams submitting results by the deadline of April 2019. In this work we analyse the results for 8 of those teams, showcasing the variety of approaches that can be successfully used to find, characterise and classify sources in a deep, crowded field. The results also demonstrate the importance of building domain knowledge and expertise on this kind of analysis to obtain the best performance. As high-resolution observations begin revealing the true complexity of the sky, one of the outstanding challenges emerging from this analysis is the ability to deal with highly resolved and complex sources as effectively as the unresolved source population.
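As a purely illustrative example of the first stage of such a pipeline, the sketch below thresholds a simulated image at five times a robust noise estimate and labels connected pixel islands as candidate sources, using only NumPy and SciPy; the injected source and all numbers are made up, and real SDC1 entries were considerably more sophisticated.

```python
# Illustrative 5-sigma thresholding and island labelling on a simulated image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(256, 256))           # pure-noise "sky"
image[100:103, 50:53] += 20.0                            # one injected point source

background = np.median(image)
sigma = 1.4826 * np.median(np.abs(image - background))   # robust noise estimate (MAD)
mask = image > background + 5.0 * sigma                  # 5-sigma detection threshold

labels, n_sources = ndimage.label(mask)
positions = ndimage.center_of_mass(image - background, labels, range(1, n_sources + 1))
print(f"detected {n_sources} source(s) at pixel positions {positions}")
```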
Constraints on dark matter from the first CMS and ATLAS SUSY searches are investigated. It is shown that within the minimal supergravity model, the early search for supersymmetry at the LHC has depleted a large portion of the signature space in dark matter direct detection experiments. In particular, the prospects for detecting signals of dark matter in the XENON and CDMS experiments are significantly affected in the low neutralino mass region. Here the relic density of dark matter typically arises from slepton coannihilations in the early universe. In contrast, it is found that the CMS and ATLAS analyses leave untouched the Higgs pole and the Hyperbolic Branch/Focus Point regions, which are now being probed by the most recent XENON results. Analysis is also done for supergravity models with non-universal soft breaking where one finds that a part of the dark matter signature space depleted by the CMS and ATLAS cuts in the minimal SUGRA case is repopulated. Thus, observation of dark matter in the LHC depleted region of minimal supergravity may indicate non-universalities in soft breaking.
Concise, accurate descriptions of physical systems through their conserved quantities abound in the natural sciences. In data science, however, current research often focuses on regression problems, without routinely incorporating additional assumptions about the system that generated the data. Here, we propose to explore a particular type of underlying structure in the data: Hamiltonian systems, where an energy is conserved. Given a collection of observations of such a Hamiltonian system over time, we extract phase space coordinates and a Hamiltonian function of them that acts as the generator of the system dynamics. The approach employs an autoencoder neural network component to estimate the transformation from observations to the phase space of a Hamiltonian system. An additional neural network component is used to approximate the Hamiltonian function on this constructed space, and the two components are trained jointly. As an alternative approach, we also demonstrate the use of Gaussian processes for the estimation of such a Hamiltonian. After two illustrative examples, we extract an underlying phase space as well as the generating Hamiltonian from a collection of movies of a pendulum. The approach is fully data-driven, and does not assume a particular form of the Hamiltonian function.
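A compact sketch of this kind of architecture, written here in PyTorch with illustrative layer sizes, losses and training data (not the authors' implementation), couples an autoencoder that maps observations to a two-dimensional phase space (q, p) with a network approximating H(q, p), and uses Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq for the dynamics term of the loss.

```python
# Illustrative PyTorch sketch; layer sizes, losses and data are not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HamiltonianAutoencoder(nn.Module):
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2))
        self.decoder = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, obs_dim))
        self.hamiltonian = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def time_derivative(self, z):
        """Symplectic gradient of the learned H: returns (dq/dt, dp/dt)."""
        if not z.requires_grad:                       # allow raw (q, p) inputs too
            z = z.detach().requires_grad_(True)
        H = self.hamiltonian(z).sum()
        dH = torch.autograd.grad(H, z, create_graph=True)[0]
        return torch.cat([dH[:, 1:2], -dH[:, 0:1]], dim=1)   # dH/dp, -dH/dq

def training_step(model, x_t, x_next, dt=0.1):
    """Reconstruction loss plus a Hamiltonian-dynamics consistency loss."""
    z_t, z_next = model.encoder(x_t), model.encoder(x_next)
    recon_loss = F.mse_loss(model.decoder(z_t), x_t)
    z_dot_obs = (z_next - z_t) / dt                   # finite-difference latent velocity
    dyn_loss = F.mse_loss(model.time_derivative(z_t), z_dot_obs)
    return recon_loss + dyn_loss

model = HamiltonianAutoencoder(obs_dim=784)           # e.g. flattened movie frames
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.rand(32, 784), torch.rand(32, 784)    # stand-ins for consecutive frames
loss = training_step(model, x_t, x_next)
optimiser.zero_grad(); loss.backward(); optimiser.step()
```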