
Development of a data infrastructure for a global data and analysis center in astroparticle physics

Published by Victoria Tokareva A.
Publication date: 2019
Language: English





Astroparticle physics currently faces a rapid increase in data volume. At the same time, clarifying the origin of cosmic rays still requires testing theoretical models through a multi-messenger approach, machine learning, and the investigation of phenomena limited by the rare statistics of detected incoming particles. The resulting challenges concern accurate data mapping and data management, as well as distributed storage and high-performance data processing. Such solutions are of particular interest for studying air showers induced by ultra-high-energy cosmic and gamma rays, testing new hypotheses of hadronic interaction, and cross-calibrating different experiments. KASCADE (Karlsruhe, Germany) and TAIGA (Tunka valley, Russia) are astroparticle physics experiments that detect cosmic-ray air showers induced by primaries in the energy range from hundreds of TeV to hundreds of PeV. They are located at the same latitude and their operation runs overlap, which motivates a joint analysis of their data. Within the German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI), modern technologies of distributed data management are being employed to establish reliable open access to the experimental cosmic-ray physics data collected by KASCADE and by the Tunka-133 setup of TAIGA.
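A joint analysis first requires identifying when both experiments were taking data simultaneously. The sketch below illustrates that idea with a simple interval-intersection routine; the run intervals are made up for illustration and do not represent real KASCADE or TAIGA runs, nor the actual GRADLCI tooling.

```python
# Sketch: find overlapping operation runs of two experiments so that
# air-shower data taken simultaneously can be analysed jointly.
# The run lists below are illustrative, not real KASCADE/TAIGA runs.

def overlapping_runs(runs_a, runs_b):
    """Return (start, end) intervals covered by both experiments.

    Each run is a (start, end) pair in some common time unit (e.g. MJD).
    """
    overlaps = []
    for a_start, a_end in runs_a:
        for b_start, b_end in runs_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                overlaps.append((start, end))
    return sorted(overlaps)

kascade_runs = [(100.0, 150.0), (160.0, 200.0)]
tunka_runs = [(140.0, 170.0), (190.0, 250.0)]
print(overlapping_runs(kascade_runs, tunka_runs))
# [(140.0, 150.0), (160.0, 170.0), (190.0, 200.0)]
```

In practice the run metadata would come from the distributed catalogues of each experiment, but the intersection step itself stays this simple.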




Read also

Current and future astroparticle physics experiments are operated or are being built to observe highly energetic particles, high-energy electromagnetic radiation and gravitational waves originating from all kinds of cosmic sources. The data volumes taken by the experiments are large and are expected to grow significantly during the coming years. This is a result of advanced research possibilities and improved detector technology. To cope with the substantially increasing data volumes of astroparticle physics projects, it is important to understand the future needs for computing resources in this field. Providing these resources will constitute a larger fraction of the overall running costs of future infrastructures. This document presents the results of a survey made by APPEC with the help of computing experts of major projects and future initiatives in astroparticle physics, representatives of current Tier-1 and Tier-2 LHC computing centers, as well as dedicated astroparticle physics computing centers, e.g. the Albert Einstein Institute for gravitational-wave analysis in Hannover. In summary, the overall CPU usage and the short-term disk and long-term (tape) storage space currently available for astroparticle physics computing services amount to about one third of the central computing available for LHC data at the Tier-0 center at CERN. By the end of the decade, the requirements for computing resources are estimated to increase by a factor of 10. Furthermore, this document describes the diversity of astroparticle physics data handling and serves as a basis to estimate a distribution of computing and storage tasks among the major computing centers. (Abridged)
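The projected factor-of-ten growth can be recast as a compound annual rate; the back-of-the-envelope sketch below does so for an assumed 7-year horizon (the horizon is an illustrative assumption, not a figure from the survey).

```python
# Back-of-the-envelope: what annual growth rate turns current resources
# into 10x over n years? (n = 7 is an illustrative assumption.)
n_years = 7
growth_factor = 10 ** (1 / n_years)   # compound annual growth
print(round(growth_factor, 3))        # ~1.39, i.e. ~39% per year
```

A sustained ~39% per year is well above typical hardware price/performance improvement, which is why the survey emphasizes planning the distribution of computing and storage tasks in advance.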
331 - A. Haungs, D. Kang, S. Schoo 2018
The KASCADE Cosmic Ray Data Centre (KCDC) is a web portal (https://kcdc.ikp.kit.edu), where the data of the astroparticle physics experiment KASCADE-Grande are made available for the interested public. The KASCADE experiment was a large-area detector for the measurement of high-energy cosmic rays via the detection of extensive air showers. The multi-detector installations KASCADE and its extension KASCADE-Grande stopped the active data acquisition of all their components at the end of 2012, after more than 20 years of data taking. In several updates since the first release in 2013, KCDC has made publicly available the measured and reconstructed parameters of more than 433 million air showers. In addition, KCDC provides metadata and documentation that enable users outside the community of experts to perform their own data analysis. Simulation data from three different high-energy interaction models have been made available, as well as a compilation of measured and published spectra from various experiments. In addition, detailed educational examples shall encourage high-school students and early-stage researchers to learn about astroparticle physics and cosmic radiation, as well as about the handling of Big Data and the sustainable, public provision of scientific data.
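A typical user-side analysis on such a portal export is a simple selection cut over the reconstructed parameters. The sketch below filters near-vertical showers from an inline sample; the column names (`E`, `Ze`, `Az`) and values are hypothetical stand-ins, not the actual KCDC export schema.

```python
import csv
import io

# Sketch of a user-side selection cut on KCDC-style shower data.
# Column names and values are illustrative; the real KCDC export
# format and parameter names may differ.
sample = """E,Ze,Az
8.2,15.3,120.0
7.1,32.8,200.5
8.9,10.1,300.2
"""

# Select near-vertical showers (zenith angle below 20 degrees).
rows = csv.DictReader(io.StringIO(sample))
vertical = [r for r in rows if float(r["Ze"]) < 20.0]
print(len(vertical))  # 2
```

The same pattern scales to the full downloaded data set by streaming the file instead of an in-memory string.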
670 - Dongwei Fan 2014
The data access and interoperability module connects observation proposals, data, virtual machines and software. Using the unique identifier of the PI (principal investigator), an email address or an internal ID, data can be collected via the PI's proposals or through search interfaces, e.g. cone search. Files associated with the search results can easily be transferred to cloud storage, including the storage attached to virtual machines or commercial platforms such as Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Future work will integrate more data and connect archives with other astronomical resources.
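The cone search mentioned above reduces to selecting catalogue objects within a given angular radius of a sky position. A minimal sketch, with a made-up catalogue (the real service follows the IVOA Simple Cone Search protocol over HTTP):

```python
import math

# Minimal cone-search sketch: select catalogue objects within a given
# angular radius of (ra0, dec0). Coordinates in degrees; the catalogue
# entries below are made up for illustration.

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation via the haversine formula (degrees)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def cone_search(catalogue, ra0, dec0, radius_deg):
    return [obj for obj in catalogue
            if angular_sep_deg(obj["ra"], obj["dec"], ra0, dec0) <= radius_deg]

catalogue = [{"id": "a", "ra": 10.0, "dec": 20.0},
             {"id": "b", "ra": 10.2, "dec": 20.1},
             {"id": "c", "ra": 50.0, "dec": -5.0}]
print([o["id"] for o in cone_search(catalogue, 10.0, 20.0, 0.5)])  # ['a', 'b']
```

The haversine form is preferred over the plain spherical law of cosines because it stays numerically stable for the small separations typical of cone searches.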
The SiPM is a novel solid-state photodetector which can be operated in single-photon counting mode. It has excellent features, such as high quantum efficiency, good charge resolution, fast response, very compact size, high gain of 10^6, very low power consumption, immunity to magnetic fields and low bias voltage (30-70 V). Current drawbacks of this device are a large dark current, crosstalk between micropixels and relatively low sensitivity to UV and blue light. In the last few years, we have developed large-size SiPMs (9 mm^2 and 25 mm^2) for applications in the imaging atmospheric Cherenkov telescopes MAGIC and CTA, and in the space-borne fluorescence telescope EUSO. The current status of the SiPM development by MPI and MEPhI will be presented.
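The quoted gain of 10^6 fixes the charge scale of the device: each fired micropixel delivers roughly the gain times the elementary charge. A quick numerical illustration (the 25 pC pulse is an arbitrary example value):

```python
# With a gain of ~10^6, a single fired micropixel delivers a charge of
# roughly G * e; the numbers here just illustrate the scale.
e = 1.602e-19      # elementary charge, C
gain = 1e6
q_single = gain * e            # charge per single photoelectron
print(q_single)                # 1.602e-13 C, i.e. ~0.16 pC
n_pe = 25e-12 / q_single       # photoelectrons in an example 25 pC pulse
print(round(n_pe))             # ~156
```

This sub-picocoulomb single-photoelectron charge is what makes the good charge resolution and single-photon counting quoted above practical with ordinary readout electronics.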
We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7--10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10--100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array pathfinder radio telescopes.
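Global statistics such as the mean and standard deviation parallelize because each worker only needs to return partial sums that combine exactly. The sketch below shows that map-reduce structure on a tiny stand-in array; it is a single-node analogue of the idea, not the actual GPU framework.

```python
import math

# Single-node analogue of a distributed global-statistics pass:
# accumulate partial sums per chunk (as each GPU would), then combine.
# The data here is a tiny stand-in for a terabyte-scale image.

def chunk_stats(chunk):
    """Partial sums for one chunk: (count, sum, sum of squares)."""
    return len(chunk), sum(chunk), sum(x * x for x in chunk)

def combine(parts):
    """Merge partial sums into the global mean and population std dev."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    s2 = sum(p[2] for p in parts)
    mean = s / n
    std = math.sqrt(s2 / n - mean * mean)
    return mean, std

image = list(range(10))            # stand-in voxel intensities
chunks = [image[:5], image[5:]]    # two "GPUs"
mean, std = combine([chunk_stats(c) for c in chunks])
print(mean, round(std, 3))         # 4.5 2.872
```

The histogram in the abstract follows the same pattern (per-chunk bin counts summed elementwise); only the median needs a more involved iterative reduction, which is why it is the slowest of the four operations quoted.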
