The silicon photomultiplier (SiPM) is a novel solid-state photodetector that can be operated in single-photon counting mode. It has excellent features, such as high quantum efficiency, good charge resolution, fast response, very compact size, high gain of about 10^6, very low power consumption, immunity to magnetic fields, and low bias voltage (30-70 V). Current drawbacks of the device are a large dark current, crosstalk between micropixels, and relatively low sensitivity to UV and blue light. In the last few years, we have developed large-size SiPMs (9 mm^2 and 25 mm^2) for applications in the imaging atmospheric Cherenkov telescopes MAGIC and CTA, and in the space-borne fluorescence telescope EUSO. The current status of SiPM development by MPI and MEPhI will be presented.
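For orientation, a gain of order 10^6 follows from the charge released when a single microcell fires; with typical assumed values of microcell capacitance C ≈ 100 fF and overvoltage ΔV ≈ 2 V above breakdown (illustrative numbers, not figures from this work),

    G = C ΔV / e ≈ (10^-13 F × 2 V) / (1.6×10^-19 C) ≈ 10^6.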
The Alpha Magnetic Spectrometer (AMS-02) experiment will be installed in 2009 on the International Space Station (ISS) for an operational period of at least three years. The purpose of the AMS-02 experiment is to perform accurate, high-statistics, long-duration measurements in space of charged cosmic rays in the rigidity range from 1 GV to 3 TV and of high-energy photons up to a few hundred GeV. In this work we discuss the experimental details and the physics capabilities of AMS-02 on the ISS.
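For reference (the standard definition, not specific to this work), magnetic rigidity is the momentum per unit charge,

    R = p c / (Z e),

so a proton (Z = 1) at the upper end of the quoted range, R = 3 TV, carries a momentum of about 3 TeV/c.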
Astroparticle physics currently faces a rapid increase in data volume. At the same time, challenges remain in testing theoretical models of the origin of cosmic rays by applying multi-messenger approaches, machine learning, and investigations of phenomena limited by the rare statistics of detected incoming particles. The problems involve accurate data mapping and data management, as well as distributed storage and high-performance data processing. In particular, such solutions are of interest for studying air showers induced by ultra-high-energy cosmic and gamma rays, testing new hypotheses of hadronic interactions, and cross-calibrating different experiments. KASCADE (Karlsruhe, Germany) and TAIGA (Tunka valley, Russia) are experiments in the field of astroparticle physics, aiming at the detection of cosmic-ray air showers induced by primaries in the energy range from about hundreds of TeV to hundreds of PeV. They are located at the same latitude and have overlapping operation runs, which makes a joint analysis of their data attractive. In the German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI), modern distributed data management technologies are being employed to establish reliable open access to the experimental cosmic-ray physics data collected by KASCADE and by the Tunka-133 setup of TAIGA.
Precision measurements of charged cosmic rays have recently been carried out by space-borne (e.g., AMS-02) and ground-based (e.g., HESS) experiments. These data are important for the study of astrophysical phenomena, including supernova remnants, cosmic-ray propagation, solar physics, and dark matter. The corresponding scenarios usually contain a number of free parameters that must be tuned to the observed data. Techniques such as Markov chain Monte Carlo (MCMC) and MultiNest have been developed to solve this problem, but applying them usually requires a computing farm. In this paper, a genetic algorithm for finding the optimum parameters of cosmic-ray injection and propagation is presented. We find that this algorithm yields the same best-fit results as MCMC while consuming nearly two orders of magnitude less computing power.
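A minimal sketch of such a genetic-algorithm fit (the toy power-law model, parameter names, and GA settings below are illustrative assumptions; in the actual analysis the objective function would wrap a full injection-and-propagation code):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a propagation-model fit: recover the
    # parameters of a toy power-law flux from mock "measured" data.
    E = np.logspace(1, 3, 20)                    # energy grid, GeV
    true_params = np.array([1.0e4, 2.7])         # normalization, spectral index
    data = true_params[0] * E ** (-true_params[1])
    sigma = 0.05 * data                          # assumed 5% uncertainties

    def chi2(p):
        model = p[0] * E ** (-p[1])
        return np.sum(((model - data) / sigma) ** 2)

    bounds = np.array([[1.0e3, 1.0e5],           # normalization range
                       [2.0, 3.5]])              # spectral-index range

    pop_size, n_gen, mut_rate = 50, 100, 0.1
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 2))

    for gen in range(n_gen):
        fitness = np.array([chi2(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]   # keep better half

        # Crossover: children are random blends of two elite parents.
        n_child = pop_size - len(elite)
        i, j = rng.integers(len(elite), size=(2, n_child))
        w = rng.random((n_child, 1))
        children = w * elite[i] + (1 - w) * elite[j]

        # Mutation: occasional Gaussian kicks, clipped to the prior bounds.
        kick = rng.normal(0, 0.05 * (bounds[:, 1] - bounds[:, 0]),
                          children.shape)
        mask = rng.random(children.shape) < mut_rate
        children = np.clip(children + mask * kick, bounds[:, 0], bounds[:, 1])

        pop = np.vstack([elite, children])

    best = pop[np.argmin([chi2(ind) for ind in pop])]
    print("best-fit parameters:", best)

Elitism plus blend crossover keeps the search population inside the prior bounds while steadily reducing chi^2; the same loop generalizes to more parameters by widening the bounds array.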
Modern astrophysics is moving towards larger experiments and combined detection channels for the highest-energy processes in the Universe. To obtain reliable data, the experiments must operate for several decades, which means that the data will be obtained and analyzed by several generations of physicists. For the stability of the experiments it is therefore necessary to properly maintain not only the data life cycle but also the human aspects, for example attracting new researchers, training, and continuity. To this end, an educational and outreach resource has been deployed in the framework of the German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI).
The ability to construct, use, and revise models is a crucial experimental physics skill. Many existing frameworks describe modeling in science education at introductory levels. However, most have limited applicability to the context of upper-division physics lab courses or experimental physics. Here, we discuss the Modeling Framework for Experimental Physics, a theoretical framework tailored to labs and experimentation. A key feature of the Framework is the recursive interaction between models and apparatus: models are revised to account for new evidence produced by the apparatus, and apparatus are revised to better align with the simplifying assumptions of the models. Another key feature is the distinction between the physical phenomenon being investigated and the measurement equipment used to conduct the investigation. Models of physical systems facilitate explanation or prediction of phenomena, whereas models of measurement systems facilitate interpretation of data. We describe the Framework, provide a chronological history of its development, and summarize its applications to research and curricular design. Ultimately, we argue that the Modeling Framework is a theoretically sound and well-tested tool that is applicable to multiple physics domains and research purposes. In particular, it is useful for characterizing students' approaches to experimentation, designing or evaluating curricula for lab courses, and developing instruments to assess students' experimental modeling skills.