
Shall numerical astrophysics step into the era of Exascale computing?

Published by: Dr. David Goz
Publication date: 2019
Paper language: English





High performance computing numerical simulations are today one of the most effective instruments to implement and study new theoretical models, and they are mandatory during the preparatory and operational phases of any scientific experiment. New challenges in Cosmology and Astrophysics will require a large number of new, extremely computationally intensive simulations to investigate physical processes at different scales. Moreover, the size and complexity of the new generation of observational facilities imply a new generation of high performance data reduction and analysis tools, pushing toward the use of Exascale computing capabilities. Exascale supercomputers cannot be produced today. We discuss the major technological challenges in the design, development and use of such computing capabilities, and we report on the progress made in recent years in Europe, in particular within the framework of the EU-funded ExaNeSt project. We also discuss the impact of these new computing resources on numerical codes in Astronomy and Astrophysics.


Read also

The ExaNeSt and EuroExa H2020 EU-funded projects aim to design and develop an exascale-ready computing platform prototype based on low-energy-consumption ARM64 cores and FPGA accelerators. We participate in the application-driven design of the hardware solutions and in the prototype validation. To carry out this work we are using, among others, Hy-Nbody, a state-of-the-art direct N-body code. Core algorithms of Hy-Nbody have been progressively improved to fit the exascale target platform. While waiting for the ExaNeSt prototype release, we are performing tests and code tuning on an ARM64 SoC facility: a SLURM-managed HPC cluster based on the 64-bit ARMv8 Cortex-A72/Cortex-A53 core design and powered by a Mali-T864 embedded GPU. In parallel, we are porting a kernel of Hy-Nbody to FPGA, aiming to test and compare the performance-per-watt of our algorithms on different platforms. In this paper we describe how we re-engineered the application and we show first results on the ARM SoC.
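Hy-Nbody's sources are not shown here, but the kernel class being tuned is an all-pairs (direct summation) force evaluation, whose O(N^2) arithmetic intensity is what makes it a natural candidate for GPU and FPGA offload. As a minimal, hypothetical sketch of such a kernel (the function name, the softening value eps, and the code units G = 1 are illustrative choices, not taken from Hy-Nbody):

```python
import numpy as np

def direct_forces(pos, mass, eps=1e-3):
    """All-pairs gravitational acceleration, O(N^2).

    pos  : (N, 3) array of particle positions
    mass : (N,) array of particle masses
    eps  : Plummer softening length (illustrative value)
    """
    # Pairwise separation vectors r_ij = x_j - x_i
    dx = pos[None, :, :] - pos[:, None, :]      # shape (N, N, 3)
    r2 = np.sum(dx * dx, axis=-1) + eps * eps   # softened |r_ij|^2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)               # remove self-interaction
    # a_i = G * sum_j m_j * r_ij / |r_ij|^3   (G = 1 in code units)
    return np.einsum('ij,ijk->ik', mass[None, :] * inv_r3, dx)

# Tiny usage example with random particles
rng = np.random.default_rng(0)
acc = direct_forces(rng.standard_normal((256, 3)), np.ones(256))
```

On an accelerator the same pairwise loop is typically tiled through local/shared memory so that each block of particle data is reused N times, which is what makes the kernel's performance-per-watt worth comparing across GPU and FPGA back ends.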
Astrophysical explosions such as supernovae are fascinating events that require sophisticated algorithms and substantial computational power to model. Castro and MAESTROeX are nuclear astrophysics codes that simulate thermonuclear fusion in the context of supernovae and X-ray bursts. Examining these nuclear burning processes with high resolution simulations is critical for understanding how these astrophysical explosions occur. In this paper we describe the changes made to these codes to transform them from standard MPI + OpenMP codes targeted at petascale CPU-based systems into a form compatible with the pre-exascale systems now online and the exascale systems coming soon. We then discuss what new science can be run on systems such as Summit and Perlmutter that could not have been achieved on the previous generation of supercomputers.
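Castro and MAESTROeX are compiled C++/Fortran codes built on the AMReX framework, so their actual GPU port happens at the level of compiled kernels; the sketch below only illustrates the single-source pattern the abstract describes — write the compute kernel once against an array module, then bind that module to a CPU or GPU backend. All names and constants here are invented for illustration:

```python
# Illustrative only: this is NOT Castro/MAESTROeX code. It shows the
# single-source CPU/GPU idea: one kernel, one array namespace 'xp',
# bound to NumPy (CPU) or CuPy (GPU) depending on what is available.
import numpy as np
try:
    import cupy as cp          # GPU path, if CUDA + CuPy are present
    xp = cp
except ImportError:
    xp = np                    # fall back to the CPU path

def burn_step(temp, rho, dt, q=1.0e2, t_act=5.0):
    """Toy per-zone 'nuclear burning' update, written once for CPU or GPU.

    q and t_act are made-up constants for this sketch, not a real network.
    """
    rate = rho * xp.exp(-t_act / temp)   # Arrhenius-like reaction rate
    return temp + q * rate * dt          # energy release heats the zone

temp = xp.full(1_000_000, 1.5)           # one large array of zones
temp = burn_step(temp, xp.full(1_000_000, 2.0), 1e-3)
```

The design point carried over from the real codes is that the physics kernel is expressed once, data-parallel over zones, so retargeting from CPU threads to GPU threads does not require rewriting the science.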
The architecture of Exascale computing facilities, which involves millions of heterogeneous processing units, will deeply impact scientific applications. Future astrophysical HPC applications must be designed to make such computing systems exploitable. The ExaNeSt H2020 EU-funded project aims to design and develop an exascale-ready prototype based on low-energy-consumption ARM64 cores and FPGA accelerators. We participate in the design of the platform and in the validation of the prototype with cosmological N-body and hydrodynamical codes suited to perform large-scale, high-resolution numerical simulations of cosmic structure formation and evolution. We discuss our activities on astrophysical applications to take advantage of the underlying architecture.
Experience suggests that structural issues in how institutional Astrophysics approaches data-driven science and the development of discovery technology may be hampering the community's ability to respond effectively to a rapidly changing environment in which increasingly complex, heterogeneous datasets are challenging our existing information infrastructure and traditional approaches to analysis. We stand at the confluence of a new epoch of multimessenger science, remote co-location of data and processing power, and new observing strategies based on miniaturized spacecraft. Significant effort will be required by the community to adapt to this rapidly evolving range of possible modes of discovery. In the suggested creation of a new Astrophysics element, Advanced Astrophysics Discovery Technology, we offer an affirmative solution that places the visibility of discovery technologies at a level fully commensurate with their importance to the future of the field.
The availability of the new Cloud Platform offered by Google motivated us to propose nine Proofs of Concept (PoC) aiming to demonstrate and test the capabilities of the platform in the context of scientifically-driven tasks and requirements. We review the status of our initiative by illustrating 3 of the 9 successfully closed PoCs that we implemented on Google Cloud Platform. In particular, we illustrate a cloud architecture for the deployment of scientific software as microservices, coupling Google Compute Engine with Docker and Pub/Sub to dispatch heavily parallel simulations. We also detail an experiment for HPC-based simulation and workflow execution of data reduction pipelines (for the TNG-GIANO-B spectrograph) deployed on GCP. We compare and contrast our experience with on-site facilities, weighing advantages and disadvantages both in terms of total cost of ownership and achieved performance.
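As a sketch of the dispatch pattern just described — publish one message per simulation, let Docker-based Compute Engine workers subscribed to the topic consume them — assuming the standard google-cloud-pubsub Python client; the project, topic, and parameter names are hypothetical:

```python
# Minimal fan-out of simulation jobs over Google Cloud Pub/Sub.
import json
from google.cloud import pubsub_v1

PROJECT = "my-sim-project"     # hypothetical GCP project id
TOPIC = "simulation-jobs"      # hypothetical Pub/Sub topic

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)

def dispatch(params: dict) -> None:
    """Publish one simulation's parameters as a message; a pool of
    subscribed worker containers picks it up and runs the job."""
    data = json.dumps(params).encode("utf-8")
    publisher.publish(topic_path, data=data).result()  # block until sent

# Fan out an (illustrative) parameter sweep as independent messages
for seed in range(100):
    dispatch({"seed": seed, "resolution": 512})
```

Each message is an independent task, so the achievable parallelism is bounded only by the size of the subscribed worker pool — the property that makes the queue-plus-containers design attractive for embarrassingly parallel simulation sweeps.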