The NorduGrid project designed a Grid architecture with the primary goal of meeting the requirements of the production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit as the foundation for the various components developed by the project. While introducing new services, NorduGrid does not modify the Globus tools, so that the two can co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus light-weight, non-invasive and dynamic, while remaining robust and scalable, capable of meeting the most challenging tasks of High Energy Physics.
The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, or more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise, starting in July 2002 and continuing through the rest of 2002 and the first part of 2003, using solely the NorduGrid environment. This allowed input data to be distributed over a wide area, with the NorduGrid resource discovery mechanism relied upon to find an optimal cluster for job submission. During the whole of Data-Challenge 1, more than 2 TB of input data was processed and more than 2.5 TB of output data was produced by more than 4750 Grid jobs.
The high performance requirements at the European Spallation Source have been driving technological advances on the neutron detector front. Now more than ever, it is important to optimize the design of detectors and instruments in order to fully exploit the ESS source brilliance. Most of the simulation tools the neutron scattering community has at its disposal target instrument optimization up to the sample position, with little focus on detectors. The ESS Detector Group has extended the capabilities of existing detector simulation tools to bridge this gap. An extensive software framework has been developed, enabling efficient and collaborative development of the required simulations and analyses. It is based on the Geant4 Monte Carlo toolkit, but with extended physics capabilities where relevant (such as Bragg diffraction of thermal neutrons in crystals). Furthermore, the MCPL (Monte Carlo Particle Lists) particle data exchange file format, currently supported by the primary Monte Carlo tools of the community (McStas, Geant4 and MCNP), facilitates the integration of detector simulations with existing instrument simulations using these software packages. Together, these tools make it possible to tailor the detector and instrument design to the instrument application.
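A minimal sketch of how such MCPL-based integration can look in practice is given below: it iterates over the particles stored in an MCPL file from C++, the way a detector simulation might consume output from an instrument simulation. It assumes the MCPL C API (mcpl_open_file, mcpl_read, mcpl_close_file) as documented with the format; the file name is hypothetical and the particle field names should be checked against the installed mcpl.h.

// Illustrative only: loop over the particles stored in an MCPL file via the MCPL C API.
#include "mcpl.h"
#include <cstdio>

int main()
{
  // "instrument_output.mcpl" is a hypothetical file, e.g. produced by a McStas simulation.
  mcpl_file_t f = mcpl_open_file("instrument_output.mcpl");
  const mcpl_particle_t* p;
  while ( ( p = mcpl_read(f) ) ) {
    // Each record carries PDG code, kinetic energy, position, direction, time and weight.
    std::printf("pdg=%d  ekin=%g MeV  weight=%g\n", p->pdgcode, p->ekin, p->weight);
  }
  mcpl_close_file(f);
  return 0;
}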
The development of a package for the management of physics data is described: its design, implementation and computational benchmarks. This package improves the data management tools originally developed for Geant4 physics models based on the EADL, EEDL and EPDL97 data libraries. The implementation exploits recent evolutions of the C++ libraries appearing in the C++0x draft, which are intended for inclusion in the next C++ ISO Standard. The new tools improve the computational performance of physics data management.
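As an illustration of the kind of C++0x library facility referred to here, the following sketch caches per-element physics data tables in a std::unordered_map (the hashed container introduced with C++0x/TR1) combined with std::shared_ptr; all class and member names are hypothetical and do not reflect the package's actual interface.

#include <memory>
#include <unordered_map>
#include <vector>

// Hypothetical container for one element's tabulated data (e.g. from EEDL or EPDL97).
struct DataTable {
  std::vector<double> energies;
  std::vector<double> values;
};

// Hypothetical manager: average O(1) lookup by atomic number with a hash map,
// versus O(log n) with the tree-based std::map available before C++0x.
class DataManager {
public:
  std::shared_ptr<const DataTable> find(int Z) const {
    auto it = tables_.find(Z);
    return it == tables_.end() ? nullptr : it->second;
  }
  void insert(int Z, std::shared_ptr<const DataTable> table) {
    tables_[Z] = std::move(table);
  }
private:
  std::unordered_map<int, std::shared_ptr<const DataTable>> tables_;
};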
Common and community software packages, such as ROOT, Geant4 and event generators, have been a key part of the LHC's success so far, and continued development and optimisation will be critical in the future. The challenges are driven by an ambitious physics programme, notably the LHC accelerator upgrade to high luminosity (HL-LHC) and the corresponding detector upgrades of ATLAS and CMS. In this document we address the issues for software that is used in multiple experiments (usually even more widely than ATLAS and CMS) and maintained by teams of developers who are either not linked to a particular experiment or who contribute to common software within the context of their experiment activity. We also give space to general considerations for future software and projects that tackle upcoming challenges, no matter who writes them, an area where community convergence on best practice is extremely useful.
We present a highly scalable 3D fully-coupled Earth and ocean model of earthquake rupture and tsunami generation. We model seismic, acoustic and surface gravity wave propagation in elastic (Earth) and acoustic (ocean) materials, sourced by physics-based non-linear earthquake dynamic rupture. Complicated geometries, including high-resolution bathymetry, coastlines and segmented earthquake faults, are discretized by adaptive unstructured tetrahedral meshes. A Discontinuous Galerkin discretization with ADER local time-stepping (ADER-DG) yields petascale computational efficiency and high-order accuracy in time and space. We compare the 3D fully-coupled approach to a benchmark problem for 3D-2D linked models that use 2D shallow-water modeling. We present a large-scale fully-coupled model of the 2018 Sulawesi events that links the dynamics from supershear earthquake faulting to elastic and acoustic waves in Earth and ocean, and on to tsunami gravity wave propagation in the narrow Palu Bay. Finally, we demonstrate the scalability and performance of the MPI+OpenMP parallelization on three petascale supercomputers.
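To make the parallelization strategy concrete, the following is a generic C++ skeleton of a hybrid MPI+OpenMP element-update loop, the pattern named above, in which each MPI rank owns a mesh partition and OpenMP threads update its elements. It is a simplified illustration under stated assumptions, not the solver's actual code; the degree-of-freedom array and the update are placeholders.

// Generic MPI+OpenMP skeleton; the element update is a placeholder, not the ADER-DG kernel.
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
  int provided = 0;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

  int rank = 0, nranks = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  // Each rank owns one partition of the unstructured mesh (size is illustrative).
  std::vector<double> dof(1000000, 0.0);

  // Threads within the node share the per-rank element loop.
  #pragma omp parallel for
  for (long i = 0; i < static_cast<long>(dof.size()); ++i)
    dof[i] += 1.0;  // placeholder for the per-element time-step update

  if (rank == 0)
    std::printf("ranks=%d, threads per rank=%d\n", nranks, omp_get_max_threads());

  MPI_Finalize();
  return 0;
}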