
Techniques and tools for measuring energy efficiency of scientific software applications

Published by: Peter Elmer
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternative architectures, such as low-power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear whether they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compared their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption of scientific workloads.
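As a concrete illustration of the software-based side of such measurements, the sketch below reads the cumulative on-chip energy counters that Intel processors expose through the Linux powercap/RAPL interface and derives the average power drawn while a workload runs. The sysfs path, the package-0 domain, and the wrapper script itself are assumptions made for illustration; they are not the measurement setup described in the paper, and ARM platforms typically need board-specific counters or an external power meter instead.

#!/usr/bin/env python3
# Minimal sketch: estimate the energy of a command via Linux powercap/RAPL counters.
# The path below (package-0 domain) is an assumption and requires read permission.
import subprocess
import sys
import time

RAPL_DIR = "/sys/class/powercap/intel-rapl:0"  # assumed package-0 RAPL domain

def read_energy_uj():
    # Cumulative energy counter in microjoules.
    with open(f"{RAPL_DIR}/energy_uj") as f:
        return int(f.read())

def max_energy_uj():
    # Counter range, needed to correct for wrap-around.
    with open(f"{RAPL_DIR}/max_energy_range_uj") as f:
        return int(f.read())

def measure(cmd):
    e0, t0 = read_energy_uj(), time.time()
    subprocess.run(cmd, check=True)        # run the workload to completion
    e1, t1 = read_energy_uj(), time.time()
    delta_uj = e1 - e0
    if delta_uj < 0:                       # the counter wrapped during the run
        delta_uj += max_energy_uj()
    joules, seconds = delta_uj / 1e6, t1 - t0
    print(f"energy: {joules:.2f} J, time: {seconds:.2f} s, "
          f"average power: {joules / seconds:.2f} W")

if __name__ == "__main__":
    measure(sys.argv[1:])                  # e.g. python3 rapl_measure.py ./hep_workload

Such counters only cover the CPU package; physical measurements at the wall or on the supply line remain necessary to capture whole-system power, including memory and I/O.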


Read also

The quest to understand the fundamental building blocks of nature and their interactions is one of the oldest and most ambitious of human scientific endeavors. Facilities such as CERN's Large Hadron Collider (LHC) represent a huge step forward in this quest. The discovery of the Higgs boson, the observation of exceedingly rare decays of B mesons, and stringent constraints on many viable theories of physics beyond the Standard Model (SM) demonstrate the great scientific value of the LHC physics program. The next phase of this global scientific project will be the High-Luminosity LHC (HL-LHC), which will collect data starting circa 2026 and continue into the 2030s. The primary science goal is to search for physics beyond the SM and, should it be discovered, to study its details and implications. During the HL-LHC era, the ATLAS and CMS experiments will record circa 10 times as much data from 100 times as many collisions as in LHC Run 1. The NSF and the DOE are planning large investments in detector upgrades so the HL-LHC can operate in this high-rate environment. A commensurate investment in R&D for the software for acquiring, managing, processing and analyzing HL-LHC data will be critical to maximize the return-on-investment in the upgraded accelerator and detectors. The strategic plan presented in this report is the result of a conceptualization process carried out to explore how a potential Scientific Software Innovation Institute (S2I2) for High Energy Physics (HEP) can play a key role in meeting HL-LHC challenges.
A. Latina (2021)
A large multitude of scientific computing tools is available today. This article gives an overview of the available tools and explains their main application fields. In addition, the basic principles of number representation in computing and the resulting truncation errors are treated. The selection of tools is aimed at students working in the field of accelerator beam dynamics.
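As a minimal illustration of such truncation errors (not drawn from the article itself), the following Python lines show both simple representation error and catastrophic cancellation:

# Floating-point representation error and catastrophic cancellation, illustrative only.
import math

# 0.1 and 0.2 have no exact binary representation, so their sum is only approximate.
x = 0.1 + 0.2
print(x == 0.3)        # False
print(f"{x:.20f}")     # 0.30000000000000004441

# Catastrophic cancellation: subtracting nearly equal numbers loses all significant digits.
a = 1.0e16
print(math.sqrt(a + 1) - math.sqrt(a))           # naive form: prints 0.0
print(1.0 / (math.sqrt(a + 1) + math.sqrt(a)))   # algebraically equivalent, stable: 5e-09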
The SND detector operates at the VEPP-2000 collider (BINP, Novosibirsk). To improve event selection for physics analysis and to facilitate online detector control, we developed a new data quality monitoring (DQM) system. The system includes online and reprocessing control modules, automatic decision-making scripts, and interactive (web-based) and programmatic (Python) access to various quality estimates. This access is implemented with a node.js server, with the data stored in a MySQL RDBMS. We describe here the general system logic, its components, and some implementation details.
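As a rough sketch of what programmatic (Python) access to such quality estimates could look like, the snippet below queries a MySQL database with the PyMySQL driver. The host, credentials, and the quality_estimates table with its columns are hypothetical placeholders, since the abstract does not describe the actual SND DQM schema.

# Hypothetical example of reading per-run quality estimates from MySQL in Python.
# Table and column names are illustrative assumptions, not the real SND DQM schema.
import pymysql

conn = pymysql.connect(host="dqm-db.example.org", user="reader",
                       password="secret", database="snd_dqm")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT subsystem, quality FROM quality_estimates WHERE run = %s",
            (12345,),
        )
        for subsystem, quality in cur.fetchall():
            print(subsystem, quality)
finally:
    conn.close()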
Much of the current focus in high-performance computing is on multi-threading, multi-computing, and graphics processing unit (GPU) computing. However, vectorization and non-parallel optimization techniques, which can often be employed additionally, are less frequently discussed. In this paper, we present an analysis of several optimizations done on both central processing unit (CPU) and GPU implementations of a particular computationally intensive Metropolis Monte Carlo algorithm. Explicit vectorization on the CPU and its GPU equivalent, explicit memory coalescing, are found to be critical to achieving good performance of this algorithm in both environments. The fully-optimized CPU version achieves a 9x to 12x speedup over the original CPU version, in addition to the speedup from multi-threading. This is 2x faster than the fully-optimized GPU version.
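As a toy, high-level analogue of the vectorization idea (not the paper's CPU/GPU implementation), the sketch below applies one Metropolis step to many independent walkers at once with NumPy array operations, next to the equivalent per-element Python loop; the harmonic potential, step size, and walker count are arbitrary illustrative choices.

# Toy sketch: one Metropolis update for many independent walkers, vectorized with NumPy.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * x**2                      # simple harmonic potential (illustrative)

def metropolis_step_vectorized(x, step=0.5):
    proposal = x + rng.uniform(-step, step, size=x.shape)
    # Accept with probability min(1, exp(-(E_new - E_old))) for every walker at once.
    accept = rng.random(size=x.shape) < np.exp(energy(x) - energy(proposal))
    return np.where(accept, proposal, x)

def metropolis_step_scalar(x, step=0.5):
    out = x.copy()
    for i in range(len(out)):              # equivalent, but one walker at a time
        p = out[i] + rng.uniform(-step, step)
        if rng.random() < np.exp(energy(out[i]) - energy(p)):
            out[i] = p
    return out

walkers = rng.normal(size=100_000)
walkers = metropolis_step_vectorized(walkers)   # one array-wide sweep

Explicit SIMD vectorization on the CPU and memory coalescing on the GPU follow the same underlying principle: arrange the data so that one instruction or one memory transaction serves many elements at once.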
Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap.