
Minimizing privilege for building HPC containers

Published by: Reid Priedhorsky
Publication date: 2021
Research field: Informatics Engineering
Language: English
Author: Reid Priedhorsky

HPC centers face increasing demand for software flexibility, and there is growing consensus that Linux containers are a promising solution. However, existing container build solutions require root privileges and cannot be used directly on HPC resources. This limitation is compounded as supercomputer diversity expands and HPC architectures become more dissimilar from commodity computing resources. Our analysis suggests this problem can best be solved with low-privilege containers. We detail relevant Linux kernel features, propose a new taxonomy of container privilege, and compare two open-source implementations: mostly-unprivileged rootless Podman and fully-unprivileged Charliecloud. We demonstrate that low-privilege container build on HPC resources works now and will continue to improve, giving normal users a better workflow to securely and correctly build containers. Minimizing privilege in this way can improve HPC user and developer productivity as well as reduce support workload for exascale applications.
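
Both rootless Podman and Charliecloud rely centrally on one kernel feature: the unprivileged user namespace, which grants a normal user root-like capabilities that are valid only inside the namespace. The sketch below is illustrative only, not code from the paper; it assumes a Linux kernel with unprivileged user namespaces enabled and Python 3.12+ for os.unshare:

    # Enter a new user + mount namespace with no privilege required.
    import os

    uid, gid = os.getuid(), os.getgid()
    os.unshare(os.CLONE_NEWUSER | os.CLONE_NEWNS)

    # Map the invoking user to UID/GID 0 inside the namespace only.
    with open("/proc/self/uid_map", "w") as f:
        f.write(f"0 {uid} 1")
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")  # required before an unprivileged gid_map write
    with open("/proc/self/gid_map", "w") as f:
        f.write(f"0 {gid} 1")

    print(os.getuid())  # 0: "root" in here, still unprivileged outside

Inside such a namespace a builder can run the root-requiring steps of a container image build (package installs, chown, and so on) while the kernel maps every operation back to the user's real, unprivileged identity.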


Read also

Each day the world inches closer to a climate catastrophe and a sustainability revolution. To avoid the former and achieve the latter we must transform our use of energy. Surprisingly, today's growing problem is that there is too much wind and solar power generation at the wrong times and in the wrong places. We argue for the construction of TerraWatt: a geographically-distributed, large-scale, zero-carbon compute infrastructure using renewable energy and older hardware. Delivering zero-carbon compute for general cloud workloads is challenging due to spatiotemporal power variability. We describe the systems challenges in using intermittent renewable power at scale to fuel such an older, decentralized compute infrastructure.
Containers are an emerging technology that hold promise for improving productivity and code portability in scientific computing. We examine Linux container technology for the distribution of a non-trivial scientific computing software stack and its execution on a spectrum of platforms from laptop computers through to high performance computing (HPC) systems. We show on a workstation and a leadership-class HPC system that when deployed appropriately there are no performance penalties running scientific programs inside containers. For Python code run on large parallel computers, the run time is reduced inside a container due to faster library imports. The software distribution approach and data that we present will help developers and users decide whether container technology is appropriate for them. We also provide guidance for the vendors of HPC systems that rely on proprietary libraries for performance on what they can do to make containers work seamlessly and without performance penalty.
Data engineering is becoming an increasingly important part of scientific discoveries with the adoption of deep learning and machine learning. Data engineering deals with a variety of data formats, storage, data extraction, transformation, and data movements. One goal of data engineering is to transform data from its original form into the vector/matrix/tensor formats accepted by deep learning and machine learning applications. There are many structures such as tables, graphs, and trees to represent data in these data engineering phases. Among them, tables are a versatile and commonly used format to load and process data. In this paper, we present a distributed Python API based on table abstraction for representing and processing data. Unlike existing state-of-the-art data engineering tools written purely in Python, our solution adopts high performance compute kernels in C++, with an in-memory table representation with Cython-based Python bindings. In the core system, we use MPI for distributed memory computations with a data-parallel approach for processing large datasets in HPC clusters.
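
As a rough illustration of the data-parallel table model described above, here is a sketch using mpi4py and pandas rather than the paper's actual C++/Cython system; the partitioned file names and column names are hypothetical:

    # Each MPI rank owns one partition of the table; collective
    # operations combine per-partition results across the cluster.
    from mpi4py import MPI
    import pandas as pd

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Hypothetical partitioned input: one CSV shard per rank.
    df = pd.read_csv(f"data/part-{rank:04d}.csv")

    # Local transform: table columns -> numeric matrix for ML frameworks.
    local = df[["x1", "x2"]].to_numpy()

    # Global aggregate over all partitions (elementwise sum).
    col_sums = comm.allreduce(local.sum(axis=0))
    if rank == 0:
        print("global column sums:", col_sums)
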
This paper considers the scheduling of jobs on distributed, heterogeneous High Performance Computing (HPC) clusters. Market-based approaches are known to be efficient for allocating limited resources to those that are most prepared to pay. This context is applicable to an HPC or cloud computing scenario where the platform is overloaded. In this paper, jobs are composed of dependent tasks. Each job has a non-increasing time-value curve associated with it. Jobs are submitted to and scheduled by a market-clearing centralised auctioneer. This paper compares the performance of several policies for generating task bids. The aim investigated here is to maximise the value for the platform provider while minimising the number of jobs that do not complete (or starve). It is found that the Projected Value Remaining bidding policy gives the highest level of value under a typical overload situation, and gives the lowest number of starved tasks across the space of utilisation examined. It does this by attempting to capture the urgency of tasks in the queue. At high levels of overload, some alternative algorithms produce slightly higher value, but at the cost of a far higher number of starved workflows.
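
To make the bidding idea concrete, here is a hedged sketch of a non-increasing time-value curve and a projected-value-style bid; the linear decay curve and the bid formula are illustrative stand-ins, not the paper's exact Projected Value Remaining definition:

    def time_value(job, t):
        """Non-increasing value curve: full value until the soft
        deadline, then linear decay to zero over a grace period."""
        if t <= job["deadline"]:
            return job["value"]
        decay = (t - job["deadline"]) / job["grace"]
        return max(0.0, job["value"] * (1.0 - decay))

    def projected_value_bid(job, t, remaining_work, rate=1.0):
        """Bid the value still achievable at the job's projected
        completion time, per unit of remaining work."""
        projected_finish = t + remaining_work / rate
        return time_value(job, projected_finish) / max(remaining_work, 1e-9)

    job = {"value": 100.0, "deadline": 50.0, "grace": 25.0}
    print(projected_value_bid(job, t=40.0, remaining_work=20.0))  # 3.0

Bids computed this way rise as a job approaches the point where its remaining value starts to decay, which is one way to capture the queue urgency the paper describes.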
Many HPC applications suffer from a bottleneck in the shared caches, instruction execution units, I/O or memory bandwidth, even though the remaining resources may be underutilized. It is hard for developers and runtime systems to ensure that all critical resources are fully exploited by a single application, so an attractive technique for increasing HPC system utilization is to colocate multiple applications on the same server. When applications share critical resources, however, contention on shared resources may lead to reduced application performance. In this paper, we show that server efficiency can be improved by first modeling the expected performance degradation of colocated applications based on measured hardware performance counters, and then exploiting the model to determine an optimized mix of colocated applications. This paper presents a new intelligent resource manager and makes the following contributions: (1) a new machine learning model to predict the performance degradation of colocated applications based on hardware counters and (2) an intelligent scheduling scheme deployed on an existing resource manager to enable application co-scheduling with minimum performance degradation. Our results show that our approach achieves performance improvements of 7% (avg) and 12% (max) compared to the standard policy commonly used by existing job managers.
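
The approach has two parts, a degradation model over hardware counters and a scheduler that queries it; the sketch below uses a generic scikit-learn regressor and synthetic data standing in for the paper's model and measured counters:

    # Predict colocation slowdown from hardware counters, then pick
    # the candidate pairing with the lowest predicted degradation.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Rows: counter profiles of application pairs, e.g.
    # [LLC misses, IPC, memory bandwidth, I/O rate] (synthetic here).
    X = rng.random((200, 4))
    y = 1.0 + X @ np.array([0.8, -0.3, 1.2, 0.1]) + rng.normal(0, 0.05, 200)

    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    def best_colocation(candidates):
        """Return the candidate counter profile with the lowest
        predicted slowdown."""
        return min(candidates, key=lambda feats: model.predict([feats])[0])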