
PaSh: Light-touch Data-Parallel Shell Processing

Added by Konstantinos Kallas
Publication date: 2020
Language: English





This paper presents PaSh, a system for parallelizing POSIX shell scripts. Given a script, PaSh converts it to a dataflow graph, performs a series of semantics-preserving program transformations that expose parallelism, and then converts the dataflow graph back into a script -- one that adds POSIX constructs to explicitly guide parallelism, coupled with PaSh-provided Unix-aware runtime primitives for addressing performance- and correctness-related issues. A lightweight annotation language allows command developers to express key parallelizability properties about their commands. An accompanying parallelizability study of POSIX and GNU commands -- two large and commonly used groups -- guides the annotation language and the optimized aggregator library that PaSh uses. Finally, PaSh's extensive evaluation over 44 unmodified Unix scripts shows significant speedups (0.89–61.1×, avg: 6.7×) stemming from the combination of its program transformations and runtime primitives.
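For intuition only, here is a minimal Python sketch of the split / parallel-map / aggregate transformation that PaSh automates for shell pipelines. This is not PaSh's implementation; the input file name, chunk count, and the choice of grep as the parallelizable command are illustrative assumptions.

# Sketch of the data-parallel idea: split input on record boundaries,
# run a parallelizable command on each chunk, then aggregate the outputs.
import concurrent.futures
import subprocess

def run_grep(chunk: bytes) -> bytes:
    # Run one instance of the command (here: grep) on a single input chunk.
    return subprocess.run(
        ["grep", "ERROR"], input=chunk, stdout=subprocess.PIPE, check=False
    ).stdout

def parallel_grep(path: str, n_chunks: int = 4) -> bytes:
    data = open(path, "rb").read()
    # Split on line boundaries so no record straddles two chunks.
    lines = data.splitlines(keepends=True)
    step = max(1, len(lines) // n_chunks)
    chunks = [b"".join(lines[i:i + step]) for i in range(0, len(lines), step)]
    with concurrent.futures.ThreadPoolExecutor() as pool:
        parts = list(pool.map(run_grep, chunks))
    # For a stateless, line-oriented command, concatenation is a valid aggregator.
    return b"".join(parts)

if __name__ == "__main__":
    print(parallel_grep("access.log").decode(), end="")  # "access.log" is a placeholder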





High-level programming languages such as Python are increasingly used to provide intuitive interfaces to libraries written in lower-level languages and for assembling applications from various components. This migration towards orchestration rather than implementation, coupled with the growing need for parallel computing (e.g., due to big data and the end of Moore's law), necessitates rethinking how parallelism is expressed in programs. Here, we present Parsl, a parallel scripting library that augments Python with simple, scalable, and flexible constructs for encoding parallelism. These constructs allow Parsl to construct a dynamic dependency graph of components that it can then execute efficiently on one or many processors. Parsl is designed for scalability, with an extensible set of executors tailored to different use cases, such as low-latency, high-throughput, or extreme-scale execution. We show, via experiments on the Blue Waters supercomputer, that Parsl executors can allow Python scripts to execute components with as little as 5 ms of overhead, scale to more than 250 000 workers across more than 8000 nodes, and process upward of 1200 tasks per second. Other Parsl features simplify the construction and execution of composite programs by supporting elastic provisioning and scaling of infrastructure, fault-tolerant execution, and integrated wide-area data management. We show that these capabilities satisfy the needs of many-task, interactive, online, and machine learning applications in fields such as biology, cosmology, and materials science.
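For concreteness, a minimal sketch of Parsl's decorator-and-futures style, run with its stock local-threads configuration rather than the cluster setups evaluated in the paper; the toy workload is an assumption.

# Parsl apps return futures immediately; Parsl builds the dependency graph
# and executes tasks as their inputs become ready.
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)

@python_app
def square(x):
    return x * x

futures = [square(i) for i in range(8)]          # submit eight independent tasks
print(sum(f.result() for f in futures))          # block only when results are needed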
In recent years the computing landscape has seen an increasing shift towards specialized accelerators. Field-programmable gate arrays (FPGAs) are particularly promising as they offer significant performance and energy improvements compared to CPUs for a wide class of applications and are far more flexible than fixed-function ASICs. However, FPGAs are difficult to program. Traditional programming models for reconfigurable logic use low-level hardware description languages like Verilog and VHDL, which have none of the productivity features of modern software development languages but produce very efficient designs, and low-level software languages like C and OpenCL coupled with high-level synthesis (HLS) tools that typically produce designs that are far less efficient. Functional languages with parallel patterns are a better fit for hardware generation because they both provide high-level abstractions to programmers with little experience in hardware design and avoid many of the problems faced when generating hardware from imperative languages. In this paper, we identify two optimizations that are important when using parallel patterns to generate hardware: tiling and metapipelining. We present a general representation of tiled parallel patterns, and provide rules for automatically tiling patterns and generating metapipelines. We demonstrate experimentally that these optimizations result in speedups up to 40x on a set of benchmarks from the data analytics domain.
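The tiling transformation can be sketched in software terms; the paper's representation targets hardware generation, so the Python below is only an analogy, and the tile size is an arbitrary illustrative choice.

# Software analogue of tiling a parallel reduction pattern: compute a partial
# result per fixed-size tile, then combine the partials in an outer reduction.
from concurrent.futures import ThreadPoolExecutor

def tiled_sum(xs, tile=1024):
    tiles = [xs[i:i + tile] for i in range(0, len(xs), tile)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(sum, tiles))   # inner reductions, one per tile
    return sum(partials)                        # outer reduction over the partials

print(tiled_sum(list(range(10_000))))           # 49995000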
Constraint Handling Rules (CHR) is a declarative rule-based formalism and language. Concurrency is inherent as rules can be applied to subsets of constraints in parallel. Parallel implementations of CHR, be it in software, be it in hardware, use different execution strategies for parallel execution of CHR programs depending on the implementation language. In this report, our goal is to analyze parallel execution of CHR programs from a more general conceptual perspective. We want to experimentally see what is possible when CHR programs are automatically parallelized. For this purpose, a sequential simulation of parallel CHR execution is used to systematically encode different parallel execution strategies. In exhaustive experiments on some typical examples from the literature, parallel and sequential execution can be compared to each other. The number of processors can be bounded or unbounded for a more theoretical analysis. As a result, some preliminary but indicative observations on the influence of the execution strategy can be made for the different problem classes and in general.
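To make the contrast between parallel rounds and sequential steps concrete, here is a small sequential simulation in the spirit of the report, using an illustrative min-style rule ("min(X) \ min(Y) <=> Y >= X | true") and step counting that are assumptions, not the report's exact encoding.

# The store is a multiset of min/1 constraints; the rule removes the larger
# constraint of a pair.  In one parallel round, disjoint pairs fire at once;
# sequentially, the rule fires on one pair per step.
def parallel_rounds(store):
    rounds = 0
    while len(store) > 1:
        store.sort()
        # All disjoint pairs fire in the same round, each keeping its smaller value.
        nxt = [min(store[i], store[i + 1]) for i in range(0, len(store) - 1, 2)]
        if len(store) % 2:            # an unpaired constraint carries over unchanged
            nxt.append(store[-1])
        store = nxt
        rounds += 1
    return store[0], rounds

def sequential_steps(store):
    steps = 0
    while len(store) > 1:
        store.remove(max(store))      # one rule application per step
        steps += 1
    return store[0], steps

s = [7, 3, 9, 1, 4, 8, 2, 5]
print(parallel_rounds(list(s)))       # (1, 3): about log2(n) parallel rounds
print(sequential_steps(list(s)))      # (1, 7): n - 1 sequential steps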
Data-intensive applications are becoming commonplace in all science disciplines. They span a rich set of sub-domains such as data engineering, deep learning, and machine learning, and are built around efficient data abstractions and operators suited to the applications of different domains. The lack of a clear definition of data structures and operators in the field has often led to implementations that do not work well together. The HPTMT architecture that we proposed recently identifies a set of data structures, operators, and an execution model for creating rich data applications, linking all aspects of data engineering and data science together efficiently. This paper elaborates and illustrates this architecture using an end-to-end application with deep learning and data engineering parts working together.
Shen Li, Yanli Zhao, Rohan Varma (2020)
This paper presents the design, implementation, and evaluation of the PyTorch distributed data parallel module. PyTorch is a widely-adopted scientific computing package used in deep learning research and applications. Recent advances in deep learning argue for the value of large datasets and large models, which necessitates the ability to scale out model training to more computational resources. Data parallelism has emerged as a popular solution for distributed training thanks to its straightforward principle and broad applicability. In general, the technique of distributed data parallelism replicates the model on every computational resource to generate gradients independently and then communicates those gradients at each iteration to keep model replicas consistent. Despite the conceptual simplicity of the technique, the subtle dependencies between computation and communication make it non-trivial to optimize the distributed training efficiency. As of v1.5, PyTorch natively provides several techniques to accelerate distributed data parallel training, including bucketing gradients, overlapping computation with communication, and skipping gradient synchronization. Evaluations show that, when configured appropriately, the PyTorch distributed data parallel module attains near-linear scalability using 256 GPUs.
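A toy single-node sketch of the DistributedDataParallel API discussed above; it assumes a launcher such as torchrun has set the process-group environment variables, and the model, data, and "sync every second step" schedule are placeholders rather than the paper's evaluation setup.

# DDP buckets gradients for all-reduce; no_sync() skips the all-reduce on an
# iteration (gradient accumulation), and the next backward outside the context
# synchronizes the accumulated gradients.
from contextlib import nullcontext

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="gloo")        # "nccl" on GPU clusters
model = torch.nn.Linear(10, 1)
ddp_model = DDP(model, bucket_cap_mb=25)       # gradients are bucketed for communication
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(4):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    ctx = ddp_model.no_sync() if step % 2 == 0 else nullcontext()
    with ctx:
        loss_fn(ddp_model(x), y).backward()
    if step % 2 == 1:                          # synchronize and update every second step
        optimizer.step()
        optimizer.zero_grad()

dist.destroy_process_group()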
