Serverless computing has emerged as a promising alternative to infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) cloud platforms for applications with ample parallelism and intermittent activity. Serverless promises greater resource elasticity, significant cost savings, and simplified application deployment. All major cloud providers, including Amazon, Google, and Microsoft, have introduced serverless to their public cloud offerings. For serverless to reach its potential, there is a pressing need for programming frameworks that abstract the deployment complexity away from the user. This includes simplifying the process of writing applications for serverless environments, automating task and data partitioning, and handling scheduling and fault tolerance. We present Ripple, a programming framework designed specifically to take applications written for single-machine execution and allow them to exploit the task parallelism of serverless. Ripple exposes a simple interface that users can leverage to express the high-level dataflow of a wide spectrum of applications, including machine learning (ML) analytics, genomics, and proteomics. Ripple also automates resource provisioning to meet user-defined QoS targets, and handles fault tolerance by eagerly detecting straggler tasks. We port Ripple to AWS Lambda and show that, across a set of diverse applications, it provides an expressive and generalizable programming framework that simplifies running data-parallel applications on serverless, and can improve performance by up to 80x over IaaS/PaaS clouds at similar cost.
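As a rough illustration of the kind of interface such a framework can expose, the sketch below shows a hypothetical Ripple-style dataflow declaration. The `Pipeline`, `split`, `map`, and `combine` names are assumptions made for illustration, not Ripple's actual API; the point is that the user declares high-level stages and the framework maps them onto parallel serverless invocations.

```python
# Hypothetical sketch of a Ripple-style dataflow: a user describes the
# high-level stages of a single-machine program, and the framework maps
# each stage onto parallel serverless (e.g., AWS Lambda) tasks.

class Pipeline:
    def __init__(self, name):
        self.name = name
        self.stages = []

    def split(self, chunk_size):
        # Partition the input object into fixed-size chunks, one per task.
        self.stages.append(("split", chunk_size))
        return self

    def map(self, func):
        # Apply a user function independently to every chunk in parallel.
        self.stages.append(("map", func))
        return self

    def combine(self, func):
        # Merge per-chunk outputs back into a single result.
        self.stages.append(("combine", func))
        return self


def classify(chunk):
    # Placeholder for a per-chunk analytics step (e.g., ML inference).
    return [record.upper() for record in chunk]


pipeline = (
    Pipeline("ml-analytics")
    .split(chunk_size=64 * 1024 * 1024)  # 64 MB input chunks
    .map(classify)
    .combine(lambda parts: [r for part in parts for r in part])
)
```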
As distributed systems grow in scale and complexity, so does the need for flexible automation of systems-management functions. We outline a framework for building tools that provide distributed, scalable, declarative, modular, and continuous automation for distributed systems. We focus on four points of design: 1) a state-management approach that prescribes a source of truth for configured and discovered system states; 2) a technique to solve the declarative unification problem for a class of automation problems, providing state convergence and modularity; 3) an eventual-consistency approach to state synchronization that provides automation at scale; 4) an event-driven architecture that provides always-on state enforcement. We describe the methodology, the software architecture of the framework, and the constraints under which these techniques apply to an automation problem. We give an overview of a reference application built on this framework that provides state-aware system provisioning and node lifecycle management, highlighting its key advantages. We conclude with a discussion of current and future applications.
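To make the state-convergence idea concrete, here is a minimal sketch, under assumed names, of the kind of loop such a tool runs: discovered state is continuously compared against the configured source of truth, and corrective actions are emitted whenever the two diverge. This is an illustration of the general technique, not the paper's actual code.

```python
# Illustrative convergence loop for declarative automation. The configured
# (desired) state is the source of truth; discovered state is whatever the
# system currently reports. Names and states here are hypothetical.

desired = {"node1": "booted", "node2": "booted"}        # configured state
observed = {"node1": "booted", "node2": "powered_off"}  # discovered state

# Mutation table: (current, goal) -> action moving a node one step closer.
actions = {
    ("powered_off", "booted"): "power_on",
    ("powered_on", "booted"): "start_os",
}

def converge(desired, observed):
    """Emit the next action for every node whose state has drifted."""
    for node, goal in desired.items():
        current = observed.get(node, "unknown")
        if current != goal:
            yield node, actions.get((current, goal), "alert_operator")

for node, action in converge(desired, observed):
    print(f"{node}: {action}")  # e.g., node2: power_on
```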
The proliferation of camera-enabled devices and large video repositories has led to a diverse set of video analytics applications. These applications rely on video pipelines, represented as DAGs of operations, to transform videos, process extracted metadata, and answer questions such as "Is this intersection congested?" The latency and resource efficiency of pipelines can be optimized using configurable knobs for each operation (e.g., sampling rate, batch size, or type of hardware used). However, determining efficient configurations is challenging because (a) the configuration search space is exponentially large, (b) the optimal configuration depends on users' desired latency and cost targets, and (c) input video contents may exercise different paths in the DAG and produce a variable amount of intermediate results. Existing video analytics and processing systems leave it to users to manually configure operations and select hardware resources. We present Llama: a heterogeneous and serverless framework for auto-tuning video pipelines. Given an end-to-end latency target, Llama optimizes for cost efficiency by (a) calculating a latency target for each operation invocation, and (b) dynamically running a cost-based optimizer to assign configurations across heterogeneous hardware that best meet the calculated per-invocation latency target. This makes the problem of auto-tuning large video pipelines tractable and allows us to handle input-dependent behavior, conditional branches in the DAG, and execution variability. We describe the algorithms in Llama and evaluate it on a cloud platform using serverless CPU and GPU resources. We show that, compared to state-of-the-art cluster and serverless video analytics and processing systems, Llama achieves 7.8x lower latency and 16x cost reduction on average.
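The sketch below conveys the flavor of this two-step approach with made-up latency and cost numbers and hypothetical names (it is not Llama's actual optimizer): derive a per-invocation latency target from the end-to-end goal, then pick the cheapest hardware configuration that meets it.

```python
# Toy sketch of latency-target splitting plus cost-based config selection.
# Candidate configs per operation: (name, expected latency in s, cost in $).
configs = {
    "decode": [("cpu-1core", 4.0, 0.01), ("cpu-4core", 1.2, 0.03)],
    "detect": [("cpu-4core", 9.0, 0.05), ("gpu", 1.5, 0.20)],
}

def assign(end_to_end_target, ops):
    """Assign the cheapest config meeting each op's share of the target."""
    plan, remaining, remaining_ops = {}, end_to_end_target, len(ops)
    for op in ops:
        per_op_target = remaining / remaining_ops  # even split of the slack
        feasible = [c for c in configs[op] if c[1] <= per_op_target]
        # Cheapest feasible config; fall back to the fastest one if none fits.
        choice = (min(feasible, key=lambda c: c[2]) if feasible
                  else min(configs[op], key=lambda c: c[1]))
        plan[op] = choice[0]
        remaining -= choice[1]
        remaining_ops -= 1
    return plan

print(assign(6.0, ["decode", "detect"]))
# -> {'decode': 'cpu-4core', 'detect': 'gpu'}
```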
Industry and academia have proposed many distributed graph processing systems, but existing systems are not friendly enough for users such as data analysts and algorithm engineers. On the one hand, the programming models and interfaces differ considerably across existing systems, leading to high learning and program-migration costs. On the other hand, these graph processing systems are tightly bound to their underlying distributed computing platforms, requiring users to be familiar with distributed computing. To improve the usability of distributed graph processing, we propose UniGPS, a unified distributed graph programming framework. Firstly, we propose VCProg, a unified cross-platform graph programming model for UniGPS. VCProg hides the details of distributed computing from users. It is compatible with the popular graph programming models Pregel, GAS, and Push-Pull. VCProg programs can be executed by compatible distributed graph processing systems without modification, reducing users' learning overhead. Secondly, UniGPS supports Python as the programming language. We propose an interprocess-communication-based execution-environment isolation mechanism to enable Java/C++-based systems to call user-defined methods written in Python. The experimental results show that UniGPS enables users to process big graphs beyond the memory capacity of a single machine without sacrificing usability. UniGPS shows near-linear data scalability and machine scalability.
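Below is a toy vertex-centric program of the kind such a model unifies; the tiny synchronous engine stands in for whichever distributed backend (Pregel-, GAS-, or Push-Pull-style) would actually execute it. All names are illustrative assumptions, not UniGPS's real API.

```python
# Vertex-centric PageRank in the Pregel style, with a minimal synchronous
# superstep loop standing in for a distributed backend.

# Toy directed graph: vertex -> list of out-neighbors.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def compute(value, messages, superstep, out_degree):
    """User-defined per-vertex logic: one PageRank step."""
    if superstep == 0:
        value = 1.0
    else:
        value = 0.15 + 0.85 * sum(messages)
    # Each out-neighbor receives an equal share of this vertex's rank.
    return value, value / max(out_degree, 1)

def run(graph, supersteps=20):
    values = {v: 0.0 for v in graph}
    inbox = {v: [] for v in graph}
    for step in range(supersteps):
        outbox = {v: [] for v in graph}
        for v, neighbors in graph.items():
            values[v], share = compute(values[v], inbox[v], step, len(neighbors))
            for n in neighbors:
                outbox[n].append(share)
        inbox = outbox  # synchronous message exchange between supersteps
    return values

print(run(graph))  # approximate PageRank scores for a, b, c
```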
This paper describes our experiences creating Tornado: a practical and efficient heterogeneous programming framework for managed languages. The novel aspect of Tornado is that it turns the programming of heterogeneous systems from an activity predominantly based on a priori knowledge into one based on a posteriori knowledge. Put differently, developers no longer need to complicate their code by catering for every possible eventuality. Instead, Tornado provides the ability to specialize each application for a specific system in situ, which avoids the need for it to be pre-configured by the developer. To enable this, Tornado employs a sophisticated runtime system that can dynamically configure all aspects of the application, from selecting which parallelization scheme to apply to specifying which accelerators to use. With this ability, the end user, and not the developer, can transparently make use of any available multi-/many-core processor or hardware accelerator. To showcase the impact of Tornado, we implement a real-world computer vision application and deploy it across nine accelerators without having to modify the source code or even explicitly re-compile the application. Using dynamic configuration, we show that our implementation can achieve up to 124 frames per second (FPS), an up to 166x speedup over the reference implementation. Finally, our implementation is always within 21% of a hand-written OpenCL version while avoiding much of the programming tedium.
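The sketch below conveys the a priori versus a posteriori distinction (in Python, although Tornado itself targets Java): the kernel is written once, device-neutral, and the runtime, not the developer, binds it to an accelerator discovered at execution time. All names are illustrative, not Tornado's API.

```python
# Conceptual sketch of in-situ specialization: device selection happens
# at run time from discovered hardware, not at development time.

def saxpy(alpha, x, y):
    # Device-neutral kernel, expressed once by the developer.
    return [alpha * xi + yi for xi, yi in zip(x, y)]

class Runtime:
    def __init__(self):
        # Discovered in situ; the developer never enumerates these.
        self.devices = ["gpu-0", "fpga-0", "cpu"]

    def execute(self, task, *args, preferred=None):
        # A real system would pick a device from profiling measurements;
        # here a stub policy takes the first discovered device.
        device = preferred or self.devices[0]
        print(f"running {task.__name__} on {device}")
        return task(*args)

rt = Runtime()
result = rt.execute(saxpy, 2.0, [1.0, 2.0], [3.0, 4.0])  # -> [5.0, 8.0]
```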
Serverless computing has grown in popularity in recent years, with an increasing number of applications being built on Functions-as-a-Service (FaaS) platforms. By default, FaaS platforms support retry-based fault tolerance, but this is insufficient for programs that modify shared state, as they can unwittingly persist partial sets of updates in case of failures. To address this challenge, we would like atomic visibility of the updates made by a FaaS application. In this paper, we present AFT, an atomic fault tolerance shim for serverless applications. AFT interposes between a commodity FaaS platform and a storage engine and ensures atomic visibility of updates by enforcing the read atomic isolation guarantee. AFT supports new protocols to guarantee read atomic isolation in the serverless setting. We demonstrate that AFT introduces minimal overhead relative to existing storage engines and scales smoothly to thousands of requests per second, while preventing a significant number of consistency anomalies.
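To illustrate the guarantee itself (this is a sketch of the property, not AFT's actual protocol), the code below buffers a request's writes and installs them all-or-nothing, so a reader can never observe a partial set of one request's updates. All names are hypothetical.

```python
# Hand-rolled illustration of atomic visibility / read atomic isolation:
# writes from one request commit together under one transaction id, and
# readers only ever see committed versions.

class AtomicShim:
    def __init__(self):
        self.committed = {}  # key -> (txn_id, value), last committed version
        self.next_txn = 0

    def run(self, request, reads):
        """Run a request: read a committed snapshot, install writes atomically."""
        snapshot = {k: self.committed.get(k, (None, None))[1] for k in reads}
        buffered = request(snapshot)          # request computes its writes
        txn = self.next_txn = self.next_txn + 1
        for key, value in buffered.items():   # atomic install (single-threaded here)
            self.committed[key] = (txn, value)
        return snapshot

shim = AtomicShim()
shim.run(lambda snap: {"x": 1, "y": 1}, reads=[])  # x and y commit together
view = shim.run(lambda snap: {}, reads=["x", "y"])
assert view == {"x": 1, "y": 1}  # never observes x's update without y's
```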