The Scalable Systems Laboratory (SSL), part of the IRIS-HEP Software Institute, provides Institute participants and HEP software developers more broadly with a means to transition their R&D from conceptual toys to testbeds to production-scale prototypes. The SSL provides tooling, infrastructure, and services that support the innovation of novel analysis and data architectures, the development of software elements and tool-chains, reproducible functional and scalability testing of service components, and foundational systems R&D for accelerated services developed by the Institute. The SSL is built around a core team with expertise in scale testing and deployment of services across a wide range of cyberinfrastructure. The core team embeds with, and partners with, other areas of the Institute, and with LHC and other HEP development and operations teams as appropriate, to define investigations and the required service deployment patterns. We describe the approach and our experiences with early application deployments, including analysis platforms and intelligent data delivery systems.
Swarms of autonomous devices are increasing in ubiquity and size. There are two main schools of thought for controlling devices in such swarms: centralized and decentralized control. Centralized platforms achieve higher output quality but generate high network traffic and scale poorly, while decentralized systems are more scalable but less sophisticated. In this work we present HiveMind, a centralized coordination control platform for IoT swarms that is both scalable and performant. HiveMind leverages a centralized cluster for all resource-intensive computation, deferring lightweight, time-critical operations, such as obstacle avoidance, to the edge devices to reduce network traffic. HiveMind employs an event-driven serverless framework to run tasks on the cluster, guarantees fault tolerance both on the edge devices and in the serverless functions, and handles straggler tasks and underperforming devices. We evaluate HiveMind on a swarm of 16 programmable drones in two scenarios: searching for given items, and counting unique people in an area. We show that HiveMind achieves better performance and battery efficiency than fully centralized and fully decentralized platforms, while also handling load imbalances and failures gracefully, and allowing edge devices to leverage the cluster to collectively improve their output quality.
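As a rough illustration of the edge/cloud split this abstract describes, the sketch below keeps obstacle avoidance in a tight local loop on the drone while batching camera frames to a serverless endpoint for heavyweight analysis. It is a minimal sketch, not HiveMind's code: the device and task-invocation interfaces (sensors, motors, camera, invoke_serverless) are hypothetical placeholders.

```python
import queue
import threading

FRAME_BATCH = 8  # hypothetical batch size for cloud offload

def local_obstacle_avoidance(sensors, motors):
    """Time-critical control kept on the edge device (never offloaded)."""
    if sensors.nearest_obstacle_m() < 1.0:
        motors.brake_and_turn()

def offload_worker(frames, invoke_serverless):
    """Background thread: ship batched camera frames to the serverless cluster."""
    batch = []
    while True:
        batch.append(frames.get())
        if len(batch) == FRAME_BATCH:
            invoke_serverless("detect_items", batch)  # event-driven cluster task
            batch = []

def control_loop(sensors, motors, camera, invoke_serverless):
    frames = queue.Queue()
    threading.Thread(target=offload_worker,
                     args=(frames, invoke_serverless), daemon=True).start()
    while True:
        local_obstacle_avoidance(sensors, motors)  # lightweight, stays local
        frames.put(camera.capture())               # heavy analysis goes to the cluster
```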
The increased use of microservices to build web applications has spurred the rapid growth of Function-as-a-Service (FaaS), or serverless, computing platforms. While FaaS simplifies provisioning and scaling for application developers, it introduces new challenges in resource management that must be handled by the cloud provider. Our analysis of popular serverless workloads indicates that schedulers need to handle functions that are very short-lived, have unpredictable arrival patterns, and require expensive sandbox setup. The challenge of running a large number of such functions in a multi-tenant cluster makes existing scheduling frameworks unsuitable. We present Archipelago, a platform that enables low-latency request execution in a multi-tenant serverless setting. Archipelago views each application as a DAG of functions, and every DAG is associated with a latency deadline. Archipelago achieves its per-DAG request latency goals by: (1) partitioning a given cluster into a number of smaller worker pools and associating each pool with a semi-global scheduler (SGS), (2) using a latency-aware scheduler within each SGS along with proactive sandbox allocation to reduce overheads, and (3) using a load-balancing layer to route requests for different DAGs to the appropriate SGS and to automatically scale the number of SGSs per DAG. Our testbed results show that Archipelago meets the latency deadline for more than 99% of realistic application request workloads, and reduces tail latencies by up to 36x compared to state-of-the-art serverless platforms.
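A minimal sketch of how points (1) and (3) could fit together: a load-balancing layer maps each DAG to a semi-global scheduler, and each SGS orders pending requests by deadline. This is an assumed structure for illustration, not Archipelago's code; proactive sandbox allocation and automatic SGS scaling are omitted, and the class and parameter names are placeholders.

```python
import heapq
import itertools
import time
from collections import defaultdict

class SemiGlobalScheduler:
    """One SGS per worker pool; orders pending requests by absolute deadline (EDF)."""
    def __init__(self, pool_id):
        self.pool_id = pool_id
        self._seq = itertools.count()   # tie-breaker so heap entries never compare requests
        self._ready = []                # min-heap of (deadline, seq, request)

    def submit(self, request, deadline_s):
        heapq.heappush(self._ready, (time.time() + deadline_s, next(self._seq), request))

    def next_request(self):
        return heapq.heappop(self._ready)[2] if self._ready else None

class LoadBalancer:
    """Routes each DAG's requests to its assigned SGS; the assignment policy is simplified."""
    def __init__(self, schedulers):
        self.schedulers = schedulers
        self.assignment = defaultdict(int)  # dag_id -> index of assigned SGS

    def route(self, dag_id, request, deadline_s):
        sgs = self.schedulers[self.assignment[dag_id] % len(self.schedulers)]
        sgs.submit(request, deadline_s)
```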
Setting up the infrastructure to manage a software project can become a task as significant as writing the software itself. A variety of useful open-source tools are available, such as web-based viewers for version control systems, wikis for collaborative discussions, and bug-tracking systems, but their use in high-energy physics, outside large collaborations, is limited. Understandably, physicists would rather do physics than configure project management tools. We introduce the CEDAR HepForge system, which provides a lightweight development environment for HEP software. Services available as part of HepForge include the above-mentioned tools as well as mailing lists, shell accounts, archiving of releases, and low-maintenance web space. HepForge also exists to promote best-practice software development methods and to provide a central repository for reusable HEP software and phenomenology codes.
As distributed systems grow in scale and complexity, the need for flexible automation of systems management functions also grows. We outline a framework for building tools that provide distributed, scalable, declarative, modular, and continuous automation for distributed systems. We focus on four points of design: 1) a state-management approach that prescribes the source of truth for configured and discovered system states; 2) a technique to solve the declarative unification problem for a class of automation problems, providing state convergence and modularity; 3) an eventual-consistency approach to state synchronization, which provides automation at scale; 4) an event-driven architecture that provides always-on state enforcement. We describe the methodology, the software architecture of the framework, and the constraints under which these techniques apply to an automation problem. We then give an overview of a reference application built on this framework that provides state-aware system provisioning and node lifecycle management, highlighting its key advantages. We conclude with a discussion of current and future applications.
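The following is a schematic sketch of the state-convergence loop such a framework implies: modules declare the configured state they own, a discovery hook reports the system's actual state, and an event-driven loop drives the two toward convergence. All interfaces here (desired_state, discover, apply_mutation, the event stream) are assumptions for illustration, not the framework's actual API.

```python
# Minimal reconciliation sketch under assumed interfaces (not the framework's API).
def reconcile(node, modules, discover, apply_mutation):
    discovered = discover(node)                  # source of truth for runtime state
    for module in modules:
        for key, want in module.desired_state(node).items():  # configured state
            if discovered.get(key) != want:
                apply_mutation(node, key, want)  # converge toward the declaration

def enforce_forever(changed_nodes, modules, discover, apply_mutation):
    # Event-driven, always-on enforcement: reconcile whenever a node's state changes.
    for node in changed_nodes:                   # an event stream yielding changed nodes
        reconcile(node, modules, discover, apply_mutation)
```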
In response to the soaring demand for human mobility data, especially during disaster events such as the COVID-19 pandemic, and the associated big data challenges, we develop a scalable online platform for extracting, analyzing, and sharing multi-source, multi-scale human mobility flows. Within the platform, an origin-destination-time (ODT) data model is proposed to work with scalable query engines and handle heterogeneous mobility data in large volumes with extensive spatial coverage, allowing billions of origin-destination (OD) flows to be extracted, queried, and aggregated efficiently in parallel at the server side. An interactive spatial web portal, ODT Flow Explorer, is developed to allow users to explore multi-source mobility datasets at user-defined spatiotemporal scales. To promote reproducibility and replicability, we further develop ODT Flow REST APIs that give researchers the flexibility to access the data programmatically via workflows, code, and programs. Demonstrations illustrate how the APIs integrate with scientific workflows and with the Jupyter Notebook environment. We believe the platform, coupled with the derived multi-scale mobility data, can assist in monitoring and analyzing human mobility during disaster events such as the ongoing COVID-19 pandemic, and can benefit both scientific communities and the general public in understanding human mobility dynamics.
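A minimal sketch of the kind of programmatic access a REST API enables from a Jupyter Notebook. The endpoint URL, path, and parameter names below are placeholders chosen for illustration, not the actual ODT Flow REST API.

```python
# Hypothetical sketch of notebook-based access; endpoint and parameters are placeholders.
import requests
import pandas as pd

def fetch_od_flows(base_url, origin, destination, start, end, scale="county"):
    """Request aggregated origin-destination flows for a spatiotemporal window."""
    resp = requests.get(f"{base_url}/flows", params={
        "origin": origin, "destination": destination,
        "start": start, "end": end, "scale": scale,
    })
    resp.raise_for_status()
    return pd.DataFrame(resp.json())

# Example usage (placeholder values):
# flows = fetch_od_flows("https://example.org/odt", "17031", "36061",
#                        "2020-03-01", "2020-03-31")
# flows.groupby("date")["count"].sum().plot()
```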