
A Serverless Cloud-Fog Platform for DNN-Based Video Analytics with Incremental Learning

Added by Huaizheng Zhang
Publication date: 2021
Language: English





DNN-based video analytics has empowered many new applications (e.g., automated retail). Meanwhile, the proliferation of fog devices provides developers with more design options to improve performance and save cost. To the best of our knowledge, this paper presents the first serverless system that takes full advantage of the client-fog-cloud synergy to better serve DNN-based video analytics. Specifically, the system aims to achieve two goals: 1) provide optimal analytics results under the constraints of low bandwidth usage and short round-trip time (RTT) by judiciously managing the computational and bandwidth resources deployed in the client, fog, and cloud environments; and 2) free developers from tedious administration and operation tasks, including DNN deployment and cloud and fog resource management. To this end, we implement a holistic cloud-fog system referred to as VPaaS (Video-Platform-as-a-Service). VPaaS adopts serverless computing to enable developers to build a video analytics pipeline by simply programming a set of functions (e.g., model inference), which are then orchestrated to process videos through carefully designed modules. To save bandwidth and reduce RTT, VPaaS provides a new video streaming protocol that sends only low-quality video to the cloud. The state-of-the-art (SOTA) DNNs deployed at the cloud identify regions of video frames that need further processing at the fog ends. At the fog ends, misidentified labels in these regions are corrected using a lightweight DNN model. To address data drift issues, we incorporate limited human feedback into the system to verify the results and adopt incremental learning to improve the system continuously. The evaluation demonstrates that VPaaS is superior to several SOTA systems: it maintains high accuracy while reducing bandwidth usage by up to 21%, RTT by up to 62.5%, and cloud monetary cost by up to 50%.
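
The abstract describes a programming model in which developers express the pipeline as a set of serverless functions that the platform places across client, fog, and cloud, with a heavy cloud detector flagging uncertain regions and a lighter fog model re-checking only those regions. The sketch below is only a minimal illustration of what such a developer-facing interface could look like; all names (Pipeline, Region, the 0.6 confidence threshold, the stand-in model outputs) are hypothetical and are not taken from VPaaS.

    # A minimal sketch of the developer-facing idea described above: register a few
    # functions, tag each with a preferred placement, and let an orchestrator chain
    # them. All names here (Pipeline, Region, the 0.6 threshold) are hypothetical
    # and are not the VPaaS API.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Region:
        frame_id: int
        box: Tuple[int, int, int, int]   # (x, y, w, h) flagged for re-checking
        label: str
        confidence: float

    class Pipeline:
        """Chains user-defined serverless functions and records their placement."""
        def __init__(self):
            self.stages: List[Tuple[str, str, Callable]] = []

        def function(self, name: str, placement: str):
            def register(fn: Callable) -> Callable:
                self.stages.append((name, placement, fn))
                return fn
            return register

        def run(self, data):
            # A real orchestrator would dispatch each stage to its placement;
            # here the stages simply run in order.
            for _name, _placement, fn in self.stages:
                data = fn(data)
            return data

    pipe = Pipeline()

    @pipe.function("cloud_inference", placement="cloud")
    def cloud_inference(frames) -> List[Region]:
        # Stand-in for the heavy cloud detector on the low-quality stream:
        # it emits detections, some with low confidence.
        return [Region(0, (10, 10, 50, 50), "person", 0.95),
                Region(0, (80, 20, 40, 40), "dog", 0.45)]

    @pipe.function("fog_refinement", placement="fog")
    def fog_refinement(regions: List[Region]) -> List[Region]:
        # Stand-in for the lightweight fog model: re-check only low-confidence
        # regions against the full-quality frames kept at the fog node.
        return [r if r.confidence >= 0.6
                else Region(r.frame_id, r.box, "cat", 0.88)
                for r in regions]

    print(pipe.run(None))

The point of the sketch is the division of labor: the expensive detector works on the downsampled stream and marks what it is unsure about, and only those regions are re-examined closer to the video source.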




Related research

The proliferation of camera-enabled devices and large video repositories has led to a diverse set of video analytics applications. These applications rely on video pipelines, represented as DAGs of operations, to transform videos, process extracted metadata, and answer questions such as "Is this intersection congested?" The latency and resource efficiency of pipelines can be optimized using configurable knobs for each operation (e.g., sampling rate, batch size, or type of hardware used). However, determining efficient configurations is challenging because (a) the configuration search space is exponentially large, (b) the optimal configuration depends on users' desired latency and cost targets, and (c) input video contents may exercise different paths in the DAG and produce a variable amount of intermediate results. Existing video analytics and processing systems leave it to the users to manually configure operations and select hardware resources. We present Llama: a heterogeneous and serverless framework for auto-tuning video pipelines. Given an end-to-end latency target, Llama optimizes for cost efficiency by (a) calculating a latency target for each operation invocation, and (b) dynamically running a cost-based optimizer to assign configurations across heterogeneous hardware that best meet the calculated per-invocation latency targets. This makes the problem of auto-tuning large video pipelines tractable and allows us to handle input-dependent behavior, conditional branches in the DAG, and execution variability. We describe the algorithms in Llama and evaluate it on a cloud platform using serverless CPU and GPU resources. We show that compared to state-of-the-art cluster and serverless video analytics and processing systems, Llama achieves 7.8x lower latency and 16x cost reduction on average.
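
The per-invocation decision described above, picking the cheapest configuration that still meets a calculated latency target, can be illustrated with a small sketch. The configurations and numbers below are invented for illustration and are not from Llama, whose optimizer also accounts for DAG structure and input-dependent behavior.

    # A toy version of the per-invocation decision described above: given a latency
    # target for one operation invocation, pick the cheapest configuration that
    # still meets it. The configurations and numbers are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Config:
        name: str          # e.g. a knob setting plus a hardware choice
        latency_s: float   # expected latency of one invocation
        cost_usd: float    # expected cost of one invocation

    def cheapest_meeting_target(configs, latency_target_s):
        """Return the lowest-cost config whose latency fits the target,
        or the fastest config as a best effort if none fits."""
        feasible = [c for c in configs if c.latency_s <= latency_target_s]
        if feasible:
            return min(feasible, key=lambda c: c.cost_usd)
        return min(configs, key=lambda c: c.latency_s)

    configs = [
        Config("cpu-batch1", latency_s=0.80, cost_usd=0.0004),
        Config("cpu-batch8", latency_s=0.30, cost_usd=0.0011),
        Config("gpu-batch8", latency_s=0.05, cost_usd=0.0032),
    ]
    print(cheapest_meeting_target(configs, latency_target_s=0.4).name)  # cpu-batch8
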
The increased use of micro-services to build web applications has spurred the rapid growth of Function-as-a-Service (FaaS), or serverless, computing platforms. While FaaS simplifies provisioning and scaling for application developers, it introduces new challenges in resource management that need to be handled by the cloud provider. Our analysis of popular serverless workloads indicates that schedulers need to handle functions that are very short-lived, have unpredictable arrival patterns, and require expensive setup of sandboxes. The challenge of running a large number of such functions in a multi-tenant cluster makes existing scheduling frameworks unsuitable. We present Archipelago, a platform that enables low-latency request execution in a multi-tenant serverless setting. Archipelago views each application as a DAG of functions, and every DAG is associated with a latency deadline. Archipelago achieves its per-DAG request latency goals by: (1) partitioning a given cluster into a number of smaller worker pools and associating each pool with a semi-global scheduler (SGS), (2) using a latency-aware scheduler within each SGS along with proactive sandbox allocation to reduce overheads, and (3) using a load-balancing layer to route requests for different DAGs to the appropriate SGS and automatically scale the number of SGSs per DAG. Our testbed results show that Archipelago meets the latency deadline for more than 99% of realistic application request workloads, and reduces tail latencies by up to 36x compared to state-of-the-art serverless platforms.
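
One ingredient listed above is a latency-aware scheduler inside each SGS. A minimal sketch of that idea, an earliest-deadline-first queue of function invocations, is shown below; the class name and behavior are illustrative assumptions, not Archipelago's implementation.

    # A toy earliest-deadline-first queue, as one might use inside a semi-global
    # scheduler: invocations with tighter deadlines run first. Illustrative only.
    import heapq
    import itertools
    import time

    class DeadlineQueue:
        """Run queued function invocations in earliest-deadline-first order."""
        def __init__(self):
            self._heap = []
            self._counter = itertools.count()   # tie-breaker for equal deadlines

        def submit(self, fn, deadline_s):
            heapq.heappush(self._heap, (deadline_s, next(self._counter), fn))

        def run_next(self):
            if not self._heap:
                return None
            deadline_s, _, fn = heapq.heappop(self._heap)
            if time.monotonic() > deadline_s:
                print("warning: deadline already missed before execution")
            return fn()

    q = DeadlineQueue()
    now = time.monotonic()
    q.submit(lambda: "thumbnail done", deadline_s=now + 0.5)
    q.submit(lambda: "transcode done", deadline_s=now + 5.0)
    print(q.run_next())   # the tighter-deadline invocation runs first
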
Recent advances in computer vision and neural networks have made it possible for more surveillance videos to be automatically searched and analyzed by algorithms rather than humans. This has happened in parallel with advances in edge computing, where videos are analyzed over hierarchical clusters that contain edge devices close to the video source. However, the current video analysis pipeline has several disadvantages when dealing with such advances. For example, video encoders have long been designed to please human viewers and remain agnostic of the downstream analysis task (e.g., object detection). Moreover, most video analytics systems use a 2-tier architecture, where the encoded video is sent to either a remote cloud or a private edge server, but the two are not leveraged together efficiently. In response to these advances, we present SIEVE, a 3-tier video analytics system that reduces the latency and increases the throughput of analytics over video streams. In SIEVE, we present a novel technique to detect objects in compressed video streams. We refer to this technique as semantic video encoding because it allows video encoders to be aware of the semantics of the downstream task (e.g., object detection). Our results show that by leveraging semantic video encoding, we achieve close to 100% object detection accuracy while decompressing only 3.5% of the video frames, which results in a more than 100x speedup compared to classical approaches that decompress every video frame.
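
The claim that only 3.5% of frames need to be decompressed rests on deciding, from cheap compressed-domain signals, which frames are worth fully decoding. The sketch below illustrates that selection step with a stand-in per-frame motion score; a real system such as SIEVE would derive its signal from the bitstream itself, and the 0.3 threshold and keyframe interval here are arbitrary assumptions.

    # A toy version of the frame-selection step implied above: use a cheap
    # compressed-domain signal (here, a stand-in per-frame motion score) to decide
    # which frames deserve full decoding and object detection. The scores, the
    # 0.3 threshold, and the keyframe interval are illustrative assumptions.
    def frames_to_decode(motion_scores, threshold=0.3, keyframe_every=30):
        """Return indices of frames to decompress: periodic keyframes plus
        frames whose compressed-domain motion suggests something changed."""
        selected = []
        for i, score in enumerate(motion_scores):
            if i % keyframe_every == 0 or score >= threshold:
                selected.append(i)
        return selected

    scores = [0.05] * 100
    scores[42] = 0.9   # a burst of motion around frame 42
    picked = frames_to_decode(scores)
    print(f"decode {len(picked)} of {len(scores)} frames: {picked}")
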
Swarms of autonomous devices are increasing in ubiquity and size. There are two main trains of thought for controlling devices in such swarms: centralized and distributed control. Centralized platforms achieve higher output quality but result in high network traffic and limited scalability, while decentralized systems are more scalable but less sophisticated. In this work we present HiveMind, a centralized coordination control platform for IoT swarms that is both scalable and performant. HiveMind leverages a centralized cluster for all resource-intensive computation, deferring lightweight and time-critical operations, such as obstacle avoidance, to the edge devices to reduce network traffic. HiveMind employs an event-driven serverless framework to run tasks on the cluster, guarantees fault tolerance both in the edge devices and the serverless functions, and handles straggler tasks and underperforming devices. We evaluate HiveMind on a swarm of 16 programmable drones in two scenarios: searching for given items and counting unique people in an area. We show that HiveMind achieves better performance and battery efficiency compared to fully centralized and fully decentralized platforms, while also handling load imbalances and failures gracefully and allowing edge devices to leverage the cluster to collectively improve their output quality.
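
The placement rule described above, keep lightweight time-critical work such as obstacle avoidance on the device and send heavy computation to the cluster, can be expressed as a small decision function. The thresholds, task names, and cost figures below are illustrative assumptions, not HiveMind's policy.

    # A toy version of the placement rule described above: keep latency-critical or
    # lightweight tasks on the edge device and offload heavy ones to the cluster.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        deadline_ms: float     # how soon a result is needed
        compute_gflops: float  # rough compute demand

    def place(task, rtt_ms=40.0, edge_budget_gflops=2.0):
        """Return 'edge' if a round trip would blow the deadline or the task is
        light enough for the device; otherwise offload to the cluster."""
        if task.deadline_ms < 2 * rtt_ms:
            return "edge"
        if task.compute_gflops <= edge_budget_gflops:
            return "edge"
        return "cluster"

    for t in [Task("obstacle_avoidance", deadline_ms=20, compute_gflops=0.1),
              Task("person_counting", deadline_ms=500, compute_gflops=30.0)]:
        print(t.name, "->", place(t))
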
Today, wearable internet-of-things (wIoT) devices continuously flood cloud data centers with data at an enormous rate. This increases the demand to deploy an edge infrastructure for computing, intelligence, and storage close to the users. The emerging paradigm of fog computing could play an important role in making wIoT more efficient and affordable. Fog computing is known as "the cloud on the ground." This paper presents an end-to-end architecture that performs data conditioning and intelligent filtering for generating smart analytics from wearable data. In wIoT, wearable sensor devices serve on one end while the cloud backend offers services on the other end. We developed a prototype of a smart fog gateway (a middle layer) using Intel Edison and Raspberry Pi. We discussed the role of the smart fog gateway in orchestrating the process of data conditioning, intelligent filtering, smart analytics, and selective transfer to the cloud for long-term storage and temporal variability monitoring. We benchmarked the performance of the developed prototypes on real-world data from smart e-textile gloves. Results demonstrated the usability and potential of the proposed architecture for converting real-world data into useful analytics while making use of knowledge-based models. In this way, the smart fog gateway enhances the end-to-end interaction between wearables (sensor devices) and the cloud.
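
The selective-transfer role of the smart fog gateway can be sketched as a small filtering and summarization step: condition the raw readings, upload a compact summary, and forward only out-of-range events immediately. The field names and thresholds below are illustrative and are not taken from the paper's prototype.

    # A toy version of the gateway-side filtering described above: drop invalid
    # samples, upload a small summary instead of the raw stream, and flag readings
    # that should be sent to the cloud right away. Thresholds are illustrative.
    import statistics

    def condition(readings, low=50, high=120):
        """Drop obviously invalid samples (simple data conditioning)."""
        return [r for r in readings if low <= r <= high]

    def summarize_and_flag(readings, alert_high=100):
        """Return a compact summary plus any samples worth sending immediately."""
        clean = condition(readings)
        summary = {
            "count": len(clean),
            "mean": round(statistics.mean(clean), 1) if clean else None,
            "max": max(clean) if clean else None,
        }
        alerts = [r for r in clean if r >= alert_high]
        return summary, alerts

    window = [72, 75, 300, 74, 104, 76]   # one window of heart-rate samples
    summary, alerts = summarize_and_flag(window)
    print(summary)   # upload this small record instead of the raw stream
    print(alerts)    # forward immediately if non-empty
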
