
Avalon: Building an Operating System for Robotcenter

Published by: Yuan Xu
Publication date: 2018
Research field: Informatics Engineering
Paper language: English

This paper envisions a scenario in which hundreds of heterogeneous robots form a robotcenter that can be shared by multiple users and used like a single powerful robot to perform complex tasks. However, current multi-robot systems are either unable to manage heterogeneous robots or unable to support multiple concurrent users. Inspired by the design of modern datacenter OSes, we propose Avalon, a robot operating system with a two-level scheduling scheme of the kind widely adopted in datacenters for Internet services and cloud computing. Specifically, Avalon integrates three important features: (1) instead of allocating a whole robot, Avalon classifies fine-grained robot resources into three categories to distinguish which fine-grained resources can be shared by multi-robot frameworks simultaneously; (2) Avalon adopts a location-based resource allocation policy to substantially reduce scheduling overhead; (3) Avalon enables robots to offload computation-intensive tasks to the cloud. We have implemented and evaluated Avalon on robots in both simulated environments and the real world.
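
To make the resource model concrete, the sketch below (Python, purely illustrative) shows one way fine-grained robot resources could be classified into exclusive, shareable, and cloud-offloadable categories, and how a location-based policy might grant them by scanning the robots nearest to a task first. The category names, data structures, and distance heuristic are assumptions made for this example; they are not Avalon's actual interfaces.

# Illustrative sketch only: category names and the distance heuristic are
# assumptions, not Avalon's real API.
from dataclasses import dataclass, field
from enum import Enum
import math

class ResourceClass(Enum):
    EXCLUSIVE = 1      # e.g., wheels or arms: one framework at a time (assumed)
    SHAREABLE = 2      # e.g., camera or lidar streams: many frameworks at once (assumed)
    OFFLOADABLE = 3    # computation that may be pushed to the cloud (assumed)

@dataclass
class Resource:
    name: str
    rclass: ResourceClass
    in_use_by: set = field(default_factory=set)

@dataclass
class Robot:
    robot_id: str
    location: tuple     # (x, y) position used by the locality policy
    resources: list

def can_grant(res: Resource, framework: str) -> bool:
    # Shareable resources may be held by several frameworks at once;
    # exclusive (and offloadable) ones by at most one.
    return res.rclass == ResourceClass.SHAREABLE or not (res.in_use_by - {framework})

def allocate(robots, framework, needed, task_location):
    # Location-based policy: visit robots nearest to the task first, so the
    # scheduler can stop early instead of ranking the whole fleet.
    pending = set(needed)
    grant = []
    for robot in sorted(robots, key=lambda r: math.dist(r.location, task_location)):
        for res in robot.resources:
            if res.name in pending and can_grant(res, framework):
                res.in_use_by.add(framework)
                grant.append((robot.robot_id, res.name))
                pending.discard(res.name)
        if not pending:
            return grant
    return None  # request cannot be satisfied with the current fleet state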


Read also

Rapid growth of datacenter (DC) scale, urgency of cost control, increasing workload diversity, and huge software investment protection place unprecedented demands on the operating system (OS) efficiency, scalability, performance isolation, and backward-compatibility. The traditional OSes are not built to work with deep-hierarchy software stacks, large numbers of cores, tail latency guarantee, and increasingly rich variety of applications seen in modern DCs, and thus they struggle to meet the demands of such workloads. This paper presents XOS, an application-defined OS for modern DC servers. Our design moves resource management out of the OS kernel, supports customizable kernel subsystems in user space, and enables elastic partitioning of hardware resources. Specifically, XOS leverages modern hardware support for virtualization to move resource management functionality out of the conventional kernel and into user space, which lets applications achieve near bare-metal performance. We implement XOS on top of Linux to provide backward compatibility. XOS speeds up a set of DC workloads by up to 1.6X over our baseline Linux on a 24-core server, and outperforms the state-of-the-art Dune by up to 3.3X in terms of virtual memory management. In addition, XOS demonstrates good scalability and strong performance isolation.
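
As a loose illustration of the application-defined idea, not XOS's real interface, the toy sketch below keeps an elastic partition table in user space: each application grows or shrinks its core and memory share out of a free pool without a kernel-side policy decision. All class and method names here are assumptions made for the example.

# Toy user-space partition table; names and policy are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Partition:
    app: str
    cores: set = field(default_factory=set)
    mem_gb: int = 0

class UserSpacePartitioner:
    def __init__(self, total_cores: int, total_mem_gb: int):
        self.free_cores = set(range(total_cores))
        self.free_mem = total_mem_gb
        self.partitions = {}

    def grow(self, app: str, cores: int = 0, mem_gb: int = 0) -> Partition:
        # Hand out cores/memory from the free pool, elastically and per application.
        p = self.partitions.setdefault(app, Partition(app))
        take = set(sorted(self.free_cores)[:cores])
        p.cores |= take
        self.free_cores -= take
        granted = min(mem_gb, self.free_mem)
        p.mem_gb += granted
        self.free_mem -= granted
        return p

    def shrink(self, app: str, cores: int = 0, mem_gb: int = 0) -> Partition:
        # Return resources to the free pool when the application's load drops.
        p = self.partitions[app]
        give = set(sorted(p.cores)[:cores])
        p.cores -= give
        self.free_cores |= give
        back = min(mem_gb, p.mem_gb)
        p.mem_gb -= back
        self.free_mem += back
        return p
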
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of D0 collaborators poses further serious difficulties for optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in D0 by developing a grid in the D0 Southern Analysis Region (D0SAR), D0SAR-Grid, using all available resources within it and a home-grown local task manager, McFarm. We will present the architecture in which the D0SAR-Grid is implemented, the use of technology and the functionality of the grid, and the experience from operating the grid in simulation, reprocessing and data analyses for a currently running HEP experiment.
We present an implementation of SOTER, a run-time assurance framework for building safe distributed mobile robotic (DMR) systems, on top of the Robot Operating System (ROS). The safety of DMR systems cannot always be guaranteed at design time, especially when complex, off-the-shelf components are used that cannot be verified easily. SOTER addresses this by providing a language-based approach for run-time assurance for DMR systems. SOTER implements the reactive robotic software using the language P, a domain-specific language designed for implementing asynchronous event-driven systems, along with an integrated run-time assurance system that allows programmers to use unfortified components but still provide safety guarantees. We describe an implementation of SOTER for ROS and demonstrate its efficacy using a multi-robot surveillance case study, with multiple run-time assurance modules. Through rigorous simulation, we show that SOTER-enabled systems ensure safety, even when using unknown and untrusted components.
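
SOTER builds on the run-time assurance pattern in which a monitor switches from an untrusted controller to a certified safe fallback when a safety check fails. The minimal sketch below illustrates that pattern only; the function names and the distance-based check are assumptions, and SOTER itself expresses this logic in the P language rather than Python.

# Minimal illustration of the run-time assurance (simplex-style) pattern.
SAFE_DISTANCE = 1.0  # assumed clearance threshold, in meters

def advanced_controller(state):
    # Untrusted, possibly unverified component (e.g., a learned planner).
    return state["planned_cmd"]

def safe_controller(state):
    # Simple certified fallback: stop the robot.
    return {"v": 0.0, "w": 0.0}

def safe(state, cmd) -> bool:
    # Decision module: reject commands that would violate the clearance.
    return state["obstacle_distance"] > SAFE_DISTANCE or cmd["v"] == 0.0

def rta_step(state):
    cmd = advanced_controller(state)
    return cmd if safe(state, cmd) else safe_controller(state)

# Example: the obstacle is too close, so the monitor overrides the planner.
print(rta_step({"planned_cmd": {"v": 0.5, "w": 0.1}, "obstacle_distance": 0.4}))
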
Autonomous driving is now the promising future of transportation. As one basis for autonomous driving, the High Definition Map (HD map) provides high-precision descriptions of the environment, thereby enabling more accurate perception and localization while improving the efficiency of path planning. However, an extremely large amount of map data needs to be transmitted during driving, posing a great challenge for the real-time and safety requirements of autonomous driving. To this end, we first demonstrate how the existing data distribution mechanism can support HD map services. Furthermore, considering the constraints of vehicle power, vehicle speed, base station bandwidth, etc., we propose an HD map data distribution mechanism on top of Vehicle-to-Infrastructure (V2I) data transmission. Under this mechanism, the map provision task is allocated to selected RSU nodes, which transmit proportionate shares of the HD map data cooperatively; this map data loading scheme aims to provide in-time HD map data service with optimized in-vehicle energy consumption. Finally, we model the selection of RSU nodes as a partial knapsack problem and propose a greedy strategy-based data transmission algorithm. Experimental results confirm that, within limited energy consumption, the proposed mechanism can ensure HD map data service by coordinating multiple RSUs with the shortest data transmission time.
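
The greedy treatment of the partial (fractional) knapsack can be sketched briefly: split the HD map among candidate RSUs, preferring nodes with the highest transfer rate per unit of energy, until the map is fully assigned or the energy budget runs out. The field names and cost model below are illustrative assumptions rather than the paper's exact formulation.

# Hedged sketch of a greedy fractional-knapsack allocation of HD map data to RSUs.
def allocate_map_data(rsus, map_size_mb, energy_budget):
    # rsus: list of dicts with 'name', 'rate_mbps', 'energy_per_mb' (assumed fields).
    # Returns (assignment, unserved_mb).
    # Greedy order: most throughput per unit of energy first (value / weight).
    rsus = sorted(rsus, key=lambda r: r["rate_mbps"] / r["energy_per_mb"], reverse=True)
    assignment, remaining = [], map_size_mb
    for r in rsus:
        if remaining <= 0 or energy_budget <= 0:
            break
        # Fraction of the map this RSU can serve within the leftover energy budget.
        affordable_mb = energy_budget / r["energy_per_mb"]
        share = min(remaining, affordable_mb)
        assignment.append((r["name"], share))
        remaining -= share
        energy_budget -= share * r["energy_per_mb"]
    return assignment, remaining

# Example: a 200 MB map tile, a 50 J budget, and two candidate RSUs.
print(allocate_map_data(
    [{"name": "RSU-A", "rate_mbps": 100, "energy_per_mb": 0.2},
     {"name": "RSU-B", "rate_mbps": 60, "energy_per_mb": 0.5}],
    map_size_mb=200, energy_budget=50))
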
A joint project between the Canadian Astronomy Data Center of the National Research Council Canada and the Italian Istituto Nazionale di Astrofisica-Osservatorio Astronomico di Trieste (INAF-OATs), partially funded by the EGI-Engage H2020 European Project, is devoted to deploying an integrated infrastructure, based on the International Virtual Observatory Alliance (IVOA) standards, to access and exploit astronomical data. Currently, CADC-CANFAR provides scientists with an access, storage and computation facility, based on software libraries implementing a set of standards developed by the International Virtual Observatory Alliance (IVOA). The deployment of a twin infrastructure, basically built on the same open source software libraries, has been started at INAF-OATs. This new infrastructure now provides users with an Access Control Service and a Storage Service. The final goal of the ongoing project is to build a geographically distributed, integrated infrastructure providing complete interoperability, both in user access control and data sharing. This paper describes the target infrastructure, the main user requirements covered, the technical choices and the implemented solutions.