
LINTS^RT: A Learning-driven Testbed for Intelligent Scheduling in Embedded Systems

Posted by Yaswanth Yadlapalli
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Due to the increasing complexity of both workloads and hardware resources in state-of-the-art embedded systems, developing efficient real-time schedulers and the corresponding schedulability tests has become rather challenging. Although close-to-optimal schedulability performance can be achieved for simple system models in practice, adding even a small element of complexity to the problem context, such as non-preemption or resource heterogeneity, causes significant pessimism that no existing scheduling technique may be able to eliminate. In this paper, we present LINTS^RT, a learning-based testbed for intelligent real-time scheduling that has the potential to handle various complexities seen in practice. The design of LINTS^RT is fundamentally motivated by AlphaGo Zero, the learning system for playing the board game Go, and specifically addresses several critical challenges arising from the real-time scheduling context. We first present a clean design of LINTS^RT for the basic case, scheduling sporadic workloads on a homogeneous multiprocessor, and then demonstrate how to easily extend the framework to handle further complexities such as non-preemption and resource heterogeneity. Both application-level and OS-level implementations and evaluations demonstrate that LINTS^RT achieves significantly higher runtime schedulability under a range of settings than global EDF and RM, perhaps the two most commonly applied schedulers. To our knowledge, this work is the first attempt to design and implement an extensible learning-based testbed for autonomously making real-time scheduling decisions.
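To make the decision problem concrete, the following is a minimal Python sketch of the kind of per-slot choice such a scheduler faces, contrasting a global EDF baseline with a learned scoring policy. The `Job` fields, the `policy` interface, and the toy slack-based stand-in are illustrative assumptions, not LINTS^RT's actual design.

```python
# Hypothetical sketch of a learning-driven scheduling decision step.
# The state encoding and `policy` scoring function are assumptions.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Job:
    task_id: int
    remaining: float        # remaining execution time
    abs_deadline: float     # absolute deadline

def edf_pick(ready: List[Job]) -> Optional[Job]:
    """Baseline global EDF: run the ready job with the earliest deadline."""
    return min(ready, key=lambda j: j.abs_deadline, default=None)

def learned_pick(ready: List[Job],
                 policy: Callable[[Job], float]) -> Optional[Job]:
    """Pick the ready job the learned policy scores highest.

    `policy` stands in for a trained value/policy network (AlphaGo
    Zero-style); here we only assume it maps a job to a scalar score.
    """
    return max(ready, key=policy, default=None)

if __name__ == "__main__":
    ready = [Job(0, 2.0, 10.0), Job(1, 1.0, 8.0), Job(2, 3.0, 12.0)]
    # Toy stand-in policy: prefer jobs with the least slack.
    slack_policy = lambda j: -(j.abs_deadline - j.remaining)
    print("EDF picks task", edf_pick(ready).task_id)
    print("Learned policy picks task", learned_pick(ready, slack_policy).task_id)
```

Here the learned policy is merely a stand-in; in an AlphaGo Zero-style design it would be a trained network whose scores reflect long-horizon schedulability rather than a hand-written heuristic.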




Read also

Embedded computing systems today increasingly feature resource constraints and workload variability, which lead to uncertainty in resource availability. This raises great challenges for software design and programming in multitasking environments. In this paper, the emerging methodology of feedback scheduling is introduced to address these challenges. As a closed-loop approach to resource management, feedback scheduling promises to enhance the flexibility and resource efficiency of software by dynamically distributing available resources among concurrent tasks based on feedback about actual resource usage. With emphasis on the behavioral design of feedback schedulers, we describe a general framework for feedback scheduling in the context of real-time control applications. A simple yet illustrative feedback scheduling algorithm is given. From a programming perspective, we describe how to modify the implementation of control tasks to facilitate the application of feedback scheduling. An event-driven paradigm that combines time-triggered and event-triggered approaches is proposed for programming the feedback scheduler. Simulation results indicate that the proposed event-driven paradigm yields better performance than the time-triggered paradigm in dynamic environments where the workload varies irregularly and unpredictably.
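As a concrete illustration of the closed-loop idea, the sketch below rescales task periods at each sampling instant to drive processor utilization toward a setpoint. The proportional rescaling rule, the 0.8 setpoint, and the task set are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of a feedback scheduler: measure utilization each
# sampling period and rescale task periods toward a setpoint.
from dataclasses import dataclass
from typing import List

@dataclass
class ControlTask:
    name: str
    exec_time: float   # measured execution time (varies at run time)
    period: float      # current sampling period (the knob we adjust)

def utilization(tasks: List[ControlTask]) -> float:
    return sum(t.exec_time / t.period for t in tasks)

def feedback_schedule(tasks: List[ControlTask],
                      u_setpoint: float = 0.8) -> None:
    """One closed-loop iteration: rescale all periods by U / U_setpoint."""
    scale = utilization(tasks) / u_setpoint
    for t in tasks:
        t.period *= scale   # overload -> longer periods; slack -> shorter

if __name__ == "__main__":
    tasks = [ControlTask("ctrl1", 2.0, 10.0), ControlTask("ctrl2", 3.0, 15.0)]
    print("U before:", round(utilization(tasks), 3))
    feedback_schedule(tasks)
    print("U after :", round(utilization(tasks), 3))
```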
In this ongoing work, we are interested in energy-efficient multiprocessor systems where task durations are not known in advance but are known stochastically. More precisely, we consider global scheduling algorithms for frame-based multiprocessor stochastic DVFS (Dynamic Voltage and Frequency Scaling) systems. Moreover, we consider processors with a discrete set of available frequencies.
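One way to picture the frequency-selection step under these assumptions is sketched below: pick the lowest available discrete frequency whose high-quantile workload estimate still completes within the frame. The quantile-based workload model and all constants are invented for illustration.

```python
# Sketch: lowest discrete frequency that fits the frame, given a
# high-quantile estimate of the stochastic workload.
FREQS = [0.4, 0.6, 0.8, 1.0]   # available normalized frequencies

def pick_frequency(work_quantile_cycles: float, frame_len: float,
                   max_cycles_per_unit: float = 1.0) -> float:
    """Lowest discrete frequency that finishes the workload in the frame."""
    for f in FREQS:
        if work_quantile_cycles / (f * max_cycles_per_unit) <= frame_len:
            return f
    return FREQS[-1]   # saturate at the highest frequency

# Example: a 95th-percentile workload of 7 cycle-units in a 10-unit frame.
print(pick_frequency(7.0, 10.0))   # -> 0.8
```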
For microprocessors used in real-time embedded systems, minimizing power consumption is difficult due to the timing constraints. Dynamic voltage scaling (DVS) has been incorporated into modern microprocessors as a promising technique for exploring the trade-off between energy consumption and system performance. However, it remains a challenge to realize the potential of DVS in unpredictable environments where the system workload cannot be accurately known. Addressing system-level power-aware design for DVS-enabled embedded controllers, this paper establishes an analytical model of a DVS system that encompasses multiple real-time control tasks. From this model, a feedback-control-based approach to power management is developed that reduces dynamic power consumption while achieving good application performance. With this approach, the unpredictability and variability of task execution times can be addressed. Thanks to the use of feedback control theory, the DVS system achieves predictable performance, which is favorable for real-time applications. Extensive simulations are conducted to evaluate the performance of the proposed approach.
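The sketch below illustrates the general flavor of such a feedback-controlled DVS loop: a proportional-integral controller steers measured utilization toward a setpoint, and the continuous frequency command is quantized to the processor's discrete levels. The gains, levels, and utilization trace are illustrative assumptions, not the paper's controller.

```python
# Sketch of feedback-controlled DVS with a PI rule and discrete levels.
LEVELS = [0.25, 0.5, 0.75, 1.0]   # assumed discrete frequency levels

class PIDvs:
    def __init__(self, kp: float = 0.5, ki: float = 0.1,
                 setpoint: float = 0.9):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0
        self.freq = 1.0   # normalized continuous frequency command

    def update(self, measured_util: float) -> float:
        err = measured_util - self.setpoint   # positive -> overloaded
        self.integral += err
        self.freq = min(1.0, max(LEVELS[0],
                        self.freq + self.kp * err + self.ki * self.integral))
        # Quantize upward to the nearest available level, so the chosen
        # frequency never undershoots the controller's command.
        return next(l for l in LEVELS if l >= self.freq)

ctl = PIDvs()
for u in [0.95, 0.85, 0.70, 0.60]:   # measured utilization trace
    level = ctl.update(u)
    print(round(ctl.freq, 3), "->", level)
```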
Recent commercial hardware platforms for embedded real-time systems feature heterogeneous processing units and computing accelerators on the same System-on-Chip. When designing complex real-time applications for such architectures, the designer must make a number of difficult choices: on which processor should a certain task be implemented? Should a component be implemented in parallel or sequentially? These choices can greatly impact feasibility, as differences in the processors' internal architectures affect task execution times and preemption costs. To help the designer explore the wide space of design choices and tune the scheduling parameters, in this paper we propose a novel real-time application model, called C-DAG, specifically conceived for heterogeneous platforms. A C-DAG allows the designer to specify alternative implementations of the same component of an application for different processing engines, to be selected off-line, as well as conditional branches that model if-then-else statements, to be selected at run time. We also propose a schedulability analysis for the C-DAG model and a heuristic allocation algorithm that ensures all deadlines are respected. Our analysis takes into account the cost of preempting a task, which can be non-negligible on certain processors. We demonstrate the effectiveness of our approach on a large set of synthetic experiments, comparing against state-of-the-art algorithms in the literature.
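To make the model concrete, here is a minimal data-structure sketch in the spirit of a C-DAG: each node carries alternative per-engine implementations (fixed off-line) and may be a conditional node whose branch is chosen at run time. All names and the greedy allocation are illustrative assumptions, not the paper's formalism or heuristic.

```python
# Sketch of a C-DAG-like structure with per-engine alternatives and
# conditional branches; the allocation rule is a toy placeholder.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CDagNode:
    name: str
    # Alternative implementations: engine name -> worst-case execution time.
    alternatives: Dict[str, float]
    successors: List[str] = field(default_factory=list)
    conditional: bool = False   # if True, exactly one successor branch runs
    chosen_engine: str = ""     # fixed off-line by the allocation heuristic

def allocate_greedy(nodes: Dict[str, CDagNode]) -> None:
    """Toy off-line allocation: pick the engine with the smallest WCET."""
    for n in nodes.values():
        n.chosen_engine = min(n.alternatives, key=n.alternatives.get)

nodes = {
    "src":    CDagNode("src",    {"cpu": 2.0},             ["branch"]),
    "branch": CDagNode("branch", {"cpu": 0.5}, ["fast", "slow"],
                       conditional=True),
    "fast":   CDagNode("fast",   {"cpu": 4.0, "gpu": 1.5}, ["sink"]),
    "slow":   CDagNode("slow",   {"cpu": 6.0, "gpu": 2.5}, ["sink"]),
    "sink":   CDagNode("sink",   {"cpu": 1.0},             []),
}
allocate_greedy(nodes)
print({k: v.chosen_engine for k, v in nodes.items()})
```

A real allocation heuristic would of course weigh deadlines and preemption costs rather than WCET alone; the structure is what matters here.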
Future beyond fifth-generation (B5G) and sixth-generation (6G) mobile communications will shift from facilitating interpersonal communications to supporting the Internet of Everything (IoE), where intelligent communications with full integration of big data and artificial intelligence (AI) will play an important role in improving network efficiency and providing high-quality service. As a rapidly evolving paradigm, AI-empowered mobile communications demand large amounts of data acquired from real network environments for systematic test and verification. Hence, we build the world's first true-data testbed for 5G/B5G intelligent network (TTIN), which comprises 5G/B5G on-site experimental networks, data acquisition & data warehouse, and AI engine & network optimization. In the TTIN, true network data acquisition, storage, standardization, and analysis are available, enabling system-level online verification of B5G/6G-oriented key technologies and supporting data-driven network optimization through a closed-loop control mechanism. This paper elaborates on the system architecture and module design of TTIN. Detailed technical specifications and some of the established use cases are also showcased.
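Purely as an illustration of the closed-loop control mechanism mentioned above, the toy loop below measures, decides, and actuates in turn. Every function is an invented placeholder for a stage named in the abstract, not TTIN's actual interface.

```python
# Toy closed loop: acquire metrics -> AI engine proposes a config -> apply.
import random
from typing import Dict

def acquire_metrics() -> Dict[str, float]:
    """Placeholder for true network data acquisition."""
    return {"throughput": random.uniform(0.5, 1.0),
            "latency_ms": random.uniform(5.0, 20.0)}

def ai_engine(metrics: Dict[str, float]) -> Dict[str, float]:
    """Placeholder for the AI engine: nudge power up when latency is high."""
    return {"tx_power": 0.8 if metrics["latency_ms"] > 10.0 else 0.6}

def apply_config(config: Dict[str, float]) -> None:
    print("applying", config)

for _ in range(3):   # closed loop: measure -> decide -> actuate
    apply_config(ai_engine(acquire_metrics()))
```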