
Multiprocessor Global Scheduling on Frame-Based DVFS Systems

Published by Vandy Berten
Publication date: 2008
Research field: Informatics Engineering
Paper language: English





In this ongoing work, we are interested in multiprocessor energy-efficient systems, where task durations are not known in advance but are known stochastically. More precisely, we consider global scheduling algorithms for frame-based multiprocessor stochastic DVFS (Dynamic Voltage and Frequency Scaling) systems. Moreover, we consider processors with a discrete set of available frequencies.
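As a rough illustration of the setting only (not the scheduling algorithm proposed in the paper), the sketch below picks, for each task released in a frame, the lowest discrete frequency that still fits the remaining expected work into the remaining time before the frame deadline. It simplifies the multiprocessor setting to a single processor, and the frequency set, frame length, and expected cycle demands are hypothetical.

```python
# Hedged sketch: greedy per-task frequency selection in a frame-based DVFS
# system with stochastic (expected) workloads. Single-processor simplification
# of the multiprocessor setting; purely illustrative.

FREQUENCIES = [0.4, 0.6, 0.8, 1.0]   # hypothetical normalized discrete frequencies
FRAME_LENGTH = 100.0                 # hypothetical frame deadline (time units)

def pick_frequency(expected_cycles_left, time_left):
    """Pick the lowest frequency whose speed still fits the expected
    remaining work into the remaining time of the frame."""
    for f in sorted(FREQUENCIES):
        if expected_cycles_left / f <= time_left:
            return f
    return max(FREQUENCIES)          # fall back to the highest frequency

if __name__ == "__main__":
    # hypothetical expected cycle demands of the tasks in one frame
    tasks = [("t1", 20.0), ("t2", 35.0), ("t3", 10.0)]
    time_left = FRAME_LENGTH
    remaining = sum(c for _, c in tasks)
    for name, cycles in tasks:
        f = pick_frequency(remaining, time_left)
        duration = cycles / f        # actual duration would be stochastic
        print(f"{name}: run at f={f}, expected duration {duration:.1f}")
        time_left -= duration
        remaining -= cycles
```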




Read also

When integrating hard, soft and non-real-time tasks in general purpose operating systems, it is necessary to provide temporal isolation so that the timing properties of one task do not depend on the behaviour of the others. However, strict budget enforcement can lead to inefficient use of the computational resources in the presence of tasks with variable workload. Many resource reclaiming algorithms have been proposed in the literature for single processor scheduling, but not enough work exists for global scheduling in multiprocessor systems. In this report, we propose two reclaiming algorithms for multiprocessor global scheduling and we prove their correctness.
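To make the idea of reclaiming concrete, here is a minimal hypothetical sketch (not the algorithms proposed in the report): a server-based task that finishes a job early donates its leftover budget to a shared pool, from which other servers, on any processor, may top up their own budgets. All names and numbers are made up; the report's actual reclaiming rules and correctness conditions are more subtle.

```python
# Hedged sketch of budget reclaiming under global scheduling: each task runs
# inside a server with a per-job budget; unused budget is donated to a shared
# pool and handed out to whichever server asks next. Purely illustrative.

class Server:
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget          # budget granted for the current job

reclaim_pool = 0.0                    # spare capacity donated by early completions

def job_finished(server, actual_exec_time):
    """Donate whatever part of the budget the job did not consume."""
    global reclaim_pool
    leftover = max(0.0, server.budget - actual_exec_time)
    reclaim_pool += leftover

def request_extra_budget(server, amount):
    """Grant extra budget from the pool, capped by what is available."""
    global reclaim_pool
    granted = min(amount, reclaim_pool)
    reclaim_pool -= granted
    server.budget += granted
    return granted

a, b = Server("A", budget=10.0), Server("B", budget=8.0)
job_finished(a, actual_exec_time=6.0)      # A used only 6 of its 10 units
print(request_extra_budget(b, 3.0))        # B reclaims up to 3 spare units -> 3.0
```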
Due to the increasing complexity seen in both workloads and hardware resources in state-of-the-art embedded systems, developing efficient real-time schedulers and the corresponding schedulability tests becomes rather challenging. Although close to optimal schedulability performance can be achieved for supporting simple system models in practice, adding any small complexity element into the problem context such as non-preemption or resource heterogeneity would cause significant pessimism, which may not be eliminated by any existing scheduling technique. In this paper, we present LINTS^RT, a learning-based testbed for intelligent real-time scheduling, which has the potential to handle various complexities seen in practice. The design of LINTS^RT is fundamentally motivated by AlphaGo Zero for playing the board game Go, and specifically addresses several critical challenges due to the real-time scheduling context. We first present a clean design of LINTS^RT for supporting the basic case: scheduling sporadic workloads on a homogeneous multiprocessor, and then demonstrate how to easily extend the framework to handle further complexities such as non-preemption and resource heterogeneity. Both application and OS-level implementation and evaluation demonstrate that LINTS^RT is able to achieve significantly higher runtime schedulability under different settings compared to perhaps the most commonly applied schedulers, global EDF and RM. To our knowledge, this work is the first attempt to design and implement an extensible learning-based testbed for autonomously making real-time scheduling decisions.
This paper presents improved approximation algorithms for the problem of multiprocessor scheduling under uncertainty, or SUU, in which the execution of each job may fail probabilistically. This problem is motivated by the increasing use of distributed computing to handle large, computationally intensive tasks. In the SUU problem we are given n unit-length jobs and m machines, a directed acyclic graph G of precedence constraints among jobs, and unrelated failure probabilities q_{ij} for each job j when executed on machine i for a single timestep. Our goal is to find a schedule that minimizes the expected makespan, which is the expected time at which all jobs complete. Lin and Rajaraman gave the first approximations for this NP-hard problem for the special cases of independent jobs, precedence constraints forming disjoint chains, and precedence constraints forming trees. In this paper, we present asymptotically better approximation algorithms. In particular, we give an O(loglog min(m,n))-approximation for independent jobs (improving on the previously best O(log n)-approximation). We also give an O(log(n+m) loglog min(m,n))-approximation algorithm for precedence constraints that form disjoint chains (improving on the previously best O(log(n)log(m)log(n+m)/loglog(n+m))-approximation by a (log n/loglog n)^2 factor when n = poly(m)). Our algorithm for precedence constraints forming chains can also be used as a component for precedence constraints forming trees, yielding a similar improvement over the previously best algorithms for trees.
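For intuition about the objective, the following sketch estimates the expected makespan by Monte Carlo simulation for the simplest case of independent unit-length jobs under a fixed job-to-machine assignment, where machine i completes job j in a timestep with probability 1 - q_{ij} and retries until success. The probabilities and assignment are hypothetical, and the paper's algorithms construct far better schedules (possibly running a job on several machines), so this only illustrates what "expected makespan" measures.

```python
import random

# Hedged sketch: Monte Carlo estimate of the expected makespan for independent
# unit-length jobs under a fixed, hypothetical job-to-machine assignment.
# Each timestep a machine attempts its next unfinished job, which fails with
# probability q[i][j]. Illustrative only; not the paper's algorithms.

def simulate_makespan(assignment, q, rng):
    """assignment[i] is the ordered list of jobs given to machine i."""
    finish_time = 0
    for i, jobs in enumerate(assignment):
        t = 0
        for j in jobs:
            # geometric number of attempts until job j succeeds on machine i
            while True:
                t += 1
                if rng.random() > q[i][j]:
                    break
        finish_time = max(finish_time, t)
    return finish_time

q = [[0.2, 0.5, 0.9],       # failure probability of each job on machine 0
     [0.7, 0.1, 0.3]]       # ... and on machine 1
assignment = [[0, 2], [1]]  # machine 0 runs jobs 0 and 2, machine 1 runs job 1

rng = random.Random(0)
runs = [simulate_makespan(assignment, q, rng) for _ in range(10000)]
print("estimated expected makespan:", sum(runs) / len(runs))
```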
Embedded computing systems today increasingly feature resource constraints and workload variability, which lead to uncertainty in resource availability. This raises great challenges to software design and programming in multitasking environments. In this paper, the emerging methodology of feedback scheduling is introduced to address these challenges. As a closed-loop approach to resource management, feedback scheduling promises to enhance the flexibility and resource efficiency of various software programs through dynamically distributing available resources among concurrent tasks based on feedback information about the actual usage of the resources. With emphasis on the behavioral design of feedback schedulers, we describe a general framework of feedback scheduling in the context of real-time control applications. A simple yet illustrative feedback scheduling algorithm is given. From a programming perspective, we describe how to modify the implementation of control tasks to facilitate the application of feedback scheduling. An event-driven paradigm that combines time-triggered and event-triggered approaches is proposed for programming of the feedback scheduler. Simulation results argue that the proposed event-driven paradigm yields better performance than the time-triggered paradigm in dynamic environments where the workload varies irregularly and unpredictably.
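The closed-loop idea can be pictured with a tiny hypothetical feedback scheduler: measure the utilization each control task actually consumed over a sampling window and rescale task periods so that total utilization tracks a setpoint. The task set, setpoint, and rescaling rule below are illustrative assumptions, not the algorithm given in the paper.

```python
# Hedged sketch of a feedback scheduler for control tasks: it measures the
# utilization each task actually consumed and rescales all periods so that the
# total requested utilization tracks a setpoint. Illustrative only.

UTIL_SETPOINT = 0.7          # hypothetical target CPU utilization

class ControlTask:
    def __init__(self, name, period, measured_exec_time):
        self.name = name
        self.period = period
        self.measured_exec_time = measured_exec_time   # from runtime feedback

def feedback_schedule(tasks):
    """One invocation of the feedback scheduler (e.g., at each sampling instant)."""
    current_util = sum(t.measured_exec_time / t.period for t in tasks)
    if current_util <= 0:
        return
    scale = current_util / UTIL_SETPOINT
    for t in tasks:
        t.period *= scale     # stretch periods when overloaded, shrink when idle

tasks = [ControlTask("loop1", period=10.0, measured_exec_time=4.0),
         ControlTask("loop2", period=20.0, measured_exec_time=8.0)]
feedback_schedule(tasks)     # utilization 0.8 is scaled back to the 0.7 setpoint
for t in tasks:
    print(t.name, round(t.period, 2))
```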
Recent commercial hardware platforms for embedded real-time systems feature heterogeneous processing units and computing accelerators on the same System-on-Chip. When designing complex real-time applications for such architectures, the designer needs to make a number of difficult choices: on which processor should a certain task be implemented? Should a component be implemented in parallel or sequentially? These choices may have a great impact on feasibility, as differences in the processors' internal architectures affect task execution times and preemption costs. To help the designer explore the wide space of design choices and tune the scheduling parameters, in this paper we propose a novel real-time application model, called C-DAG, specifically conceived for heterogeneous platforms. A C-DAG allows the designer to specify alternative implementations of the same component of an application for different processing engines, to be selected off-line, as well as conditional branches modelling if-then-else statements, to be selected at run-time. We also propose a schedulability analysis for the C-DAG model and a heuristic allocation algorithm so that all deadlines are respected. Our analysis takes into account the cost of preempting a task, which can be non-negligible on certain processors. We demonstrate the effectiveness of our approach on a large set of synthetic experiments by comparing with state-of-the-art algorithms in the literature.
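Purely as a data-structure illustration (the actual C-DAG model is defined formally in the paper), a node of such a graph could record one candidate implementation per processing engine, fixed by the off-line allocation, plus optional conditional successors selected at run-time. All field names and numbers below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hedged sketch of what a C-DAG node might look like in code: alternative
# WCETs per engine (selected off-line) and optional conditional branches
# (selected at run-time). Field names and values are hypothetical.

@dataclass
class CDagNode:
    name: str
    wcet_per_engine: Dict[str, float]            # e.g. {"CPU": 5.0, "GPU": 2.0}
    successors: List["CDagNode"] = field(default_factory=list)
    conditional_successors: Optional[List[List["CDagNode"]]] = None  # if-then-else branches
    assigned_engine: Optional[str] = None         # fixed by the off-line allocation

    def assign(self, engine: str) -> None:
        if engine not in self.wcet_per_engine:
            raise ValueError(f"{self.name} has no implementation for {engine}")
        self.assigned_engine = engine

# hypothetical two-node chain with a GPU alternative for the second task
src = CDagNode("preprocess", {"CPU": 5.0})
flt = CDagNode("filter", {"CPU": 12.0, "GPU": 3.0})
src.successors.append(flt)
flt.assign("GPU")
print(flt.assigned_engine, flt.wcet_per_engine[flt.assigned_engine])
```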
