
An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems

Posted by: Luigi Palopoli
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





We present a methodology for computing the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique that reduces the computation of this probability to that of the steady-state probability of an infinite-state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution in which different accuracy/computation-time trade-offs can be obtained by operating on the granularity of the model. More importantly, we offer a closed-form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in a real-time application of practical interest. When this bound is used to optimise the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time.
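To make the modelling idea concrete, here is a minimal numeric sketch in the spirit of the abstract: a periodic task with random execution time, served by a reservation that supplies a fixed budget per period, is modelled as a truncated backlog Markov chain whose steady state yields the deadline-miss probability. The execution-time distribution, the bulk-budget simplification, and the truncation bound are illustrative assumptions, not the paper's actual, finer-grained model.

```python
import numpy as np

# Hypothetical parameters (not from the paper): a periodic task with
# random execution time C, scheduled by a reservation that supplies
# B units of budget per task period; the deadline equals the period.
C_vals = np.array([2, 3, 4, 5, 6])              # possible execution times
C_prob = np.array([0.2, 0.3, 0.3, 0.15, 0.05])  # their probabilities
B = 4                                           # budget per period
MAX_BACKLOG = 200                               # truncation of the infinite chain

# Transition rule: next backlog = max(0, backlog + C - B).
P = np.zeros((MAX_BACKLOG + 1, MAX_BACKLOG + 1))
for b in range(MAX_BACKLOG + 1):
    for c, p in zip(C_vals, C_prob):
        nb = min(max(0, b + c - B), MAX_BACKLOG)
        P[b, nb] += p

# Steady-state distribution by power iteration.
pi = np.ones(MAX_BACKLOG + 1) / (MAX_BACKLOG + 1)
for _ in range(10_000):
    pi = pi @ P
pi /= pi.sum()

# A job misses its deadline when its pending work exceeds the budget
# the reservation can supply before the deadline: backlog + C > B.
miss = sum(pi[b] * C_prob[C_vals + b > B].sum() for b in range(MAX_BACKLOG + 1))
print(f"steady-state deadline miss probability ≈ {miss:.4f}")
```

Refining the discretisation of C and enlarging the truncation bound trades computation time for accuracy, which mirrors the granularity knob the abstract mentions.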


Read also

The design space of networked embedded systems is very large, posing challenges to the optimisation of such platforms when it comes to supporting applications with real-time guarantees. Recent research has shown that a number of inter-related optimisation problems have a critical influence over the schedulability of a system, i.e. whether all its application components can execute and communicate by their respective deadlines. Examples of such optimisation problems include task allocation and scheduling, communication routing and arbitration, memory allocation, and voltage and frequency scaling. In this paper, we advocate the use of evolutionary approaches to address such optimisation problems, aiming to evolve individuals of increased fitness over multiple generations of potential solutions. We refer to plentiful evidence that existing real-time schedulability tests can be used effectively to guide evolutionary optimisation, either by themselves or in combination with other metrics such as energy dissipation or hardware overheads. We then push that concept one step further and consider the possibility of using evolutionary techniques to evolve the schedulability tests themselves, aiming to support the verification and optimisation of systems which are too complex for state-of-the-art (manual) derivation of schedulability tests.
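As an illustration of using a schedulability test as an evolutionary fitness function, the following toy genetic algorithm evolves task-to-core allocations scored by the Liu-Layland utilisation bound for rate-monotonic scheduling. The task set, GA settings, and single-objective fitness are hypothetical simplifications of the approach the abstract advocates.

```python
import random

# Hypothetical task set: (WCET, period) pairs; fitness counts the cores
# whose allocated tasks pass the Liu-Layland utilisation bound for RM.
TASKS = [(1, 4), (2, 6), (1, 5), (3, 10), (2, 8), (1, 3)]
CORES = 2

def schedulable(core_tasks):
    n = len(core_tasks)
    if n == 0:
        return True
    u = sum(c / t for c, t in core_tasks)
    return u <= n * (2 ** (1 / n) - 1)   # Liu-Layland sufficient bound

def fitness(alloc):
    # alloc[i] = core assigned to task i; reward each schedulable core.
    return sum(
        schedulable([TASKS[i] for i in range(len(TASKS)) if alloc[i] == k])
        for k in range(CORES)
    )

def evolve(pop_size=30, generations=100, mutation=0.2):
    pop = [[random.randrange(CORES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TASKS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mutation:     # random reassignment
                child[random.randrange(len(TASKS))] = random.randrange(CORES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("allocation:", best, "schedulable cores:", fitness(best))
```

Richer fitness functions combining schedulability with energy or hardware-cost metrics, as the abstract suggests, slot in by replacing `fitness` with a weighted aggregate.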
We develop a data-driven approach to the computation of a-posteriori feasibility certificates for the solution sets of variational inequalities affected by uncertainty. Specifically, we focus on instances of variational inequalities with a deterministic mapping and an uncertain feasibility set, and represent uncertainty by means of scenarios. Building upon recent advances in the scenario approach literature, we quantify the robustness properties of the entire set of solutions of a variational inequality, with the feasibility set constructed using the scenario approach, against a new unseen realization of the uncertainty. Our results extend existing results, which typically assume that the solution set is a singleton and require certain non-degeneracy properties, and thereby offer probabilistic feasibility guarantees to any feasible solution. We show that assessing the violation probability of an entire set of solutions, rather than of a singleton, requires enumeration of the support constraints that shape this set. Additionally, we propose a general procedure to enumerate the support constraints that does not require a closed-form description of the solution set, which is unlikely to be available. We show that robust game theory problems can be modelled via uncertain variational inequalities, and illustrate our theoretical results through extensive numerical simulations on a case study involving an electric vehicle charging coordination problem.
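For context, here is a small sketch of the kind of scenario-approach calculation this line of work builds on: given N sampled scenarios and an observed count k of support constraints, bisection on a binomial tail yields a violation level ε valid with confidence 1 − β. This is a classical Campi-Garatti-style bound evaluated at the observed k; the paper's certificates for entire solution sets are derived differently, so treat this purely as an illustration.

```python
from math import comb

def scenario_epsilon(N, k, beta, tol=1e-9):
    """Smallest eps with sum_{i=0}^{k} C(N,i) eps^i (1-eps)^(N-i) <= beta.

    N: number of sampled scenarios; k: observed number of support
    constraints; beta: confidence parameter. An illustrative use of the
    classical binomial-tail scenario bound, not the paper's certificate.
    """
    def tail(eps):
        return sum(comb(N, i) * eps**i * (1 - eps)**(N - i)
                   for i in range(k + 1))

    lo, hi = 0.0, 1.0
    while hi - lo > tol:          # tail(eps) is decreasing in eps
        mid = (lo + hi) / 2
        if tail(mid) > beta:
            lo = mid
        else:
            hi = mid
    return hi

# Example: 500 scenarios, 4 support constraints, confidence 1 - 1e-6.
print(f"violation level eps ≈ {scenario_epsilon(500, 4, 1e-6):.4f}")
```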
Degradation analysis is used to analyze the useful lifetimes of systems, their failure rates, and various other system parameters such as mean time to failure (MTTF), mean time between failures (MTBF), and system failure rate (SFR). In many systems, certain parallel paths of execution that have greater chances of success are preferred over others. We therefore introduce the concept of probabilistic parallel choice, using binary and $n$-ary probabilistic choice operators to describe the selection of parallel paths. These operators represent the complete system (described as a series-parallel system) in terms of the probabilities of selecting parallel paths and their relevant parameters. Our approach allows us to derive new and generalized formulae for system parameters like MTTF, MTBF, and SFR. We use a generalized exponential distribution, allowing distinct installation times for individual components, and use this model to derive expressions for such system parameters.
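A minimal Monte Carlo sketch of a series-parallel system with a binary probabilistic choice, under plain exponential lifetimes (a special case of the generalised exponential model, with zero installation delay). The system structure and failure rates are hypothetical; the paper derives such quantities in closed form rather than by simulation.

```python
import random

# Hypothetical series-parallel system: stage 1 is a single component;
# stage 2 is a binary probabilistic choice between two parallel paths,
# path A taken with probability p, path B otherwise.
RATE_S1, RATE_A, RATE_B = 0.01, 0.02, 0.05   # failure rates (1/hours)
P_CHOOSE_A = 0.7

def lifetime():
    t1 = random.expovariate(RATE_S1)
    path_rate = RATE_A if random.random() < P_CHOOSE_A else RATE_B
    t2 = random.expovariate(path_rate)
    return min(t1, t2)        # series composition: first failure ends it

N = 200_000
mttf = sum(lifetime() for _ in range(N)) / N
# Analytical check under these assumptions:
#   MTTF = p/(l1+lA) + (1-p)/(l1+lB)
exact = (P_CHOOSE_A / (RATE_S1 + RATE_A)
         + (1 - P_CHOOSE_A) / (RATE_S1 + RATE_B))
print(f"Monte Carlo MTTF ≈ {mttf:.1f} h (analytical {exact:.1f} h)")
```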
In the past few years, an increasing number of businesses have come to be driven by big data analytics, such as Amazon recommendations and Google advertisements. On the back-end side, these businesses are powered by big data processing platforms that quickly extract information and make decisions. Running on top of a computing cluster, those platforms use scheduling algorithms to allocate resources. An efficient scheduler is crucial to system performance, given limited resources (e.g. CPU and memory) and a large number of user demands. However, besides requests from clients and the current status of the system, the scheduler has limited knowledge about the execution length of running jobs and the resource demands of incoming jobs, which makes assigning resources a challenging task. If most of the resources are occupied by a long-running job, other jobs have to keep waiting until it releases them. This paper presents a new scheduling strategy, named DRESS, that particularly aims to optimize the allocation among jobs with various demands. Specifically, it classifies jobs into two categories based on their requests, reserves a portion of resources for each category, and dynamically adjusts the reserved ratio by monitoring pending requests and estimating the release patterns of running jobs. The results demonstrate that DRESS significantly reduces the completion time for one category, by up to 76.1% in our experiments, while maintaining stable overall system performance.
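A hypothetical sketch of the kind of adjustment loop the abstract describes: two job categories, a reserved ratio that drifts toward the observed share of pending demand, and bounds that keep both categories served. The class name, update rule, and bounds are illustrative, not DRESS's actual algorithm, which also estimates the release patterns of running jobs.

```python
# Illustrative DRESS-style two-category reservation: the reserved ratio
# drifts toward the share of pending demand from short jobs, clamped
# away from 0 and 1 so neither category is starved.
class ReservationScheduler:
    def __init__(self, total_slots=100, ratio=0.5, step=0.05):
        self.total = total_slots
        self.ratio = ratio        # fraction reserved for short jobs
        self.step = step          # adjustment speed

    def adjust(self, pending_short, pending_long):
        demand = pending_short + pending_long
        if demand == 0:
            return
        target = pending_short / demand
        # Move the reserved ratio a small step toward observed demand.
        self.ratio += self.step * (target - self.ratio)
        self.ratio = min(0.9, max(0.1, self.ratio))

    def quotas(self):
        short = round(self.total * self.ratio)
        return short, self.total - short

sched = ReservationScheduler()
for short_q, long_q in [(30, 10), (50, 5), (5, 40)]:
    sched.adjust(short_q, long_q)
    print(sched.quotas())
```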
NaNet is an FPGA-based PCIe x8 Gen2 NIC supporting 1/10 GbE links and the custom 34 Gbps APElink channel. The design has GPUDirect RDMA capabilities and features a network stack protocol offloading module, making it suitable for building low-latency, real-time GPU-based computing systems. We provide a detailed description of the NaNet modular hardware architecture. Benchmarks for latency and bandwidth of the GbE and APElink channels are presented, followed by a performance analysis on the case study of the GPU-based low-level trigger for the RICH detector in the NA62 CERN experiment, using either the NaNet GbE or APElink channel. Finally, we give an outline of future project activities.