
Deep Neural Network Approach to Estimate Early Worst-Case Execution Time

Published by: Vikash Kumar
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Vikash Kumar





Estimating the Worst-Case Execution Time (WCET) is of utmost importance for developing cyber-physical and safety-critical systems. The system scheduler uses the estimated WCET to schedule each task of these systems, and failure may lead to catastrophic events. It is thus imperative to build provably reliable systems. WCET only becomes available in the last stage of system development, when the hardware is available and the application code has been compiled for it. Different methodologies measure the WCET, but none of them gives early insight into it, which is crucial for system development. If the system designers overestimate WCET in the early stage, it leads to an over-qualified system, which increases the cost of the final product; if they underestimate it, it leads to financial loss, as the system will not perform as expected. This paper estimates early WCET using a Deep Neural Network as an approximate predictor model for the hardware architecture and compiler. The model predicts the WCET from the source code, without compiling it or running it on the hardware architecture. Our WCET prediction model is created using the PyTorch framework. The resulting WCET estimates are too erroneous to be used as a safe upper bound on the WCET. However, obtaining these results in the early stages of system development is an essential prerequisite for system dimensioning and for configuring the hardware setup.
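The abstract does not detail the network architecture or the source-code features fed to it; the following is only a minimal sketch of the general idea, assuming a small feed-forward PyTorch regressor over hypothetical static code features (feature names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's setup).

```python
# Minimal, hypothetical sketch of a WCET regressor in PyTorch.
# Features and architecture are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class WCETRegressor(nn.Module):
    """Feed-forward network mapping source-code features to a WCET estimate."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted WCET (e.g. in CPU cycles)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical static features per task, e.g.
# [loop-bound estimate, instruction count, branch count, call depth, ...]
features = torch.rand(256, 8)          # 256 synthetic samples, 8 features
targets = torch.rand(256, 1) * 1e6     # synthetic "measured" WCETs

model = WCETRegressor(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
```

Conceptually, such a model would be trained on programs whose WCET is already known for a given hardware/compiler pair and then queried for new source code before any compilation takes place.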


Read also

Energy-efficient real-time task scheduling has attracted a lot of attention in the past decade. Most of the time, deterministic execution lengths for tasks were considered, but this model fits the reality less and less, especially with the increasing number of multimedia applications. That is why a lot of research has started to consider stochastic models, where execution times are only known stochastically. However, authors assume fairly precise knowledge of the properties of the system, especially regarding the worst-case execution time (or worst-case execution cycles, WCEC). In this work, we relax this hypothesis and assume that the WCEC can vary. We propose several methods to react to such a situation and give many simulation results showing that, with a small effort, we can provide very good results, keeping a low deadline-miss rate as well as an energy consumption similar to that of clairvoyant algorithms.
Forecasting high-dimensional time series plays a crucial role in many applications such as demand forecasting and financial prediction. Modern datasets can have millions of correlated time series that evolve together, i.e., they are extremely high-dimensional (one dimension for each individual time series). There is a need to exploit global patterns and couple them with local calibration for better prediction. However, most recent deep learning approaches in the literature are one-dimensional, i.e., even though they are trained on the whole dataset, during prediction the future forecast for a single dimension mainly depends on past values from that same dimension. In this paper, we seek to correct this deficiency and propose DeepGLO, a deep forecasting model that thinks globally and acts locally. In particular, DeepGLO is a hybrid model that combines a global matrix factorization model regularized by a temporal convolution network with another temporal network that can capture local properties of each time series and associated covariates. Our model can be trained effectively on high-dimensional but diverse time series, where different time series can have vastly different scales, without a priori normalization or rescaling. Empirical results demonstrate that DeepGLO can outperform state-of-the-art approaches; for example, we see more than 25% improvement in WAPE over other methods on a public dataset that contains more than 100K-dimensional time series.
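As a rough illustration of the "global plus local" structure described above (a simplified sketch on synthetic data, not the authors' DeepGLO implementation; in particular, the temporal-convolution regularization of the factorization is omitted here):

```python
# Global low-rank factorization of the series matrix, whose reconstruction
# is fed as a covariate to a small local causal temporal conv network.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_series, T, rank, window = 500, 200, 16, 24
Y = torch.randn(n_series, T)                 # toy high-dimensional series matrix

# Global part: low-rank factorization Y ~ Fmat @ Xmat shared across all series.
Fmat = nn.Parameter(torch.randn(n_series, rank) * 0.1)
Xmat = nn.Parameter(torch.randn(rank, T) * 0.1)
opt_g = torch.optim.Adam([Fmat, Xmat], lr=1e-2)
for _ in range(300):
    opt_g.zero_grad()
    ((Fmat @ Xmat - Y) ** 2).mean().backward()
    opt_g.step()
global_recon = (Fmat @ Xmat).detach()        # global signal used as a covariate

# Local part: causal temporal conv net over [own past values, global covariate].
class CausalConv(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.k, self.conv = k, nn.Conv1d(c_in, c_out, kernel_size=k)
    def forward(self, x):                    # left-pad so only the past is seen
        return self.conv(F.pad(x, (self.k - 1, 0)))

local = nn.Sequential(CausalConv(2, 32), nn.ReLU(), CausalConv(32, 1))
opt_l = torch.optim.Adam(local.parameters(), lr=1e-3)
inputs = torch.stack([Y[:, :window], global_recon[:, :window]], dim=1)  # (N, 2, W)
target = Y[:, 1:window + 1].unsqueeze(1)     # one-step-ahead values to predict
for _ in range(300):
    opt_l.zero_grad()
    ((local(inputs) - target) ** 2).mean().backward()
    opt_l.step()
```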
This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks have the potential to substantially reduce the computing time of OPF solutions. However, the lack of guarantees for their worst-case performance remains a major barrier for their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs to obtain worst-case guarantees for neural network predictions related to (i) maximum constraint violations, (ii) maximum distances between predicted and optimal decision variables, and (iii) maximum sub-optimality. We demonstrate our methods on a range of PGLib-OPF networks up to 300 buses. We show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods. More importantly, we show that the worst-case predictions appear at the boundaries of the training input domain, and we demonstrate how we can systematically reduce the worst-case guarantees by training on a larger input domain than the domain they are evaluated on.
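The core technique (encoding a trained ReLU network inside a mixed-integer linear program and maximizing an error quantity over the input domain) can be sketched on a toy network. The weights, bounds, and the PuLP/CBC solver below are illustrative assumptions, not the paper's OPF models; the big-M constant must genuinely bound the pre-activations for the encoding to be exact.

```python
# Sketch of the MILP idea on a tiny fixed-weight ReLU network: encode each
# ReLU with big-M constraints and a binary variable, then maximize the
# network output over the input box to obtain a provable worst case.
import pulp

W1 = [[1.0, -2.0], [0.5, 1.5]]   # hidden-layer weights (toy values)
b1 = [0.1, -0.2]
W2 = [2.0, -1.0]                  # output-layer weights
b2 = 0.05
M = 100.0                         # big-M constant bounding pre-activations

prob = pulp.LpProblem("worst_case_output", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=0.0, upBound=1.0) for i in range(2)]
z = [pulp.LpVariable(f"z{j}") for j in range(2)]                 # pre-activation
h = [pulp.LpVariable(f"h{j}", lowBound=0.0) for j in range(2)]   # post-ReLU
a = [pulp.LpVariable(f"a{j}", cat="Binary") for j in range(2)]   # ReLU phase

for j in range(2):
    prob += z[j] == pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
    # h = max(z, 0) via big-M: h >= z, h <= z + M(1-a), h <= M*a, h >= 0
    prob += h[j] >= z[j]
    prob += h[j] <= z[j] + M * (1 - a[j])
    prob += h[j] <= M * a[j]

prob += pulp.lpSum(W2[j] * h[j] for j in range(2)) + b2   # objective: network output
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case output:", pulp.value(prob.objective))
print("attained at input:", [pulp.value(v) for v in x])
```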
Neural networks are discrete entities: subdivided into discrete layers and parametrized by weights which are iteratively optimized via difference equations. Recent work proposes networks with layer outputs which are no longer quantized but are solutions of an ordinary differential equation (ODE); however, these networks are still optimized via discrete methods (e.g. gradient descent). In this paper, we explore a different direction: namely, we propose a novel framework for learning in which the parameters themselves are solutions of ODEs. By viewing the optimization process as the evolution of a port-Hamiltonian system, we can ensure convergence to a minimum of the objective function. Numerical experiments have been performed to show the validity and effectiveness of the proposed methods.
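A small numerical sketch of this idea (not the authors' framework): treat the parameters q as the state of a dissipative Hamiltonian system with H(q, p) = 0.5*||p||^2 + L(q) and dynamics dq/dt = p, dp/dt = -grad L(q) - gamma*p, integrated with explicit Euler on a toy objective.

```python
# Dissipative (port-)Hamiltonian parameter flow on a toy quadratic objective.
import numpy as np

def L(q):                         # toy objective: a simple quadratic bowl
    return 0.5 * np.sum(q ** 2)

def grad_L(q):
    return q

gamma, dt = 0.5, 0.01             # dissipation coefficient and Euler step size
q = np.array([2.0, -1.5])         # "parameters" (initial guess)
p = np.zeros_like(q)              # conjugate momenta

for step in range(2000):
    q = q + dt * p
    p = p + dt * (-grad_L(q) - gamma * p)

print("final parameters:", q)     # approaches the minimizer (0, 0)
print("final objective:", L(q))
```

With gamma > 0 the total energy H is non-increasing along trajectories (dH/dt = -gamma*||p||^2), which is the mechanism behind the convergence guarantee the abstract alludes to.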
Fotios Gioulekas, 2018
Existing model-based processes for embedded real-time systems support the analysis of various non-functional properties, most notably schedulability, through model checking, simulation, or other means. The analysis results are then used to modify the system's design so that the expected properties are satisfied. A rigorous model-based design flow differs in that it aims at a system implementation derived from high-level models by applying a sequence of semantics-preserving transformations. Properties established at any design step are preserved throughout the subsequent steps, including the executable implementation. We introduce such a design flow using a process network model of computation for application design at a high level, which combines streaming and reactive control processing with task parallelism. The schedulability of the so-called FPPNs (Fixed Priority Process Networks) is well studied and various solutions have been presented. This article focuses on the design flow's steps for deriving executable implementations on the BIP (Behavior - Interaction - Priority) runtime environment. FPPNs are designed using the TASTE toolset, a convenient architecture description interface. In this way, the developers do not explicitly program low-level real-time OS services, and the schedulability properties are guaranteed throughout the design steps by construction. The approach has been validated on the design of a real spacecraft on-board application that has been scheduled for execution on an industrial multicore platform.
