
Control-theoretic dynamic voltage scaling for embedded controllers

Posted by: Feng Xia
Publication date: 2008
Research field: Informatics Engineering
Paper language: English





For microprocessors used in real-time embedded systems, minimizing power consumption is difficult due to the timing constraints. Dynamic voltage scaling (DVS) has been incorporated into modern microprocessors as a promising technique for exploring the trade-off between energy consumption and system performance. However, it remains a challenge to realize the potential of DVS in unpredictable environments where the system workload cannot be accurately known. Addressing system-level power-aware design for DVS-enabled embedded controllers, this paper establishes an analytical model for the DVS system that encompasses multiple real-time control tasks. From this model, a feedback control based approach to power management is developed to reduce dynamic power consumption while achieving good application performance. With this approach, the unpredictability and variability of task execution times can be effectively handled. Thanks to the use of feedback control theory, predictable performance of the DVS system is achieved, which is favorable to real-time applications. Extensive simulations are conducted to evaluate the performance of the proposed approach.
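As an illustration of the feedback-control DVS idea, the following minimal sketch regulates measured CPU utilization toward a setpoint with a simple PI loop and quantizes the result to discrete frequency levels. The gains, frequency levels, setpoint, and workload model are all assumptions made for illustration, not the paper's actual controller.

```python
# Sketch: a PI feedback loop that scales CPU frequency so measured
# utilization tracks a setpoint, despite an unpredictable workload.
# All constants below are invented for illustration.
import random

FREQ_LEVELS = [0.4, 0.6, 0.8, 1.0]   # normalized discrete frequencies (assumed)
U_SET = 0.7                          # utilization setpoint (assumed)
KP, KI = 0.5, 0.1                    # PI gains (assumed)

def measured_utilization(freq):
    """Fake sensor: execution demand varies unpredictably each period."""
    demand = random.uniform(0.3, 0.6)       # unknown workload
    return min(demand / freq, 1.0)

def quantize(f):
    """Map a continuous frequency request to the nearest discrete level."""
    return min(FREQ_LEVELS, key=lambda lvl: abs(lvl - f))

freq, integral = 1.0, 0.0
for period in range(20):
    u = measured_utilization(freq)
    error = u - U_SET                # positive error -> CPU running too slow
    integral += error
    freq = quantize(freq * (1 + KP * error + KI * integral))
    print(f"period {period:2d}: u={u:.2f} -> freq={freq:.1f}")
```

In this toy loop, sustained over-utilization drives the frequency up and sustained slack drives it down, which is the qualitative behavior a control-theoretic DVS manager aims for.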




Read also

163 - Feng Xia, Liping Liu, Longhua Ma (2008)
The goal of this work is to minimize the energy dissipation of embedded controllers without jeopardizing the quality of control (QoC). Taking advantage of the dynamic voltage scaling (DVS) technology, this paper develops a performance-aware power management scheme for embedded controllers with processors that allow multiple voltage levels. The periods of control tasks are adapted online with respect to the current QoC, thus facilitating additional energy reduction over standard DVS. To avoid the waste of CPU resources as a result of the discrete voltage levels, a resource reclaiming mechanism is employed to maximize the CPU utilization and also to improve the QoC. Simulations are conducted to evaluate the performance of the proposed scheme. Compared with the optimal standard DVS scheme, the proposed scheme is shown to be able to save considerably more energy while maintaining comparable QoC.
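The period-adaptation and resource-reclaiming ideas can be sketched as below. The QoC threshold, period bounds, step factor, and voltage levels are invented placeholders; the paper's actual rules may differ.

```python
# Sketch of online period adaptation plus slack reclaiming for DVS,
# under a simple assumed rule: relax task periods while QoC stays above
# a threshold, tighten them when it degrades.

def adapt_period(period, qoc, qoc_target=0.9,
                 p_min=0.01, p_max=0.10, step=1.1):
    """Return the next sampling period (seconds) for one control task."""
    if qoc >= qoc_target:
        return min(period * step, p_max)   # QoC is fine: slow down, save energy
    return max(period / step, p_min)       # QoC degraded: speed up

def reclaim(required_speed, levels=(0.4, 0.6, 0.8, 1.0)):
    """Pick the lowest discrete level covering the demand; the gap between
    that level and the demand is slack that can be handed to other tasks
    instead of being wasted."""
    level = next((l for l in levels if l >= required_speed), levels[-1])
    return level, level - required_speed

p = adapt_period(0.02, qoc=0.95)   # -> 0.022: period relaxed under good QoC
level, slack = reclaim(0.55)       # -> (0.6, 0.05): slack to redistribute
```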
Due to the increasing complexity seen in both workloads and hardware resources in state-of-the-art embedded systems, developing efficient real-time schedulers and the corresponding schedulability tests becomes rather challenging. Although close to optimal schedulability performance can be achieved for supporting simple system models in practice, adding any small complexity element into the problem context such as non-preemption or resource heterogeneity would cause significant pessimism, which may not be eliminated by any existing scheduling technique. In this paper, we present LINTS^RT, a learning-based testbed for intelligent real-time scheduling, which has the potential to handle various complexities seen in practice. The design of LINTS^RT is fundamentally motivated by AlphaGo Zero for playing the board game Go, and specifically addresses several critical challenges due to the real-time scheduling context. We first present a clean design of LINTS^RT for supporting the basic case: scheduling sporadic workloads on a homogeneous multiprocessor, and then demonstrate how to easily extend the framework to handle further complexities such as non-preemption and resource heterogeneity. Both application and OS-level implementation and evaluation demonstrate that LINTS^RT is able to achieve significantly higher runtime schedulability under different settings compared to perhaps the most commonly applied schedulers, global EDF and RM. To our knowledge, this work is the first attempt to design and implement an extensible learning-based testbed for autonomously making real-time scheduling decisions.
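A minimal sketch of framing dispatching as a learned sequential decision, in the spirit of the AlphaGo-Zero-inspired design described above: a placeholder linear scorer stands in for a trained policy network, and the state features are assumptions, not those of LINTS^RT.

```python
# Sketch: real-time dispatching as a sequential decision problem.
# State = the ready jobs; action = which job to dispatch next; a learned
# policy scores each candidate. The linear "policy" here is a stand-in.
from dataclasses import dataclass

@dataclass
class Job:
    remaining: float   # remaining execution time
    deadline: float    # absolute deadline
    period: float      # task period

def features(job, now):
    """Toy per-job state features (assumed, for illustration)."""
    return [job.remaining, job.deadline - now, job.remaining / job.period]

def policy_score(feats, weights=(-0.2, -1.0, -0.5)):
    """Stand-in for a trained network: a fixed linear scorer."""
    return sum(w * f for w, f in zip(weights, feats))

def dispatch(ready, now):
    """Dispatch the job the placeholder policy prefers; with these weights
    it behaves EDF-like, since smaller slack yields a higher score."""
    return max(ready, key=lambda j: policy_score(features(j, now)))

jobs = [Job(2.0, 10.0, 20.0), Job(1.0, 6.0, 15.0)]
print(dispatch(jobs, now=0.0))   # picks the tighter-deadline job
```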
193 - Wei Lou, Lei Xun, Amin Sabet (2021)
Mobile and embedded platforms are increasingly required to efficiently execute computationally demanding DNNs across heterogeneous processing elements. At runtime, the hardware resources available to DNNs can vary considerably due to other concurrently running applications. The performance requirements of the applications could also change under different scenarios. To achieve the desired performance, dynamic DNNs have been proposed in which the number of channels/layers can be scaled in real time to meet different requirements under varying resource constraints. However, the training process of such dynamic DNNs can be costly, since platform-aware models of different deployment scenarios must be retrained to become dynamic. This paper proposes Dynamic-OFA, a novel dynamic DNN approach for state-of-the-art platform-aware NAS models (i.e. Once-for-all network (OFA)). Dynamic-OFA pre-samples a family of sub-networks from a static OFA backbone model, and contains a runtime manager to choose different sub-networks under different runtime environments. As such, Dynamic-OFA does not need the traditional dynamic DNN training pipeline. Compared to the state-of-the-art, our experimental results using ImageNet on a Jetson Xavier NX show that the approach is up to 3.5x (CPU), 2.4x (GPU) faster for similar ImageNet Top-1 accuracy, or 3.8% (CPU), 5.1% (GPU) higher accuracy at similar latency.
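The runtime-manager idea can be sketched as a lookup over pre-profiled sub-networks: pick the most accurate one that fits the current latency budget. The sub-network table and budget below are fabricated for illustration, not measurements from the paper.

```python
# Sketch of a Dynamic-OFA-style runtime manager over a family of
# pre-sampled sub-networks, each profiled offline for latency/accuracy.

SUBNETS = [
    # (name, latency_ms, top1_accuracy) -- assumed profiling results
    ("subnet-S", 12.0, 0.74),
    ("subnet-M", 21.0, 0.77),
    ("subnet-L", 35.0, 0.79),
]

def choose_subnet(latency_budget_ms):
    """Return the highest-accuracy sub-network meeting the budget,
    falling back to the fastest one if none fits."""
    feasible = [s for s in SUBNETS if s[1] <= latency_budget_ms]
    if not feasible:
        return min(SUBNETS, key=lambda s: s[1])
    return max(feasible, key=lambda s: s[2])

print(choose_subnet(25.0))   # -> ('subnet-M', 21.0, 0.77)
```

Because the sub-networks are pre-sampled from one static backbone, switching between them at runtime needs no retraining, which is the point of the approach.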
59 - Yuxin Ren, Vlad Nitu, Guyue Liu (2019)
In the future, computing will be immersed in the world around us -- from augmented reality to autonomous vehicles to the Internet of Things. Many of these smart devices will offer services that respond in real time to their physical surroundings, requiring complex processing with strict performance guarantees. Edge clouds promise a pervasive computational infrastructure a short network hop away from end devices, but today's operating systems are a poor fit to meet the goals of scalable isolation, dense multi-tenancy, and predictable performance required by these emerging applications. In this paper we present EdgeOS, a micro-kernel based operating system that meets these goals by blending recent advances in real-time systems and network function virtualization. EdgeOS introduces a Featherweight Process model that offers lightweight isolation and supports extreme scalability even under high churn. Our architecture provides efficient communication mechanisms, and low-overhead per-client isolation. To achieve high performance networking, EdgeOS employs kernel bypass paired with the isolation properties of Featherweight Processes. We have evaluated our EdgeOS prototype for running high scale network middleboxes using the Click software router and endpoint applications using memcached. EdgeOS reduces startup latency by 170X compared to Linux processes and over five orders of magnitude compared to containers, while providing three orders of magnitude latency improvement when running 300 to 1000 edge-cloud memcached instances on one server.
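A toy conceptual model of the Featherweight Process idea, assuming per-client contexts with private message queues that are cheap to create and destroy under churn; this Python sketch says nothing about the real micro-kernel implementation.

```python
# Conceptual sketch: each client gets a tiny isolated execution context
# with its own message queue, so a faulty handler only affects that
# client's context. Names and structure are invented for illustration.
from collections import deque

class FeatherweightProcess:
    def __init__(self, client_id, handler):
        self.client_id = client_id
        self.inbox = deque()     # per-client queue: no shared mutable state
        self.handler = handler   # the middlebox/endpoint function

    def deliver(self, packet):
        self.inbox.append(packet)

    def run_once(self):
        """Process one pending message for this client, if any."""
        if self.inbox:
            return self.handler(self.inbox.popleft())

# High-churn usage: creation is just cheap object allocation in this model.
fp = FeatherweightProcess("client-42", handler=lambda pkt: pkt.upper())
fp.deliver("get /index")
print(fp.run_once())   # -> 'GET /INDEX'
```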
Web application performance is heavily reliant on the hit rate of memory-based caches. Current DRAM-based web caches statically partition their memory across multiple applications sharing the cache. This causes underutilization of memory which negatively impacts cache hit rates. We present Memshare, a novel web memory cache that dynamically manages memory across applications. Memshare provides a resource sharing model that guarantees private memory to different applications while dynamically allocating the remaining shared memory to optimize overall hit rate. Today's high cost of DRAM storage and the availability of high-performance CPU and memory bandwidth make web caches memory-capacity bound. Memshare's log-structured design allows it to provide significantly higher hit rates and dynamically partition memory among applications at the expense of increased CPU and memory bandwidth consumption. In addition, Memshare allows applications to use their own eviction policy for their objects, independent of other applications. We implemented Memshare and ran it on a week-long trace from a commercial memcached provider. We demonstrate that Memshare increases the combined hit rate of the applications in the trace by 6.1% (from 84.7% hit rate to 90.8% hit rate) and reduces the total number of misses by 39.7% without affecting system throughput or latency. Even for single-tenant applications, Memshare increases the average hit rate of the current state-of-the-art memory cache by an additional 2.7% on our real-world trace.
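Memshare's sharing model can be sketched as a greedy allocation of the shared pool by marginal hit-rate gain on top of guaranteed private quotas. The gain estimates and decay factor below are invented placeholders; the real system derives such estimates from its log-structured cache.

```python
# Sketch: guarantee each application its private quota, then hand out the
# remaining shared memory 1 MB at a time to whichever application gains
# the most hit rate per extra megabyte (with assumed diminishing returns).

def partition(total_mb, apps):
    """apps: {name: {"private": MB, "gain": est. hit-rate gain per extra MB}}"""
    alloc = {name: spec["private"] for name, spec in apps.items()}
    shared = total_mb - sum(alloc.values())
    assert shared >= 0, "private quotas exceed total memory"
    for _ in range(int(shared)):
        best = max(apps, key=lambda n: apps[n]["gain"])
        alloc[best] += 1
        apps[best]["gain"] *= 0.99   # diminishing returns (assumed)
    return alloc

apps = {
    "web":  {"private": 100, "gain": 0.8},
    "feed": {"private": 200, "gain": 0.5},
}
print(partition(512, apps))   # private quotas plus greedily shared MBs
```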