
Carbon-Aware Computing for Datacenters

Published by: Ian Schneider
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The amount of CO$_2$ emitted per kilowatt-hour on an electricity grid varies by time of day and varies substantially by location due to the mix of generation types. Networked collections of warehouse-scale computers, sometimes called Hyperscale Computing, emit more carbon than needed if operated without regard to these variations in carbon intensity. This paper introduces Google's system for Carbon-Intelligent Compute Management, which actively minimizes electricity-based carbon footprint and power infrastructure costs by delaying temporally flexible workloads. The core component of the system is a suite of analytical pipelines used to gather the next day's carbon intensity forecasts, train day-ahead demand prediction models, and use risk-aware optimization to generate the next day's carbon-aware Virtual Capacity Curves (VCCs) for all datacenter clusters across Google's fleet. VCCs impose hourly limits on resources available to temporally flexible workloads while preserving overall daily capacity, enabling all such workloads to complete within a day. Data from operation show that VCCs effectively limit hourly capacity when the grid's energy supply mix is carbon intensive and delay the execution of temporally flexible workloads to greener times.
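To make the Virtual Capacity Curve idea concrete, the sketch below is a minimal, hypothetical illustration, not Google's production pipeline: given a day-ahead hourly carbon-intensity forecast, it shifts a fixed daily budget of flexible work toward the lowest-carbon hours while keeping the daily total unchanged. The function name, the greedy fill strategy, the constant per-hour spare capacity, and the numbers are all illustrative assumptions.

```python
# Illustrative sketch only -- not Google's production VCC pipeline.
# Assumes a 24-hour carbon-intensity forecast (gCO2/kWh) and a fixed
# amount of temporally flexible work (machine-hours) that must all
# finish within the day. Flexible capacity is shifted toward the
# greenest hours while the daily total is preserved.

def virtual_capacity_curve(carbon_forecast, flexible_total, hourly_max):
    """Return per-hour capacity limits for flexible workloads.

    carbon_forecast: list of 24 forecast carbon intensities (gCO2/kWh).
    flexible_total:  total flexible work to place within the day.
    hourly_max:      spare capacity available for flexible work each hour
                     (assumed constant here for simplicity).
    """
    # Visit hours from greenest to dirtiest and fill them first.
    hours = sorted(range(24), key=lambda h: carbon_forecast[h])
    limits = [0.0] * 24
    remaining = flexible_total
    for h in hours:
        take = min(hourly_max, remaining)
        limits[h] = take
        remaining -= take
        if remaining <= 0:
            break
    # Daily capacity is preserved: sum(limits) == flexible_total
    # (provided 24 * hourly_max >= flexible_total).
    return limits

# Example: a hypothetical grid that is cleanest overnight.
forecast = [300] * 8 + [450] * 8 + [380] * 8   # gCO2/kWh, made-up values
vcc = virtual_capacity_curve(forecast, flexible_total=100.0, hourly_max=20.0)
print(vcc)  # flexible work is concentrated in the lowest-carbon hours
```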




Read also

In Federated Learning (FL), a global statistical model is developed by encouraging mobile users to perform the model training on their local data and aggregating the resulting local model parameters in an iterative manner. However, due to the limited energy and computation capability of mobile devices, model-training performance is always at risk when the objective is to minimize local energy consumption. In this regard, Multi-access Edge Computing (MEC)-enabled FL addresses the tradeoff between model performance and the energy consumption of mobile devices by allowing users to offload a portion of their local dataset to an edge server for model training. Since the edge server has high computation capability, the time consumed by model training at the edge server is insignificant. However, the time consumed offloading datasets from mobile users to the edge server has a significant impact on the total time consumption. Thus, resource management in MEC-enabled FL is challenging, where the objective is to reduce the total time consumption while saving the energy consumption of the mobile devices. In this paper, we formulate an energy-aware resource management problem for MEC-enabled FL in which the model training loss and the total time consumption are jointly minimized, while considering the energy limitation of mobile devices. In addition, we recast the formulated problem as a Generalized Nash Equilibrium Problem (GNEP) to capture the coupling constraints between radio resource management and dataset offloading. We then analyze the impact of dataset offloading and computing resource allocation on the model training loss, time, and energy consumption.
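As a rough illustration of the kind of joint objective described in the abstract above (the paper's exact formulation may differ), a weighted trade-off between training loss and total time under per-device energy budgets can be written as

$$\min_{\mathbf{x},\,\mathbf{r}} \;\; \alpha\, L(\mathbf{x}) \;+\; (1-\alpha)\, T(\mathbf{x},\mathbf{r}) \quad \text{s.t.} \quad E_u(\mathbf{x},\mathbf{r}) \le E_u^{\max} \;\; \forall u,$$

where $\mathbf{x}$ collects the fraction of each user's dataset offloaded to the edge server, $\mathbf{r}$ the radio and computing resource allocations, $L$ the model training loss, $T$ the total time consumption, $E_u$ the energy consumed by device $u$, and $\alpha \in [0,1]$ a weighting factor; the shared resource constraints coupling the users' decisions are what motivate the GNEP reformulation.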
As a key technology in the 5G era, Mobile Edge Computing (MEC) has developed rapidly in recent years. MEC aims to reduce the service delay of mobile users while alleviating the processing pressure on the core network. MEC can be regarded as an extension of cloud computing on the user side: it deploys edge servers that bring computing resources closer to mobile users and provide more efficient interactions. However, due to users' dynamic mobility, the distance between the user and the edge server changes dynamically, which may cause fluctuations in Quality of Service (QoS). Therefore, when a mobile user moves in the MEC environment, certain approaches are needed to schedule services deployed on the edge server to ensure the user experience. In this paper, we model service scheduling in MEC scenarios and propose a delay-aware and mobility-aware service management approach based on concise probabilistic methods. This approach has low computational complexity and can effectively reduce service delay and migration costs. Furthermore, we conduct experiments using multiple realistic datasets and use iFogSim to evaluate the performance of the algorithm. The results show that our proposed approach can improve service delay by 8% to 20% and reduce migration cost by more than 75% compared with baselines during rush hours.
Currently, drones represent a promising technology for combating Coronavirus disease 2019 (COVID-19) by transporting goods and medical supplies to given target locations in quarantine areas experiencing an epidemic outbreak. Drone missions will increasingly rely on drone collaboration, which requires the drones to reduce communication complexity and be controlled in a decentralized fashion. Blockchain technology becomes essential in industrial applications because it provides decentralized data, accessibility, immutability, and irreversibility. Therefore, blockchain makes data public to all drones and enables drones to log information concerning world states, time, location, resources, delivery data, and each drone's relation to all neighboring drones. This paper introduces decentralized, independent multi-drones that accomplish the task collaboratively. Augmenting blockchain with a consensus algorithm can improve network partitioning and scalability in combating COVID-19. The multi-drone task is to combat COVID-19 via monitoring and detection, social distancing, sanitization, data analysis, delivering goods and medical supplies, and announcements, while avoiding collisions with one another. We discuss an End-to-End (E2E) delivery application combining blockchain and multi-drones to combat COVID-19 and future pandemics beyond it. Furthermore, the challenges and opportunities of our proposed framework are highlighted.
The development of Internet of Things (IoT) technology enables the rapid growth of connected smart devices and mobile applications. However, due to their constrained resources and limited battery capacity, there are bottlenecks when utilizing these smart devices. Mobile edge computing (MEC) offers an attractive paradigm to handle this challenge. In this work, we concentrate on the MEC application for IoT and address the energy-saving objective by offloading workloads between the cloud and the edge. In this regard, we first identify the energy-related challenges in MEC. Then we present a green-aware framework for MEC to address the energy-related challenges, and provide a generic model formulation for green MEC. We also discuss some state-of-the-art workload offloading approaches for achieving green IoT and compare them from comprehensive perspectives. Finally, some future research directions related to energy efficiency in MEC are given.
Digital in-memory computing improves the energy efficiency and throughput of data-intensive processes, which otherwise incur memory thrashing and repeated accesses to the same memory in a von Neumann architecture. Digital in-memory computing involves accessing multiple SRAM cells simultaneously, which may result in a bit flip when not timed critically. Therefore, we discuss the transient voltage characteristics of the bitlines during an SRAM compute. To improve packaging density and avoid MOSFET down-scaling issues, we use a 7 nm predictive PDK based on a finFET node. The finFET process has discrete fins and a lower supply voltage, which makes the design of in-memory compute SRAM difficult. In this paper, we design a 6T SRAM cell in the 7 nm finFET node and compare its SNMs with a UMC 28 nm node implementation. Further, we design and simulate the remaining SRAM peripherals and the in-memory computation for this advanced finFET node.