
ENOS: Energy-Aware Network Operator Search for Hybrid Digital and Compute-in-Memory DNN Accelerators

Added by Shamma Nasrin
Publication date: 2021
Language: English





This work proposes a novel Energy-Aware Network Operator Search (ENOS) approach to address the energy-accuracy trade-offs of a deep neural network (DNN) accelerator. In recent years, novel inference operators have been proposed to improve the computational efficiency of a DNN. Alongside these operators, corresponding novel computing modes have also been explored. However, simplifying DNN operators invariably comes at the cost of lower accuracy, especially on complex processing tasks. Our proposed ENOS framework allows an optimal layer-wise integration of inference operators and computing modes to achieve the desired balance of energy and accuracy. The search in ENOS is formulated as a continuous optimization problem, solvable using typical gradient descent methods, and is thereby scalable to larger DNNs with a minimal increase in training cost. We characterize ENOS under two settings. In the first setting, for digital accelerators, we discuss ENOS on multiply-accumulate (MAC) cores that can be reconfigured to different operators. ENOS training methods with single and bi-level optimization objectives are discussed and compared. We also discuss a sequential operator assignment strategy in ENOS that learns the assignment for only one layer in each training step, enabling greater flexibility in converging towards the optimal operator allocations. Furthermore, following Bayesian principles, a sampling-based variational mode of ENOS is also presented. ENOS is characterized on the popular DNNs ShuffleNet and SqueezeNet on CIFAR-10 and CIFAR-100.
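To make the continuous search formulation more concrete, below is a minimal, PyTorch-style sketch of one way such a layer-wise operator search can be set up: each layer mixes candidate operators through a softmax over learnable architecture parameters, and the training loss adds a differentiable, layer-summed energy estimate. This is not the authors' code; the candidate operator list, energy costs, and the penalty weight lam are illustrative assumptions.

# Minimal sketch of an energy-aware, differentiable operator search (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOperatorLayer(nn.Module):
    """One layer that mixes candidate inference operators, weighted by a
    softmax over learnable architecture parameters (continuous relaxation)."""
    def __init__(self, candidate_ops, op_energy_costs):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)            # e.g. exact MAC, low-precision MAC, ...
        self.register_buffer("energy", torch.tensor(op_energy_costs))
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)                    # soft operator assignment
        out = sum(wi * op(x) for wi, op in zip(w, self.ops))
        expected_energy = (w * self.energy).sum()           # differentiable energy estimate
        return out, expected_energy

def enos_style_loss(logits, targets, layer_energies, lam=0.01):
    # Single-level objective: task loss plus a weighted, layer-summed energy term.
    task = F.cross_entropy(logits, targets)
    energy = torch.stack(layer_energies).sum()
    return task + lam * energy

After training, each layer's operator would be taken as the argmax of its alpha parameters; the bi-level variant mentioned in the abstract would update alpha on validation batches while the operator weights are updated on training batches.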



Related research

Self-healing capability is one of the most critical factors for a resilient distribution system, which requires intelligent agents to automatically perform restorative actions online, including network reconfiguration and reactive power dispatch. These agents should be equipped with a predesigned decision policy to meet real-time requirements and handle highly complex $N-k$ scenarios. The randomness of disturbances hampers the application of exploration-dominant algorithms such as traditional reinforcement learning (RL), and the agent-training problem under $N-k$ scenarios has not been thoroughly solved. In this paper, we propose an imitation learning (IL) framework to train such policies, in which the agent interacts with an expert to learn its optimal policy, thereby significantly improving training efficiency compared with RL methods. To handle tie-line operations and reactive power dispatch simultaneously, we design a hybrid policy network for this discrete-continuous hybrid action space. We employ the 33-node system under $N-k$ disturbances to verify the proposed framework.
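A minimal sketch of what a hybrid policy network for such a discrete-continuous action space could look like is given below, assuming a shared encoder with one head producing tie-line switching logits and another producing continuous reactive power set-points. The layer sizes and head names are illustrative assumptions, not the paper's architecture.

# Minimal sketch of a hybrid discrete-continuous policy network (assumed architecture).
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, obs_dim, n_tie_line_actions, n_var_devices, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.discrete_head = nn.Linear(hidden, n_tie_line_actions)   # logits over switching actions
        self.continuous_head = nn.Linear(hidden, n_var_devices)      # reactive power set-points

    def forward(self, obs):
        z = self.encoder(obs)
        switch_logits = self.discrete_head(z)
        q_dispatch = torch.tanh(self.continuous_head(z))             # normalized to device limits
        return switch_logits, q_dispatch

Under imitation learning, such a network could be trained with a cross-entropy loss on the expert's switching actions plus a mean-squared-error loss on the expert's dispatch values.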
Deep neural network (DNN) accelerators have received considerable attention in recent years because of the energy they save compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further; however, it causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random bit errors in (quantized) DNN weights. This leads to high energy savings from both low-voltage operation and low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already quite an effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness, and precision: without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30% are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.
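As a rough illustration of training against random bit errors, the following sketch clips and quantizes weights to signed fixed point and flips each stored bit independently with probability p. The bit width, error rate, and clipping range are illustrative assumptions, not the paper's RandBET settings.

# Minimal sketch of random bit error injection into fixed-point quantized weights (assumed parameters).
import torch

def inject_random_bit_errors(weights, bits=8, p=0.001, clip=0.1):
    # Clip and quantize weights to signed fixed-point integers.
    w = weights.clamp(-clip, clip)
    scale = clip / (2 ** (bits - 1) - 1)
    q = torch.round(w / scale).to(torch.int32) & ((1 << bits) - 1)
    # Flip each of the `bits` stored bits independently with probability p.
    for b in range(bits):
        flips = (torch.rand_like(weights) < p).to(torch.int32) << b
        q = q ^ flips
    # Re-interpret as signed two's-complement integers and dequantize.
    q = torch.where(q >= 2 ** (bits - 1), q - 2 ** bits, q)
    return q.to(weights.dtype) * scale

In training, the perturbed weights would typically be used in the forward pass while gradients flow to the underlying full-precision weights via a straight-through estimator.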
The growing number of hydraulic fracturing (HF) jobs over the past two decades has produced a significant amount of measured data available for developing predictive models via machine learning (ML). In multistage fractured completions, post-fracturing production analysis reveals that different stages produce highly non-uniformly due to a combination of geomechanics and fracturing-design factors. Hence, there is significant room for improving current design practices. The workflow is essentially split into two stages. As the result of the first stage, the present paper summarizes the effort of creating a digital database of field data from several thousand multistage HF jobs on wells from roughly 20 different oilfields in Western Siberia, Russia. In terms of the number of points (fracturing jobs), the database is a rare case of a representative dataset of about 5000 data points. Each point in the database contains a vector of 92 input variables (reservoir, well, and frac design parameters) and a vector of production data characterized by 16 parameters, including the target, cumulative oil production. Data preparation has been done using various ML techniques: the problem of missing values in the database is solved with collaborative filtering for data imputation, and outliers are removed using visualisation of the cluster data structure by the t-SNE algorithm. The production forecast problem is solved via the CatBoost algorithm. The prediction capability of the model is measured with the coefficient of determination (R^2) and reaches 0.815. The inverse problem (selecting an optimum set of fracturing design parameters to maximize production) will be considered in the second part of the study, to be published in another paper, along with a recommendation system for advising DESC and production stimulation engineers on an optimized fracturing design.
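The production-forecast step could look roughly like the sketch below: a CatBoost regressor fitted on the design and reservoir feature vectors and scored with R^2. The file name, column names, and hyperparameters are illustrative assumptions; the database itself is not part of this listing.

# Minimal sketch of the CatBoost production-forecast step (assumed data layout and settings).
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
import pandas as pd

df = pd.read_csv("frac_jobs.csv")                       # hypothetical export of the feature database
X = df.drop(columns=["cumulative_oil_production"])      # input variables
y = df["cumulative_oil_production"]                     # target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = CatBoostRegressor(iterations=2000, learning_rate=0.03, depth=6, verbose=0)
model.fit(X_train, y_train, eval_set=(X_test, y_test), early_stopping_rounds=100)

print("R^2:", r2_score(y_test, model.predict(X_test)))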
The need for robust control laws is especially important in safety-critical applications. We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety. Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data. We identify sufficient conditions on the data such that feasibility of the optimization problem ensures correctness of the learned robust hybrid control barrier functions. Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
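One simplified way to pose such a data-driven barrier-function learning problem is sketched below: a function h(x) = theta^T phi(x) is fit so that it is positive on sampled safe states, negative on sampled unsafe states, and satisfies a discrete-time decrease condition with a robustness margin along sampled transitions. The feature map, margins, and discrete-time approximation are illustrative assumptions and do not reproduce the paper's robust hybrid formulation or its sufficient conditions.

# Minimal sketch of learning a barrier function from data as a convex program (assumed formulation).
import numpy as np
import cvxpy as cp

def phi(x):
    # Simple quadratic feature map for a 2-D state (assumption).
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, x1 * x2, x1, x2, 1.0])

def learn_cbf(safe_pts, unsafe_pts, transitions, dt, alpha=1.0,
              gamma_safe=0.1, gamma_unsafe=0.1, robust_margin=0.05):
    theta = cp.Variable(6)
    cons = []
    for x in safe_pts:                       # h must be positive on safe samples
        cons.append(phi(x) @ theta >= gamma_safe)
    for x in unsafe_pts:                     # h must be negative on unsafe samples
        cons.append(phi(x) @ theta <= -gamma_unsafe)
    for x, x_next in transitions:            # decrease condition with a robustness margin
        hdot = (phi(x_next) - phi(x)) @ theta / dt
        cons.append(hdot + alpha * (phi(x) @ theta) >= robust_margin)
    prob = cp.Problem(cp.Minimize(cp.norm(theta, 2)), cons)
    prob.solve()
    return theta.value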
Zhe Wang, Xinhang Li, Tianhao Wu (2021)
Federated Deep Learning (FDL) is helping to realize distributed machine learning in the Internet of Vehicles (IoV). However, FDL's global model requires multiple clients to upload learning-model parameters, which still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine-learning approach that unites edge computing and blockchain-based coordination without the need for a central coordinator. This paper proposes a Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework. IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, and then aggregates the global FDL model among different SL groups with a proposed credibility-weights prediction algorithm. Extensive experimental results demonstrate that, compared with the baseline frameworks, the proposed IoV-SFDL framework achieves a 16.72% reduction in edge-to-global communication overhead while improving model performance by about 5.02% with the same number of training iterations.
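The aggregation step could be sketched as a credibility-weighted average of the SL group models, as below. The normalization of supplied credibility scores is an illustrative assumption and stands in for the paper's credibility-weights prediction algorithm.

# Minimal sketch of credibility-weighted aggregation of SL group models (assumed weighting rule).
import torch

def aggregate_global_model(group_state_dicts, credibility_scores):
    """Weighted average of SL group model parameters into a global FDL model."""
    scores = torch.tensor(credibility_scores, dtype=torch.float32)
    weights = scores / scores.sum()                      # normalize credibility scores to sum to 1
    global_state = {}
    for key in group_state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in group_state_dicts])
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        global_state[key] = (w * stacked).sum(dim=0)
    return global_state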
