
Machine Learning Approach for Device-Circuit Co-Optimization of Stochastic-Memristive-Device-Based Boltzmann Machine

Added by Tong Wu
Publication date: 2019
Language: English





A Boltzmann machine whose effective temperature can be dynamically cooled provides a stochastic neural network realization of simulated annealing, which is an important metaheuristic for solving combinatorial or global optimization problems with broad applications in machine intelligence and operations research. However, the hardware realization of the Boltzmann stochastic element with cooling capability has never been achieved within an individual semiconductor device. Here we demonstrate a new memristive device concept based on two-dimensional material heterostructures that enables this critical stochastic element in a Boltzmann machine. The dynamic cooling effect in simulated annealing can be emulated in this multi-terminal memristive device through electrostatic bias with sigmoidal thresholding distributions. We also show that a machine-learning-based method is efficient for device-circuit co-design of the Boltzmann machine based on the stochastic memristor devices in simulated annealing. The experimental demonstrations of the tunable stochastic memristors combined with the machine-learning-based device-circuit co-optimization approach for stochastic-memristor-based neural-network circuits chart a pathway for the efficient hardware realization of stochastic neural networks with applications in a broad range of electronics and computing disciplines.
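
The stochastic element described in the abstract can be emulated in software as a binary unit that fires with a sigmoidal probability of its local field, scaled by an effective temperature that is gradually cooled. Below is a minimal Python sketch of that idea for a toy Boltzmann machine; the weight matrix, biases, and cooling schedule are illustrative assumptions, not the device-level implementation reported in the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_update(state, weights, bias, temperature, rng):
    # One Boltzmann-machine sweep: each binary unit turns on with a
    # sigmoidal probability of its local field, softened by the temperature.
    for i in range(len(state)):
        local_field = weights[i] @ state + bias[i]
        state[i] = 1 if rng.random() < sigmoid(local_field / temperature) else 0
    return state

def annealed_sampling(weights, bias, sweeps=2000, t_start=5.0, t_end=0.05, seed=0):
    # Simulated annealing: lower the effective temperature geometrically so
    # the network settles into a low-energy (near-optimal) configuration.
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=len(bias))
    cooling = (t_end / t_start) ** (1.0 / sweeps)
    t = t_start
    for _ in range(sweeps):
        state = gibbs_update(state, weights, bias, t, rng)
        t *= cooling
    return state

# Toy symmetric weights and biases (hypothetical values).
W = np.array([[0., -2.,  1.],
              [-2., 0.,  1.],
              [ 1.,  1., 0.]])
b = np.array([0.5, 0.5, -1.0])
print(annealed_sampling(W, b))
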



Related Research

This work presents a novel general compact model for 7 nm technology node devices such as FinFETs. Unlike previous conventional compact models, which rely on less accurate elements such as the one-dimensional Poisson equation applied to three-dimensional devices and analytical equations for short-channel, quantum, and other physical effects, the general compact model combines a few TCAD-calibrated compact models with statistical methods and thereby eliminates tedious physical derivations. The general compact model offers efficient parameter extraction, high accuracy, strong scaling capability, and excellent transferability. As a demonstration, two key FinFET design knobs and their multiple impacts on an RC-controlled ESD power clamp circuit are systematically evaluated with the newly proposed general compact model, covering device design, circuit performance optimization, and variation control. The performance of the ESD power clamp is improved substantially. The framework is also suitable for pathfinding research on 5 nm node gate-all-around devices, such as nanowire (NW) FETs, nanosheet (NSH) FETs, and beyond.
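
To illustrate the core idea of replacing closed-form derivations with a statistical fit to TCAD-calibrated data, the Python sketch below fits a simple polynomial surrogate to a synthetic I-V sweep. The data and the surrogate form are assumptions made for illustration; the paper's general compact model is considerably richer and is calibrated against actual TCAD results.

import numpy as np

# Hypothetical TCAD-calibrated I-V samples for one FinFET design corner
# (synthetic numbers; in practice these come from calibrated TCAD decks).
vgs = np.linspace(0.0, 0.8, 9)              # gate voltage sweep [V]
ids = 1e-6 * (np.exp(6 * vgs) - 1) + 1e-12  # stand-in drain current [A]

# Statistical surrogate: fit log(I_D) versus V_GS with a low-order polynomial,
# sidestepping analytical short-channel and quantum-effect derivations.
coeffs = np.polyfit(vgs, np.log(ids), deg=3)
surrogate = np.poly1d(coeffs)

def id_model(v):
    # Fast compact-model evaluation usable inside circuit-level sweeps.
    return np.exp(surrogate(v))

print(id_model(0.7))                        # predicted drain current at V_GS = 0.7 V
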
The quantum circuit layout problem is to map a quantum circuit onto a quantum computing device such that the constraints of the device are satisfied. The optimality of a layout method is expressed, in our case, by the depth of the resulting circuits. We introduce QXX, a novel search-based layout method that includes a configurable Gaussian function used to: (i) estimate the depth of the generated circuits; (ii) determine the circuit region that most influences the depth. We optimize the parameters of the QXX model using an improved version of random search (weighted random search). To speed up the parameter optimization, we train and deploy QXX-MLP, an MLP neural network that predicts the depth of the circuit layouts generated by QXX. We experimentally compare the two approaches (QXX and QXX-MLP) with the baseline, an exponential-time exhaustive search optimization. According to our results: 1) QXX is on par with state-of-the-art layout methods, and 2) the Gaussian function is a fast and accurate optimality estimator. We present empirical evidence for the feasibility of learning the layout method using approximation.
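
As a rough illustration of the two ingredients named above, the Python sketch below pairs a configurable Gaussian weighting function with a plain random-search loop over its parameters. The depth estimator is a placeholder and the parameter ranges are assumptions; this is not the QXX implementation, and the paper's weighted random search additionally biases the sampling.

import math
import random

def gaussian_weight(gate_index, center, width):
    # Configurable Gaussian: gates near `center` receive the largest weight,
    # i.e. they form the region treated as most influential on the depth.
    return math.exp(-((gate_index - center) ** 2) / (2.0 * width ** 2))

def estimate_depth(num_gates, center, width):
    # Placeholder estimator: a Gaussian-weighted gate count. In QXX the
    # estimate is tied to the layouts actually generated for the device.
    return sum(gaussian_weight(g, center, width) for g in range(num_gates))

def random_search(num_gates, trials=500, seed=7):
    # Plain random search over the Gaussian parameters (center, width),
    # keeping the setting that yields the smallest estimated depth.
    rng = random.Random(seed)
    best_params, best_depth = None, float("inf")
    for _ in range(trials):
        center = rng.uniform(0, num_gates)
        width = rng.uniform(1.0, num_gates)
        depth = estimate_depth(num_gates, center, width)
        if depth < best_depth:
            best_params, best_depth = (center, width), depth
    return best_params, best_depth

print(random_search(num_gates=40))
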
As the promise of molecular communication via diffusion for nano-scale communication grows, designing molecular schemes robust to the inevitable effects of molecular interference has become vitally important. We propose a novel CNN-based neural network architecture for a uniquely designed molecular multiple-input single-output (MISO) topology in order to alleviate the damaging effects of molecular interference. In this study, we compare the performance of the proposed network with a naive index modulation scheme and symbol-by-symbol maximum likelihood estimation with respect to bit error rate, and demonstrate that the proposed method yields better performance.
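
A minimal sketch of such a detector is shown below: a small one-dimensional convolutional network (written here in PyTorch) that maps a window of received molecule counts, one input channel per transmitter of the MISO topology, to a bit decision. The layer sizes, window length, and number of transmitters are illustrative assumptions, not the architecture used in the study.

import torch
import torch.nn as nn

class MolecularCNNDetector(nn.Module):
    # Maps (batch, n_tx, window) molecule-count windows to a single bit logit.
    def __init__(self, n_tx=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_tx, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # pool over the time window
        )
        self.classifier = nn.Linear(32, 1)  # logit for the transmitted bit

    def forward(self, x):
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = MolecularCNNDetector()
counts = torch.randn(8, 4, 16)              # dummy batch of received signals
logits = model(counts)
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8, 1)).float())
print(loss.item())
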
In this study, a SPICE model for negative-capacitance vertical-nanowire field-effect transistors (NC VNW-FETs), based on the BSIM-CMG model and the Landau-Khalatnikov (LK) equation, is presented. Because short gate lengths limit the design space, controllable and integrable structures for high-performance NC VNW-FETs have been lacking, and a new structure is proposed here for NC VNW-FETs at the sub-3 nm node. Moreover, to understand and improve NC VNW-FETs, the S-shaped polarization-voltage curve (S-curve) is divided into four regions and new design rules are proposed. Using the SPICE model, device-circuit co-optimization is carried out, and the co-design of the gate work function (WF) and the negative capacitance is investigated. A ring oscillator is simulated to analyze circuit energy-delay, showing that a significant energy reduction, up to 88% at iso-delay, can be achieved for NC VNW-FETs at low supply voltage. This study provides a credible method for analyzing the performance of NC-based devices and circuits and reveals the potential of NC VNW-FETs for low-power applications.
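
For context, the static Landau-Khalatnikov relation gives the ferroelectric an S-shaped charge-voltage characteristic, V_FE(Q) = 2aQ + 4bQ^3 + 6cQ^5, whose negative-slope branch is the negative-capacitance region the gate stack must be biased into. The Python sketch below evaluates this S-curve and locates that region; the Landau coefficients are illustrative values, not parameters extracted in the study.

import numpy as np

# Static Landau-Khalatnikov S-curve: V_FE(Q) = 2*a*Q + 4*b*Q**3 + 6*c*Q**5.
# The coefficients are illustrative stand-ins, not fitted ferroelectric data.
a, b, c = -1.8e9, 5.0e10, 0.0

def v_fe(q):
    return 2 * a * q + 4 * b * q**3 + 6 * c * q**5

def dv_dq(q):
    # Slope of the S-curve; a negative slope means negative capacitance.
    return 2 * a + 12 * b * q**2 + 30 * c * q**4

q = np.linspace(-0.15, 0.15, 601)   # ferroelectric charge sweep [C/m^2]
nc_region = q[dv_dq(q) < 0]         # branch where 1/C_FE is negative

print(f"NC region: {nc_region.min():.3f} to {nc_region.max():.3f} C/m^2")
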
The predominant paradigm for using machine learning models on a device is to train the model in the cloud and perform inference with the trained model on the device. However, with the growing number of smart devices and improved hardware, there is increasing interest in performing model training on the device itself. Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage both for understanding the state of the art and for identifying open challenges and future avenues of research. On-device learning, however, is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, and one/few-shot learning), and covering so many topics in a single survey is impractical. This survey finds a middle ground by reformulating on-device learning as resource-constrained learning, where the resources are compute and memory. This reformulation allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably. In addition to summarizing the state of the art, the survey also identifies a number of challenges and next steps for both the algorithmic and theoretical aspects of on-device learning.
