There is widespread interest in emerging technologies, especially resistive crossbars, for accelerating Deep Neural Networks (DNNs). Resistive crossbars offer a highly parallel and efficient matrix-vector multiplication (MVM) operation, and since MVM is the dominant operation in DNNs, crossbars are ideally suited to accelerating them. However, various device and circuit non-idealities lead to errors in the MVM output, thereby reducing DNN accuracy. To address this, we propose crossbar re-mapping strategies that mitigate line-resistance-induced accuracy degradation in DNNs without re-training the learned weights, unlike most prior works. Line resistances degrade the voltage levels along the crossbar columns, inducing larger errors at the columns farther from the drivers. We rank the DNN weights and kernels using a sensitivity analysis and re-arrange the columns such that the most sensitive kernels are mapped closest to the drivers, thereby minimizing the impact of errors on the overall accuracy. We propose two algorithms, a static remapping strategy (SRS) and a dynamic remapping strategy (DRS), to optimize the crossbar re-arrangement of a pre-trained DNN. We demonstrate the benefits of our approach on a standard VGG16 network trained on the CIFAR-10 dataset. Our results show that SRS and DRS limit the accuracy degradation to 2.9% and 2.1%, respectively, compared to a 5.6% drop from an as-is mapping of weights and kernels to crossbars. We believe this work adds a complementary axis of optimization that can be used in tandem with existing mitigation techniques, such as in-situ compensation, technology-aware training, and re-training approaches, to enhance system performance.
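To make the re-ordering idea concrete, here is a minimal Python sketch, not the SRS/DRS algorithms themselves: given per-kernel sensitivity scores (assumed here to be precomputed), it permutes the crossbar columns so the most sensitive kernels sit at index 0, i.e., closest to the drivers, and restores the original kernel order after the MVM.

    import numpy as np

    def remap_columns(W, sensitivity):
        """Permute columns of the weight matrix W (rows x cols) so the
        column with the highest sensitivity score lands at index 0,
        i.e., closest to the drivers. Returns the permuted matrix and
        the permutation needed to restore the original output order."""
        order = np.argsort(-sensitivity)       # descending sensitivity
        return W[:, order], order

    # Toy example: 4 input rows, 3 output columns (kernels).
    W = np.random.randn(4, 3)
    sens = np.array([0.2, 0.9, 0.5])           # assumed per-kernel sensitivity
    W_mapped, order = remap_columns(W, sens)

    x = np.random.randn(4)
    y_mapped = x @ W_mapped                    # MVM on the re-mapped crossbar
    y = np.empty_like(y_mapped)
    y[order] = y_mapped                        # restore original kernel order
    assert np.allclose(y, x @ W)

In an ideal crossbar the permutation is exactly invertible, as the assertion checks; the benefit appears only once line-resistance errors grow with column distance from the drivers.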
We propose a technology-independent method, referred to as adjacent connection matrix (ACM), to efficiently map signed weight matrices to non-negative crossbar arrays. When compared to mapping methods with the same hardware overhead, using ACM leads to improvements of up to 20% in training accuracy for ResNet-20 on the CIFAR-10 dataset when training with crossbar arrays of 5-bit precision or lower. When compared with strategies that use two elements to represent a weight, ACM achieves comparable training accuracies while also offering area and read energy reductions of 2.3x and 7x, respectively. ACM also has a mild regularization effect that improves inference accuracy in crossbar arrays without any retraining or costly device/variation-aware training.
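The abstract does not detail the ACM construction itself, but the two-elements-per-weight baseline it compares against is the standard differential scheme; a minimal sketch of that baseline (the conductance range and normalization are assumptions) is:

    import numpy as np

    def differential_map(W, g_min=0.0, g_max=1.0):
        """Map a signed weight matrix onto two non-negative conductance
        arrays (G_pos, G_neg), the common two-elements-per-weight scheme:
        W is recovered as G_pos - G_neg after rescaling."""
        scale = max(np.max(np.abs(W)), 1e-12)
        Wn = W / scale                          # normalize to [-1, 1]
        G_pos = np.clip(Wn, 0, None) * (g_max - g_min) + g_min
        G_neg = np.clip(-Wn, 0, None) * (g_max - g_min) + g_min
        return G_pos, G_neg, scale

    W = np.array([[0.5, -0.25], [-1.0, 0.75]])
    G_pos, G_neg, s = differential_map(W)
    x = np.array([1.0, 2.0])
    y = (x @ G_pos - x @ G_neg) * s             # two crossbar reads per MVM
    assert np.allclose(y, x @ W)

ACM reportedly reaches comparable accuracy without duplicating every weight across two devices, which is where the 2.3x area and 7x read-energy savings come from.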
In-memory computing is an emerging non-von Neumann computing paradigm in which certain computational tasks are performed in memory by exploiting the physical attributes of the memory devices. Memristive devices such as phase-change memory (PCM), where information is stored in their conductance levels, are especially well suited for in-memory computing. In particular, memristive devices organized in a crossbar configuration can be used to perform matrix-vector multiplication operations by exploiting Kirchhoff's circuit laws. To explore the feasibility of such in-memory computing cores in applications such as deep learning, as well as for system-level architectural exploration, it is highly desirable to develop an accurate hardware emulator that captures the key physical attributes of the memristive devices. Here, we present one such emulator for PCM and experimentally validate it using measurements from a PCM prototype chip. Moreover, we present an application of the emulator to neural network inference, where it captures the conductance evolution of approximately 400,000 PCM devices remarkably well.
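As a rough illustration of the crossbar physics being emulated, not the validated PCM emulator itself, the sketch below computes the MVM via Ohm's and Kirchhoff's laws and applies a toy conductance-drift and read-noise model; the drift exponent and noise level are placeholder assumptions.

    import numpy as np

    def crossbar_mvm(G, v):
        """Ideal crossbar MVM: Ohm's law gives a current G[i, j] * v[i]
        through each device, and Kirchhoff's current law sums currents
        along each column, so the column currents are I = v @ G."""
        return v @ G

    def pcm_read(G0, t, nu=0.05, sigma=0.01, t0=1.0):
        """Toy PCM conductance model: multiplicative temporal drift
        (G ~ t^-nu) plus Gaussian read noise; nu and sigma are
        placeholder values, not fitted device parameters."""
        drift = (t / t0) ** (-nu)
        noise = np.random.normal(0.0, sigma, size=G0.shape)
        return np.clip(G0 * drift + noise, 0.0, None)

    G0 = np.random.uniform(0.0, 1.0, size=(64, 32))   # programmed conductances
    v = np.random.uniform(-0.2, 0.2, size=64)         # input voltages
    I_ideal = crossbar_mvm(G0, v)
    I_drifted = crossbar_mvm(pcm_read(G0, t=1e4), v)  # read after drift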
Memristive crossbars aim to implement analog-weighted neural networks; however, a faithful realization of such crossbar arrays is hindered by the limited number of switching states of memristive devices. In this work, we propose the design of an analog deep neural network with binary weight updates through the backpropagation algorithm using binary-state memristive devices. We show that such networks can be successfully used for image-processing tasks and have the advantage of lower power consumption and smaller on-chip area in comparison with their digital counterparts. The proposed network was benchmarked on MNIST handwritten digit recognition, achieving an accuracy of approximately 90%.
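A minimal sketch of training with binary weights, in the spirit of what the abstract describes (the latent-weight, straight-through update and the single-layer softmax model are assumptions, not the authors' network):

    import numpy as np

    def binarize(W):
        """Quantize latent weights to the two device states {-1, +1}."""
        return np.where(W >= 0, 1.0, -1.0)

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, size=(784, 10))    # latent real-valued weights
    lr = 0.01

    def step(x, y_onehot):
        global W
        Wb = binarize(W)                      # forward pass uses binary states
        logits = x @ Wb
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        grad = x.T @ (p - y_onehot) / len(x)  # gradient w.r.t. binary weights
        W -= lr * grad                        # straight-through latent update

    x = rng.random((32, 784))                 # toy batch of inputs
    y = np.eye(10)[rng.integers(0, 10, 32)]   # toy one-hot labels
    step(x, y)

Only the binarized weights ever need to be programmed onto the devices; the real-valued latents live off-chip during training.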
The rapidly expanding field of hardware-intrinsic security primitives aims to address significant security challenges of a massively interconnected world in the age of information technology. The main idea of such primitives is to employ instance-specific, process-induced variations in electronic hardware as a source of cryptographic data. Among the emergent technologies, memristive devices provide unique opportunities for security applications due to the underlying stochasticity in their operation. Herein, we report a prototype of robust, dense, and reconfigurable physical unclonable function (PUF) primitives based on three-dimensional passive metal-oxide memristive crossbar circuits, which make positive use of process-induced variations in the devices' nonlinear I-V characteristics and their analog tuning. We first characterize security metrics for a basic building block of the security primitives based on a two-layer stack with monolithically integrated 10x10 250-nm half-pitch memristive crossbar circuits. The experimental results show that the average uniformity and diffusivity, measured on a random sample of 6,000 64-bit responses out of ~697,000 total, are close to the ideal 50%, with a 5% standard deviation for both metrics. The uniqueness, which was evaluated on a smaller sample by readjusting the conductances of crosspoint devices within the same crossbar, is also close to the ideal 50% +/- 1%, while the smallest bit error rate, i.e., the complement of reliability, measured over a 30-day window under +/-20% power supply variations, was ~1.5% +/- 1%. We then utilize multiple instances of the basic block to demonstrate a physically unclonable function primitive with 10-bit hidden challenge generation that encodes more than 10^19 challenge-response pairs and has comparable uniformity, diffusivity, and bit error rate.
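The reported metrics can all be computed from raw response bits; here is a minimal sketch with synthetic responses (the 1.5% flip probability mirrors the reported bit error rate, but the data is random, not measured):

    import numpy as np

    def uniformity(responses):
        """Fraction of 1s per response; ideally 50% on average."""
        return responses.mean(axis=1)

    def uniqueness(resp_a, resp_b):
        """Mean fractional Hamming distance between the responses of two
        PUF instances to the same challenges; ideally 50%."""
        return np.mean(resp_a != resp_b)

    def bit_error_rate(reference, reread):
        """Fraction of response bits that flip between the enrollment
        read and a later re-read; lower is better."""
        return np.mean(reference != reread)

    rng = np.random.default_rng(1)
    r1 = rng.integers(0, 2, size=(6000, 64))   # 6,000 64-bit responses
    r2 = rng.integers(0, 2, size=(6000, 64))   # a second PUF instance
    flips = rng.random(r1.shape) < 0.015       # ~1.5% assumed bit flips
    r1_reread = np.where(flips, 1 - r1, r1)
    print(uniformity(r1).mean(), uniqueness(r1, r2), bit_error_rate(r1, r1_reread))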
The analog nature of computing in memristive crossbars poses significant challenges due to various non-idealities, such as parasitic resistances and the non-linear I-V characteristics of the devices. These non-idealities can have a detrimental impact on the functionality, i.e., the computational accuracy, of crossbars. Past works have explored modeling the non-idealities using analytical techniques. However, several non-idealities exhibit data-dependent behavior, which cannot be captured by analytical (non-data-dependent) models, thereby limiting their suitability for predicting application accuracy. To address this, we propose a Generalized Approach to Emulating Non-Ideality in Memristive Crossbars using Neural Networks (GENIEx), which accurately captures the data-dependent nature of non-idealities. We perform extensive HSPICE simulations of crossbars with different voltage and conductance combinations. Following that, we train a neural network to learn the transfer characteristics of the non-ideal crossbar. Next, we build a functional simulator that includes key architectural facets such as \textit{tiling} and \textit{bit-slicing} to analyze the impact of non-idealities on the classification accuracy of large-scale neural networks. We show that GENIEx achieves \textit{low} root mean square errors (RMSE) of $0.25$ and $0.7$ for low and high voltages, respectively, compared to HSPICE. Additionally, the GENIEx errors are $7\times$ and $12.8\times$ lower than those of an analytical model, which can only capture the linear non-idealities. Further, using the functional simulator and GENIEx, we demonstrate that an analytical model can overestimate the degradation in classification accuracy by $\geq 10\%$ on the CIFAR-100 and $3.7\%$ on the ImageNet datasets compared to GENIEx.
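As a conceptual sketch of the GENIEx idea, not its actual architecture or HSPICE data, the following trains a small neural network to regress non-ideal crossbar currents from voltage and conductance inputs and reports the RMSE; the synthetic non-ideality, network size, and training schedule are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for HSPICE data: inputs are flattened (voltage, conductance)
    # pairs, targets are non-ideal column currents. GENIEx trains on real
    # HSPICE simulations; this synthetic data only makes the sketch runnable.
    n, rows = 4096, 16
    V = rng.uniform(0, 0.25, size=(n, rows))
    G = rng.uniform(0, 1.0, size=(n, rows))
    ideal = np.sum(V * G, axis=1, keepdims=True)
    target = 0.9 * ideal + 0.02 * ideal**2        # toy data-dependent non-ideality

    X = np.hstack([V, G])
    W1 = rng.normal(0, 0.1, (2 * rows, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.1, (32, 1));        b2 = np.zeros(1)

    lr = 0.05
    for _ in range(2000):                          # plain gradient descent on MSE
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - target
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)
        gW1 = X.T @ dh / n;  gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - target) ** 2))
    print(f"RMSE vs. reference data: {rmse:.4f}")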