
Programmable Memristive Threshold Logic Gate Array

Added by Dr. Alex James
Publication date: 2018
Research language: English





This paper proposes the implementation of a programmable threshold logic gate (TLG) crossbar array based on modified TLG cells for high-speed processing and computation. Unlike existing architectures, the operation of the proposed TLG array does not depend on the input signal or timing pulses. The circuit is implemented in TSMC $180\,nm$ CMOS technology. The on-chip area and power dissipation of the simulated $3\times 4$ TLG array are $1463~\mu m^2$ and $425~\mu W$, respectively.
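As a rough behavioural illustration of what a TLG cell computes, the sketch below models a threshold gate whose weights are memristor conductances: the output fires when the weighted input sum reaches a programmable threshold. The binary inputs, integer conductance values, and row-per-gate array layout are simplifying assumptions made here for illustration; the paper's cells are analog memristive/CMOS circuits.

# Behavioural sketch of a threshold logic gate (TLG) and a small TLG array,
# assuming binary inputs and conductances acting as weights.

def tlg(inputs, conductances, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    weighted_sum = sum(x * g for x, g in zip(inputs, conductances))
    return 1 if weighted_sum >= threshold else 0

def tlg_array(input_vector, conductance_rows, thresholds):
    """Evaluate a row-per-gate TLG array: one programmed gate per row."""
    return [tlg(input_vector, row, t)
            for row, t in zip(conductance_rows, thresholds)]

if __name__ == "__main__":
    # Two rows programmed as a 3-input majority gate (threshold 2)
    # and a 3-input AND gate (threshold 3).
    rows = [[1, 1, 1], [1, 1, 1]]
    thresholds = [2, 3]
    print(tlg_array([1, 0, 1], rows, thresholds))  # -> [1, 0]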



Related research

For decades, advances in electronics were directly driven by the scaling of CMOS transistors according to Moore's law. However, both CMOS scaling and the classical computer architecture are approaching fundamental and practical limits, and new computing architectures based on emerging devices, such as resistive random-access memory (RRAM) devices, are expected to sustain the exponential growth of computing capability. Here we propose a novel memory-centric, reconfigurable, general-purpose computing platform that is capable of handling the explosive amount of data in a fast and energy-efficient manner. The proposed computing architecture is based on a uniform, physical, resistive, memory-centric fabric that can be optimally reconfigured and utilized to perform different computing and data storage tasks in a massively parallel approach. The system can be tailored to achieve maximal energy efficiency based on the data flow by dynamically allocating the basic computing fabric for storage, arithmetic, and analog computing including neuromorphic computing tasks.
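The core analog operation behind such resistive fabrics is matrix-vector multiplication by Ohm's and Kirchhoff's laws: row voltages applied across programmed conductances produce column currents proportional to the weighted sums. The sketch below models this with ideal devices and numpy; the device values and sizes are illustrative assumptions, not figures from the work above.

# Ideal-crossbar sketch: column currents I_j = sum_i G_ij * V_i.
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Analog matrix-vector product of an ideal resistive crossbar."""
    return conductances.T @ voltages

if __name__ == "__main__":
    G = np.array([[1.0e-6, 2.0e-6],
                  [3.0e-6, 0.5e-6],
                  [2.0e-6, 1.0e-6]])   # conductances in siemens, 3 rows x 2 columns
    V = np.array([0.2, 0.1, 0.3])      # row voltages in volts
    print(crossbar_mvm(G, V))          # column currents in amperes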
Computational ghost imaging is a promising technique for single-pixel imaging because it is robust to disturbance and can be operated over broad wavelength bands, unlike common cameras. However, one disadvantage of this method is its long calculation time for image reconstruction. In this paper, we have designed a dedicated calculation circuit that accelerates computational ghost imaging. We implemented this circuit on a field-programmable gate array, which reduced the calculation time compared to a CPU. The dedicated circuit reconstructs images at a frame rate of 300 Hz.
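The reconstruction that such a circuit accelerates is, in its basic form, a correlation between the single-pixel (bucket) signal and the known illumination patterns. The following sketch shows that correlation in numpy; the random binary patterns, synthetic object, and pattern count are assumptions made purely for illustration and are not taken from the paper.

# Correlation-based ghost-imaging reconstruction: image ~ <B*I> - <B><I>.
import numpy as np

def reconstruct(patterns, bucket):
    """Estimate the image as the covariance of bucket signal and patterns."""
    return np.tensordot(bucket - bucket.mean(), patterns, axes=1) / len(bucket)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obj = np.zeros((16, 16)); obj[4:12, 6:10] = 1.0           # synthetic object
    patterns = rng.integers(0, 2, size=(4000, 16, 16)).astype(float)
    bucket = np.tensordot(patterns, obj, axes=2)               # single-pixel signal
    image = reconstruct(patterns, bucket)
    print(image.shape)                                         # (16, 16) estimate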
We review some of the basic principles, fundamentals, technologies, architectures and recent advances leading to the implementation of Field-Programmable Photonic Gate Arrays (FPPGAs).
Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA), which enhances memristor crossbars with general-purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques, exposed through a specialized Instruction Set Architecture (ISA), retain the efficiency of in-memory computing and analog circuitry without compromising programmability. We also present the PUMA compiler, which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of $577~GOPS/s/mm^2$ and $837~GOPS/s/W$, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to $2,446\times$ energy and $66\times$ latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.
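One part of the partitioning that such a compiler performs can be pictured as tiling a large weight matrix over fixed-size crossbars, with each tile producing a partial matrix-vector product that is accumulated digitally. The sketch below is a conceptual illustration only, not PUMA's actual ISA, compiler, or tile size; the 128x128 tile and numpy arithmetic are assumptions.

# Tiled matrix-vector product: each (tile x tile) slice stands in for one crossbar.
import numpy as np

def tiled_mvm(weights, x, tile=128):
    """Accumulate partial products from fixed-size weight tiles."""
    rows, cols = weights.shape
    y = np.zeros(rows)
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            # One crossbar programmed with weights[r:r+tile, c:c+tile].
            y[r:r + tile] += weights[r:r + tile, c:c + tile] @ x[c:c + tile]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.standard_normal((300, 500))
    x = rng.standard_normal(500)
    assert np.allclose(tiled_mvm(W, x), W @ x)   # matches the direct product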
Probabilistic Neural Network (PNN) is a feed-forward artificial neural network developed for solving classification problems. This paper proposes a hardware implementation of an approximated PNN (APNN) algorithm in which the conventional exponential function of the PNN is replaced with gated threshold logic. The weights of the PNN are approximated using a memristive crossbar architecture. In particular, the proposed algorithm performs normalization of the training weights and quantization into 16 levels, which significantly reduces the complexity of the circuit.
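To make the flow concrete, the sketch below mimics the described approximation at a behavioural level: stored weights are quantized to 16 levels (as memristive conductance states would be), and the exponential kernel of a classical PNN is replaced by a simple thresholded-distance gate. The threshold value, feature ranges, and synthetic data are illustrative assumptions; the circuit-level gating in the paper may differ.

# Behavioural sketch of an approximated PNN with 16-level weight quantization
# and a gated (0/1) activation in place of the exponential kernel.
import numpy as np

def quantize16(w):
    """Map weights assumed to lie in [0, 1] onto 16 discrete levels."""
    return np.clip(np.round(w * 15) / 15, 0.0, 1.0)

def apnn_classify(x, class_patterns, threshold=0.5):
    """Score each class by how many stored patterns fall within the threshold."""
    scores = []
    for patterns in class_patterns:            # one array of stored patterns per class
        d = np.linalg.norm(patterns - x, axis=1)
        scores.append(np.sum(d < threshold))   # gated activation instead of exp(-d^2)
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    class0 = quantize16(rng.normal(0.3, 0.05, size=(20, 4)))
    class1 = quantize16(rng.normal(0.7, 0.05, size=(20, 4)))
    x = rng.normal(0.3, 0.05, size=4)          # unseen sample near class 0
    print(apnn_classify(x, [class0, class1]))  # -> 0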