
On the validity of memristor modeling in the neural network literature

Added by Yuriy Pershin
Publication date: 2019
Research language: English





An analysis of the literature shows that there are two types of non-memristive models that have been widely used in the modeling of so-called memristive neural networks. Here, we demonstrate that such models have nothing in common with the concept of memristive elements: they describe either non-linear resistors or certain bi-state systems, all of which are devices without memory. Therefore, the results presented in a significant number of publications are at least questionable, if not completely irrelevant to the actual field of memristive neural networks.
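To make the distinction concrete, here is a minimal Python sketch (not from the paper) contrasting a current-controlled memristive system, v = R(x)·i with state equation dx/dt = k·i, against a memoryless non-linear resistor; the state model and all parameter values are illustrative assumptions. Driven by the same sinusoidal current, the memristive element produces different voltages for the same instantaneous current depending on its history (the pinched hysteresis loop), while the non-linear resistor's v-i curve is single-valued.

```python
# Minimal sketch (not from the paper): a current-controlled memristive
# system, v = R(x) * i with state equation dx/dt = k * i, versus a
# memoryless non-linear resistor. All parameter values are illustrative.
import numpy as np

def simulate(dt=1e-4, T=0.1, f=50.0, I0=1e-3):
    t = np.arange(0.0, T, dt)
    i = I0 * np.sin(2 * np.pi * f * t)           # sinusoidal drive current

    # Memristive element: resistance depends on an internal state x
    R_on, R_off, k = 100.0, 16e3, 1e4
    x = 0.5                                      # internal state in [0, 1]
    v_mem = np.empty_like(t)
    for n, cur in enumerate(i):
        R = R_on * x + R_off * (1.0 - x)         # state-dependent resistance
        v_mem[n] = R * cur
        x = float(np.clip(x + k * cur * dt, 0.0, 1.0))  # state remembers past current

    # Non-linear resistor: v is an instantaneous function of i, no state
    v_nlr = 1e3 * i + 5e7 * i**3
    return t, i, v_mem, v_nlr

t, i, v_mem, v_nlr = simulate()
# sin(pi/4) == sin(3*pi/4): identical drive current, different history
n1, n2 = 25, 75
print("drive current:      ", i[n1], "==", i[n2])
print("memristor voltage:  ", v_mem[n1], "!=", v_mem[n2])  # pinched hysteresis
print("non-linear resistor:", v_nlr[n1], "==", v_nlr[n2])  # single-valued curve
```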



Related research

Ziyi Gong, Paul Munro (2020)
Vital to primary visual processing, retinal circuitry shows many similar structures across a very broad array of species, both vertebrate and invertebrate, especially functional components such as lateral inhibition. This surprisingly conservative pattern raises the question of how evolution leads to it, and whether any alternative could also provide helpful preprocessing. Here we design a method using a genetic algorithm that, with many degrees of freedom, leads to architectures whose functions are similar to the biological retina, as well as to effective alternatives that differ in structure and function. We compare this model to natural evolution and discuss how our framework can contribute to goal-driven search for, and sustainable enhancement of, neural network models in machine learning.
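As a rough illustration of such a search, the following is a minimal genetic-algorithm sketch; the genome encoding (a 3x3 kernel), the edge-matching fitness function, and all operator settings are assumptions for illustration, not the paper's actual method.

```python
# Toy genetic algorithm evolving a retina-like preprocessing kernel.
# Encoding, fitness, and operators are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def convolve(img, k):
    """Valid 2-D convolution with plain NumPy."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

LAPLACIAN = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)

def fitness(kernel, images, targets):
    """Reward kernels whose response correlates with an edge map
    (a crude stand-in for lateral-inhibition-style preprocessing)."""
    return np.mean([np.corrcoef(t.ravel(), convolve(im, kernel).ravel())[0, 1]
                    for im, t in zip(images, targets)])

def evolve(pop_size=20, generations=25):
    images = [rng.random((12, 12)) for _ in range(3)]
    targets = [convolve(im, LAPLACIAN) for im in images]
    pop = [rng.normal(size=(3, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda k: fitness(k, images, targets), reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), size=2, replace=False)
            mask = rng.random((3, 3)) < 0.5       # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            children.append(child + rng.normal(scale=0.1, size=(3, 3)))  # mutation
        pop = parents + children
    return max(pop, key=lambda k: fitness(k, images, targets))

best = evolve()
print("evolved kernel (center-surround-like if selection worked):\n", best.round(2))
```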
This paper presents an implementation of multilayer feed-forward neural networks (NNs) to optimize CMOS analog circuits. Neural network computational modules have recently gained acceptance as an unconventional and useful tool for modeling and design. A neural network can be trained to achieve high performance for active or passive circuit components, and a well-trained network can produce accurate results, depending on its learning capability. Neural network models can replace empirical modeling solutions limited in range and accuracy [2]. They are easy to obtain for new circuits or devices, where they can replace analytical methods, and they can also replace numerical modeling methods, which are computationally expensive [2][10][20]. The proposed implementation is aimed at reducing resource requirements without much compromise on speed. The NN ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function of the layer currently being computed. The concept is shown to be very effective in reducing resource requirements and enhancing speed.
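For illustration, here is a minimal sketch of the general idea: training a small feed-forward network to map circuit sizing parameters to a performance metric. The synthetic data generator stands in for SPICE simulations, and the architecture and hyperparameters are assumptions, not the paper's.

```python
# Minimal sketch (not the paper's code): a one-hidden-layer feed-forward
# network trained to map CMOS sizing parameters to a performance metric.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: inputs = (W/L ratio, bias current in uA); the target is a
# made-up smooth "gain" surface standing in for simulator output.
X = rng.uniform([1.0, 10.0], [20.0, 100.0], size=(500, 2))
y = (40.0 * np.log10(X[:, 0]) - 0.1 * X[:, 1]).reshape(-1, 1)

# Normalize so a single learning rate suits both inputs and outputs.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

n_hidden, lr = 16, 0.05
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(2000):                # full-batch gradient descent
    h = np.tanh(Xn @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - yn                      # gradient of 0.5 * squared error
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final normalized MSE:", float((err ** 2).mean()))
```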
This paper is concerned with the utilization of deterministically modeled chemical reaction networks for the implementation of (feed-forward) neural networks. We develop a general mathematical framework and prove that the ordinary differential equations (ODEs) associated with certain reaction network implementations of neural networks have desirable properties including (i) existence of unique positive fixed points that are smooth in the parameters of the model (necessary for gradient descent), and (ii) fast convergence to the fixed point regardless of initial condition (necessary for efficient implementation). We do so by first making a connection between neural networks and fixed points for systems of ODEs, and then by constructing reaction networks with the correct associated set of ODEs. We demonstrate the theory by constructing a reaction network that implements a neural network with a smoothed ReLU activation function, though we also demonstrate how to generalize the construction to allow for other activation functions (each with the desirable properties listed previously). As there are multiple types of networks utilized in this paper, we also give a careful introduction to both reaction networks and neural networks, in order to disambiguate the overlapping vocabulary in the two settings and to clearly highlight the role of each network's properties.
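The core fixed-point idea can be illustrated with a short sketch (a simplified stand-in; the paper constructs actual mass-action reaction networks whose associated ODEs realize the computation): encode a neuron's output as the globally attracting fixed point of dy/dt = softplus(w·x + b) − y, so that integration from any initial condition converges to the smoothed-ReLU activation value.

```python
# Simplified stand-in (the paper builds mass-action reaction networks whose
# ODEs realize this): a neuron's output encoded as the globally attracting
# fixed point of dy/dt = softplus(w.x + b) - y, the smoothed-ReLU value.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))            # smoothed ReLU

def neuron_ode_output(x, w, b, y0, dt=0.01, steps=2000):
    """Forward-Euler integration of dy/dt = softplus(w.x + b) - y."""
    y = y0
    target = softplus(np.dot(w, x) + b)   # the unique fixed point
    for _ in range(steps):
        y += dt * (target - y)            # converges exponentially, rate 1
    return y, target

# Convergence does not depend on the (positive) initial condition:
for y0 in (0.01, 0.5, 5.0):
    y, target = neuron_ode_output(x=np.array([1.0, -2.0]),
                                  w=np.array([0.8, 0.3]), b=0.2, y0=y0)
    print(f"y0={y0:>4}: converged to {y:.6f}, fixed point {target:.6f}")
```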
Yufeng Hao, Steven Quigley (2017)
Recently, FPGAs have been increasingly applied to problems such as speech recognition, machine learning, and cloud computation, for example the Bing search engine used by Microsoft. This is due to the FPGA's great parallel computation capacity as well as its low power consumption compared to general-purpose processors. However, these applications mainly focus on large-scale FPGA clusters, which have extreme processing power for executing massive matrix or convolution operations but are unsuitable for portable or mobile applications. This paper describes research on a single-FPGA platform to explore the applications of FPGAs in these fields. In this project, we design a Deep Recurrent Neural Network (DRNN) Language Model (LM) and implement a hardware accelerator with an AXI Stream interface on a PYNQ board equipped with a XILINX ZYNQ SOC XC7Z020-1CLG400C. The PYNQ has not only abundant programmable logic resources but also a flexible embedded operating system, which makes it suitable for the natural language processing field. We design the DRNN language model with Python and Theano, train the model on a CPU platform, and deploy the model on a PYNQ board to validate it with a Jupyter notebook. Meanwhile, we design the hardware accelerator with Overlay, a kind of hardware library on PYNQ, and verify the acceleration effect on the PYNQ board. Finally, we find that the DRNN language model can be deployed on the embedded system smoothly and that the Overlay accelerator with the AXI Stream interface achieves 20 GOPS processing throughput, which constitutes a 70.5X and 2.75X speedup compared to the work in Ref. 30 and Ref. 31, respectively.
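As a hedged sketch of what driving such a stream-based accelerator from PYNQ's Python API can look like (the bitstream name, the DMA instance name, the data type, and the buffer sizes below are assumptions, not the paper's actual design):

```python
# Hedged sketch of driving a stream-based accelerator from PYNQ's Python
# API. The bitstream name "drnn_accel.bit", the DMA instance name
# axi_dma_0, and the buffer sizes are assumptions, not the paper's design.
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("drnn_accel.bit")   # hypothetical bitstream file
dma = overlay.axi_dma_0               # AXI DMA between PS memory and the accelerator

# Physically contiguous buffers that the DMA engine can address.
in_buf = allocate(shape=(1024,), dtype=np.float32)
out_buf = allocate(shape=(1024,), dtype=np.float32)
in_buf[:] = np.random.rand(1024).astype(np.float32)  # e.g. an activation vector

dma.sendchannel.transfer(in_buf)      # stream the input into the accelerator
dma.recvchannel.transfer(out_buf)     # and collect the result stream
dma.sendchannel.wait()
dma.recvchannel.wait()

print("accelerator output:", out_buf[:8])
in_buf.freebuffer()
out_buf.freebuffer()
```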
Makoto Itoh (2019)
In this paper, we introduce some interesting features of a memristor CNN (Cellular Neural Network). We first show that there is a similarity between the dynamics of memristors and neurons: some kinds of flux-controlled memristors cannot respond to a sinusoidal voltage source quickly, that is, they cannot switch 'on' rapidly. Furthermore, these memristors have a refractory period after switching 'on', during which they cannot respond to further sinusoidal inputs until the flux is decreased. We next show that a memristor-coupled two-cell CNN can exhibit chaotic behavior. In this system, the memristors switch 'off' and 'on' at irregular intervals, and the two cells are connected when either or both of the memristors switch 'on'. We then propose a modified CNN model that can hold a binary output image even if all cells are disconnected and no signal is supplied to the cells after a certain point in time. However, the modified CNN requires power to maintain the output image; that is, it is volatile. We next propose a new memristor CNN model. It can also hold a binary output state (image) even if all cells are disconnected and no signal is supplied to the cells, by virtue of the memristors' switching behavior. Furthermore, even if we turn off the power of the system during the computation, it can resume from the previous average output state, since the memristor CNN has both short-term (volatile) and long-term (non-volatile) memory. This suspend-and-resume feature is useful when we want to save the current state and continue working later from that state. Finally, we show that the memristor CNN can exhibit interesting two-dimensional waves if an inductor is connected to each memristor CNN cell.
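A minimal sketch (illustrative, not Itoh's model) of the first observation: a flux-controlled memristor driven by a sinusoidal voltage switches 'on' only after the accumulated flux crosses a threshold, so its response lags the drive, and it then stays 'on' (unresponsive to further input) until the flux falls back below the threshold.

```python
# Illustrative sketch (not Itoh's model): a flux-controlled memristor under
# a sinusoidal voltage switches 'on' only after the flux (the integral of
# the voltage) crosses a threshold, so switching lags the drive, and the
# device cannot respond again until the flux falls back below threshold.
import numpy as np

def simulate(dt=1e-3, T=1.0, V0=1.0, f=1.0, phi_th=0.25):
    t = np.arange(0.0, T, dt)
    v = V0 * np.sin(2 * np.pi * f * t)
    phi = np.cumsum(v) * dt                        # flux = integral of voltage
    state = (np.abs(phi) > phi_th).astype(float)   # 'on' once |flux| > threshold
    return t, v, phi, state

t, v, phi, state = simulate()
on = np.flatnonzero(state > 0)
first_on = on[0]
first_off = first_on + np.argmax(state[first_on:] == 0)
print(f"drive starts at t=0 s, but the memristor switches 'on' only at "
      f"t={t[first_on]:.3f} s and back 'off' at t={t[first_off]:.3f} s")
```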
