
Artificial Neural Network for Performance Modeling and Optimization of CMOS Analog Circuits

Publication date: 2012
Language: English





This paper presents an implementation of multilayer feed-forward neural networks (NN) for optimizing CMOS analog circuits. Neural network computational modules have recently gained acceptance as an unconventional and useful tool for modeling and design. A neural network can be trained to model the behavior of active or passive circuit components with high accuracy; a well-trained network produces more accurate results, depending on its learning capability. Neural network models can replace empirical modeling solutions that are limited in range and accuracy [2]. They are also easy to obtain for new circuits or devices, where they can substitute for analytical methods, and they can replace numerical modeling methods whose behavior is computationally expensive [2][10][20]. The proposed implementation aims to reduce resource requirements without much compromise on speed. The NN ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation (activation) function to the layer currently being computed. The concept is shown to be very effective in reducing resource requirements and enhancing speed.
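For illustration, a multilayer feed-forward network of the kind the abstract describes could be trained on simulated circuit samples. The sketch below is only a minimal stand-in: it uses an invented synthetic mapping from sizing parameters to a "gain" value, since the paper's actual circuits and training data are not given here.

```python
# Minimal sketch, assuming synthetic data: a small multilayer feed-forward
# network trained to map hypothetical CMOS sizing parameters to a "gain"
# value. Real training samples would come from circuit (e.g. SPICE) runs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: (W/L ratio, bias current); target: a toy gain figure.
X = rng.uniform(low=[1.0, 10e-6], high=[50.0, 200e-6], size=(200, 2))
y = (20.0 * np.log10(X[:, 0]) + 1e5 * X[:, 1]).reshape(-1, 1)

# Normalize so gradient descent behaves well.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# One tanh hidden layer, linear output, trained by plain back-propagation.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                     # network output
    err = pred - yn                        # mean-squared-error gradient term
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # back-propagate through tanh
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final training MSE:", float((err ** 2).mean()))
```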



Related research


The neuromorphic BrainScaleS-2 ASIC comprises mixed-signal neurons and synapse circuits as well as two versatile digital microprocessors. Primarily designed to emulate spiking neural networks, the system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks. Analog multiplication is carried out in the synapse circuits, while the results are accumulated on the neurons' membrane capacitors. Designed as an analog, in-memory computing device, it promises high energy efficiency. Fixed-pattern noise and trial-to-trial variations, however, require the implemented networks to cope with a certain level of perturbations. Further limitations are imposed by the digital resolution of the input values (5 bit), matrix weights (6 bit) and resulting neuron activations (8 bit). In this paper, we discuss BrainScaleS-2 as an analog inference accelerator and present calibration as well as optimization strategies, highlighting the advantages of training with hardware in the loop. Among other benchmarks, we classify the MNIST handwritten digits dataset using a two-dimensional convolution and two dense layers. We reach 98.0% test accuracy, closely matching the performance of the same network evaluated in software.
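As a rough illustration of the quantized vector-matrix multiply-accumulate mode described above (5-bit inputs, 6-bit weights, 8-bit activations), the following sketch models the arithmetic in software; the scaling, rounding, and saturation choices are assumptions, not the BrainScaleS-2 calibration.

```python
# Rough sketch with assumed scaling/rounding: quantized vector-matrix
# multiply-accumulate using 5-bit unsigned inputs, 6-bit signed weights
# and 8-bit signed activations, matching the bit widths in the abstract.
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits, signed):
    lo, hi = (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) if signed else (0, 2 ** bits - 1)
    return np.clip(np.rint(x), lo, hi).astype(np.int32)

inputs = quantize(rng.uniform(0, 31, size=16), bits=5, signed=False)      # 5-bit inputs
weights = quantize(rng.normal(0, 10, size=(16, 4)), bits=6, signed=True)  # 6-bit weights

acc = inputs @ weights          # accumulation (done on membrane capacitors in hardware)
scale = 0.05                    # assumed digitization scale, not a hardware constant
activations = quantize(acc * scale, bits=8, signed=True)                  # 8-bit result
print(activations)
```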
In this paper an attempt has been made to identify the most important human resource factors and to propose a diagnostic model based on the back-propagation and connectionist approaches to artificial neural networks (ANN). The focus of the study is the mobile-communication industry of India. The ANN-based approach is particularly important because conventional (e.g., algorithmic) approaches to problem solving have inherent disadvantages. The algorithmic approach is well suited to problems that are well understood and have known solutions. ANNs, on the other hand, learn by example and have processing capabilities similar to those of a human brain. The ANN approach has been adopted because of these inherent advantages over conventional algorithmic approaches, and because of its training and human-like intuitive decision-making capabilities. Therefore, this ANN-based approach is likely to help researchers and organizations reach better solutions to the problem of managing human resources. The study is particularly important because many such studies have been carried out in developed countries, but there is a shortage of them in developing nations like India. Here, a model has been derived using the connectionist ANN approach and then improved and verified via the back-propagation algorithm. The suggested ANN-based model can be used to test the human success and failure factors in any communication industry. Results have been obtained on the basis of the connectionist model, which has been further refined by BPNN to an accuracy of 99.99%. Any company can directly deploy this model to predict failure due to HR factors.
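Purely as an illustrative sketch of the kind of back-propagation classifier described (not the authors' model), the following trains a small network on invented HR-factor scores to predict a success/failure label; the factor names, data, and decision rule are assumptions for demonstration only.

```python
# Illustrative only (not the authors' model): a small back-propagation
# classifier mapping invented HR-factor scores to a success/failure label.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Hypothetical factor scores (0-10), e.g. training, engagement, attrition, pay.
X = rng.uniform(0, 10, size=(300, 4))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 8).astype(int)   # toy "success" rule

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```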
Tao Yang, Yadong Wei, Zhijun Tu (2018)
The widespread application of artificial neural networks has prompted researchers to experiment with FPGA and customized ASIC designs to speed up their computation. These implementation efforts have generally focused on weight multiplication and signal summation operations, and less on the activation functions used in these applications. Yet efficient hardware implementations of nonlinear activation functions like Exponential Linear Units (ELU), Scaled Exponential Linear Units (SELU), and Hyperbolic Tangent (tanh) are central to designing effective neural network accelerators, since these functions require considerable resources. In this paper, we explore efficient hardware implementations of activation functions using purely combinational circuits, with a focus on two widely used nonlinear activation functions, i.e., SELU and tanh. Our experiments demonstrate that neural networks are generally insensitive to the precision of the activation function. The results also prove that the proposed combinational-circuit-based approach is very efficient in terms of speed and area, with negligible accuracy loss on the MNIST, CIFAR-10 and IMAGENET benchmarks. Synopsys Design Compiler synthesis results show that the circuit designs for tanh and SELU can save between 3.13-7.69 times and 4.45-8.45 times the area compared to LUT/memory-based implementations, and can operate at 5.14 GHz and 4.52 GHz using the 28nm SVT library, respectively. The implementation is available at: https://github.com/ThomasMrY/ActivationFunctionDemo.
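As a toy illustration of the trade-off studied above, the sketch below replaces exact tanh with a cheap piecewise-linear approximation and measures the worst-case error; the breakpoints and slopes are assumptions for demonstration, not the paper's combinational circuit design.

```python
# Toy illustration: approximate tanh with an assumed piecewise-linear curve
# and report the maximum deviation from the exact function.
import numpy as np

def tanh_pwl(x):
    inner = 0.76 * x                                   # segment for |x| < 1
    outer = np.sign(x) * (0.24 * np.abs(x) + 0.52)     # segment for 1 <= |x| < 2
    return np.clip(np.where(np.abs(x) < 1.0, inner, outer), -1.0, 1.0)  # clamp beyond

x = np.linspace(-4.0, 4.0, 1001)
print("max abs error vs exact tanh:", np.abs(np.tanh(x) - tanh_pwl(x)).max())
```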
There are several indications that the brain is organized not on the basis of individual unreliable neurons, but on a micro-circuital scale providing Lego blocks employed to create complex architectures. At such an intermediate scale, the firing activity in the microcircuits is governed by collective effects emerging from the background noise soliciting spontaneous firing, the degree of mutual connection between the neurons, and the topology of the connections. We compare the spontaneous firing activity of small populations of neurons adhering to an engineered scaffold with simulations of biologically plausible CMOS artificial neuron populations whose spontaneous activity is ignited by tailored background noise. We provide a full set of flexible, low-power silicon blocks including neurons, excitatory and inhibitory synapses, and both white and pink noise generators for spontaneous firing activation. We achieve a comparable degree of correlation of the firing activity of the biological neurons by controlling the kind and the number of connections among the silicon neurons. The correlation between groups of neurons, organized as a ring of four distinct populations connected by the equivalent of interneurons, is triggered more effectively by adding multiple synapses to the connections than by increasing the number of independent point-to-point connections. The comparison between the biological and the artificial systems suggests that a considerable number of synapses are also active in biological populations adhering to engineered scaffolds.
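A minimal software analogue of the noise-driven spontaneous firing discussed above: two leaky integrate-and-fire units share part of their noise input, and the shared fraction controls how correlated their spike trains become. All constants are illustrative, not the silicon circuit parameters.

```python
# Minimal sketch with illustrative constants (not the silicon circuit values):
# noise-driven spontaneous firing of two leaky integrate-and-fire units.
import numpy as np

rng = np.random.default_rng(3)
dt, steps, tau, v_th = 1e-3, 20000, 20e-3, 1.0
shared_fraction = 0.5                      # assumed degree of common input

common = rng.normal(size=steps)
spikes = np.zeros((2, steps), dtype=bool)
v = np.zeros(2)
for t in range(steps):
    noise = shared_fraction * common[t] + (1 - shared_fraction) * rng.normal(size=2)
    v += (dt / tau) * (-v) + 0.3 * noise   # leaky integration plus noise drive
    fired = v >= v_th
    spikes[:, t] = fired
    v[fired] = 0.0                         # reset after a spike

# Correlation of binned spike counts as a crude co-activity measure.
counts = spikes.reshape(2, -1, 100).sum(axis=2)
print("spike-count correlation:", np.corrcoef(counts)[0, 1])
```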
Neuromorphic computing systems overcome the limitations of traditional von Neumann computing architectures. These computing systems can be further improved by using emerging technologies that are more efficient than CMOS for neural computation. Recent research has demonstrated that memristors and spintronic devices, used in various neural network designs, boost efficiency and speed. This paper presents a biologically inspired, fully spintronic neuron used in a fully spintronic Hopfield RNN. The network is used to solve tasks, and the results are compared against those of current Hopfield neuromorphic architectures that use emerging technologies.
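For context, a classical Hopfield network of the kind the abstract implements in spintronic hardware can be sketched in a few lines of software; this version uses the standard Hebbian storage rule and models no device physics.

```python
# Brief sketch of a classical Hopfield network: store one pattern with the
# Hebbian rule and recall it from a corrupted cue. No device physics modeled.
import numpy as np

rng = np.random.default_rng(4)
pattern = rng.choice([-1, 1], size=16)

W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

state = pattern.copy()
state[:4] *= -1                                # flip four bits as the noisy cue
for _ in range(5):                             # asynchronous updates until settled
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("stored pattern recovered:", bool(np.array_equal(state, pattern)))
```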