Approximate Probabilistic Neural Networks with Gated Threshold Logic

Added by Dr. Alex James
Publication date: 2018
Language: English

The Probabilistic Neural Network (PNN) is a feed-forward artificial neural network developed for solving classification problems. This paper proposes a hardware implementation of an approximated PNN (APNN) algorithm in which the conventional exponential function of the PNN is replaced with gated threshold logic. The weights of the PNN are approximated using a memristive crossbar architecture. In particular, the proposed algorithm normalizes the training weights and quantizes them into 16 levels, which significantly reduces the complexity of the circuit.
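
For intuition, here is a minimal behavioural sketch of the idea, not the paper's circuit: the exponential kernel of a standard PNN pattern neuron is replaced by a hard threshold on the input-to-weight distance, and the stored weights are normalized and uniformly quantized to 16 levels. The function names, the threshold value, and the uniform quantization scheme are illustrative assumptions.

```python
import numpy as np

def quantize(w, levels=16):
    """Normalize weights to [0, 1] and quantize to a fixed number of levels,
    mimicking the 16 memristive conductance states (uniform levels assumed)."""
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return np.round(w * (levels - 1)) / (levels - 1)

def apnn_classify(x, train_x, train_y, threshold=0.1, levels=16):
    """Approximate PNN: the exponential kernel exp(-d^2 / 2s^2) of a PNN
    pattern neuron is replaced by a hard threshold, so a neuron fires 1
    when the squared distance to its stored weights is below the threshold.
    The input x is assumed to be normalized to [0, 1] like the weights."""
    w = quantize(train_x, levels)           # quantized stored patterns
    d2 = np.sum((w - x) ** 2, axis=1)       # squared distance per pattern neuron
    fired = (d2 < threshold).astype(float)  # gated threshold activation
    classes = np.unique(train_y)
    votes = [fired[train_y == c].sum() for c in classes]  # summation layer
    return classes[int(np.argmax(votes))]   # output layer: class with most votes

# Toy usage with two classes of 2-D feature vectors (illustrative data).
train_x = np.array([[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.85, 0.8]])
train_y = np.array([0, 0, 1, 1])
print(apnn_classify(np.array([0.82, 0.85]), train_x, train_y))  # -> 1
```

In a real memristive crossbar the distance computation and thresholding happen in the analog domain; the model above only mirrors the functional behaviour.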

Related Research

Recent years have witnessed the great success of deep neural networks in many research areas. The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the capacity for logical reasoning. However, concrete logical reasoning is critical to many theoretical and practical problems. In this paper, we propose the Neural Logic Network (NLN), a dynamic neural architecture that builds its computational graph according to the input logical expression. It learns basic logical operations as neural modules and conducts propositional logical reasoning through the network for inference. Experiments on simulated data show that NLN achieves strong performance in solving logical equations. Further experiments on real-world data show that NLN significantly outperforms state-of-the-art models on collaborative filtering and personalized recommendation tasks.
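
The dynamic-graph idea can be sketched roughly as follows, assuming simple two-layer MLP modules over d-dimensional vector "truth values" and an OR derived via De Morgan's law; the paper's exact module design, regularizers, and training scheme are not reproduced here.

```python
import torch
import torch.nn as nn

class LogicModules(nn.Module):
    """Learned logic operators: NOT and AND are small MLPs acting on
    d-dimensional vector 'truth values'; OR is derived via De Morgan's law."""
    def __init__(self, d=16):
        super().__init__()
        self.NOT = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.AND = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))

    def OR(self, a, b):
        return self.NOT(self.AND(torch.cat([self.NOT(a), self.NOT(b)], -1)))

def evaluate(expr, assign, mods):
    """Recursively build the computational graph from a nested expression,
    e.g. ('and', ('not', 'a'), 'b'); leaves look up variable embeddings."""
    if isinstance(expr, str):
        return assign[expr]
    op, *args = expr
    if op == 'not':
        return mods.NOT(evaluate(args[0], assign, mods))
    a, b = (evaluate(e, assign, mods) for e in args)
    return mods.AND(torch.cat([a, b], -1)) if op == 'and' else mods.OR(a, b)

# Example: the graph for (NOT a) AND b is assembled dynamically per input.
mods = LogicModules()
assign = {'a': torch.randn(16), 'b': torch.randn(16)}
out = evaluate(('and', ('not', 'a'), 'b'), assign, mods)
```
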
Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLNs is computationally intensive, making the industrial-scale application of MLNs very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, using graph neural networks for variational inference in MLNs. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.
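
A highly simplified view of the variational scheme follows, assuming a plain embedding table in place of the GNN encoder, a single grounded Horn rule, and a mean-field Bernoulli posterior per fact; the entropy term of the ELBO and the full ExpressGNN architecture are omitted.

```python
import torch
import torch.nn as nn

class VariationalMLN(nn.Module):
    """Mean-field Bernoulli posterior over grounded facts, parameterized by
    entity embeddings (a plain table here; a GNN encoder in the full model)."""
    def __init__(self, n_entities, d=32):
        super().__init__()
        self.emb = nn.Embedding(n_entities, d)
        self.score = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def q_prob(self, heads, tails):
        """q(fact = 1) for grounded facts given by (head, tail) entity pairs."""
        h, t = self.emb(heads), self.emb(tails)
        return torch.sigmoid(self.score(torch.cat([h, t], -1))).squeeze(-1)

def rule_term(p_body, p_head, rule_weight=1.0):
    """Expected log-potential of a grounded Horn rule body -> head: under the
    mean-field posterior it is violated with probability p_body * (1 - p_head)."""
    return rule_weight * (1 - p_body * (1 - p_head))

# Example: beliefs for two grounded facts feeding one body -> head rule.
model = VariationalMLN(n_entities=10)
p = model.q_prob(torch.tensor([0, 2]), torch.tensor([1, 3]))
loss = -rule_term(p[0], p[1])  # maximize rule satisfaction (entropy term omitted)
```
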
The recent demonstration of current-driven magnetic domain wall logic [Z. Luo et al., Nature 579:214] was based on a three-input logic gate that was identified as a reconfigurable NAND/NOR function. We reinterpret this logic gate as a minority gate within the context of threshold logic, enabling a domain wall threshold logic paradigm in which the device count can be reduced by 80%. Furthermore, by extending the logic gate to more than three inputs of non-equal weight, an 87% reduction in device count can be achieved.
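
The reinterpretation is easy to state in code: a minority gate is a threshold gate that fires when fewer than half of its (weighted) inputs are active, and tying one input low or high reconfigures it between NAND and NOR. A minimal sketch with equal weights and three inputs, names illustrative:

```python
def minority(inputs, weights=None):
    """Threshold-logic minority gate: fires when the weighted count of active
    inputs falls below half of the total weight. With three equal weights this
    matches the gate reinterpreted from the domain wall device."""
    weights = weights or [1] * len(inputs)
    s = sum(w * x for w, x in zip(weights, inputs))
    return int(s < sum(weights) / 2)

# Reconfigurable NAND / NOR from one minority gate by tying the third input:
nand = lambda a, b: minority([a, b, 0])   # third input held low  -> NAND
nor = lambda a, b: minority([a, b, 1])    # third input held high -> NOR

assert [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
assert [nor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 0, 0, 0]
```
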
This paper proposes the implementation of a programmable threshold logic gate (TLG) crossbar array based on modified TLG cells for high-speed processing and computation. Unlike existing architectures, the proposed TLG array operation does not depend on the input signal or timing pulses. The circuit is implemented in TSMC $180\,nm$ CMOS technology. The on-chip area and power dissipation of the simulated $3 \times 4$ TLG array are $1463\,\mu m^2$ and $425\,\mu W$, respectively.
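
Functionally, each column of such an array computes a weighted sum of the shared input lines against a programmable threshold. A behavioural sketch with ideal devices and illustrative weights and thresholds; the paper's cell-level circuit is not modelled:

```python
import numpy as np

def tlg_array(x, W, T):
    """Behavioural model of a TLG crossbar: gate j outputs 1 when the weighted
    sum of the shared inputs meets its programmable threshold T[j]."""
    return (x @ W >= T).astype(int)

# A 3x4 array as in the simulated design: 3 inputs drive 4 programmable gates.
x = np.array([1, 0, 1])
W = np.array([[1, 1, 2, 1],    # illustrative programmed weights
              [1, 1, 1, 1],
              [1, 2, 1, 1]])
T = np.array([2, 3, 2, 1])     # illustrative per-gate thresholds
print(tlg_array(x, W, T))      # -> [1 1 1 1] for this input pattern
```
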
We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It also depends, however, on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation, we ask: can the desirable properties of maxout units be preserved while improving their invariance properties? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100, and SVHN).
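
The probout idea can be sketched as follows, under the assumption that the activations in a group are turned into a categorical distribution by a softmax with inverse temperature lam and one is sampled during training; falling back to the deterministic max at test time is a simplification made here, not necessarily the paper's evaluation rule.

```python
import numpy as np

def probout(z, lam=1.0, train=True, rng=None):
    """One probout group: activations are mapped to a categorical distribution
    by a softmax with inverse temperature lam, and one is sampled in training;
    at test time this sketch simply falls back to the deterministic max."""
    if not train:
        return z.max()
    rng = rng or np.random.default_rng()
    p = np.exp(lam * (z - z.max()))   # numerically stable softmax
    p /= p.sum()
    return rng.choice(z, p=p)         # stochastic pooling over the group

# One group of k = 4 linear activations; sampling replaces the hard max.
z = np.array([0.3, 1.2, -0.5, 0.9])
print(probout(z, lam=2.0))
```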
