Logical relations are pervasive in human activities. Humans use them to make judgements and decisions according to various conditions, which are embodied in the form of if-then rules. As an important kind of cognitive intelligence, representing and storing logical relations correctly in computer systems is a prerequisite for automatic judgement and decision making, especially in high-risk domains such as medical diagnosis. However, current numeric ANN (Artificial Neural Network) models excel at perceptual intelligence such as image recognition but perform poorly at cognitive intelligence such as logical representation, which blocks the further application of ANNs. To address this, researchers have tried to design logical ANN models that represent and store logical relations. Although there have been advances in this area, recent models still have a drawback: their structures do not map directly onto logical relations, so the corresponding logical relations cannot be read out from the network structure. Therefore, in order to represent logical relations more clearly in the network structure and to read them back out of it, this paper proposes a novel logical ANN model with new logical neurons and links designed for logical representation. Compared with recent logical ANN models, the proposed model corresponds more clearly to logical relations through its more direct mapping, so logical relations can be read out by following the connection patterns of the network structure. In addition, fewer neurons are used.
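The abstract does not spell out the neuron design, but the core idea of mapping an if-then rule onto a network structure can be illustrated with a minimal sketch. The class name, threshold scheme, and rule below are assumptions for illustration, not the paper's actual model: a rule such as "if fever and cough then flu" becomes one conjunction neuron whose input links are the condition neurons and whose output link is the conclusion neuron, so the rule can be read back purely from the connection pattern.

```python
# Minimal sketch of reading an if-then rule out of a network structure.
# All names (AndNeuron, fever, cough, flu) are illustrative assumptions,
# not the model proposed in the paper.

class AndNeuron:
    """Fires (outputs 1) only when every input neuron fires."""
    def __init__(self, name, inputs, output):
        self.name = name
        self.inputs = inputs   # condition neurons (rule antecedents)
        self.output = output   # conclusion neuron (rule consequent)

    def activate(self, values):
        # values: dict mapping input names to 0/1 truth values
        return int(all(values[i] for i in self.inputs))

    def read_rule(self):
        # The rule is recoverable from the connection pattern alone.
        return f"IF {' AND '.join(self.inputs)} THEN {self.output}"

rule = AndNeuron("r1", inputs=["fever", "cough"], output="flu")
print(rule.activate({"fever": 1, "cough": 1}))  # 1 -> conclusion fires
print(rule.read_rule())  # IF fever AND cough THEN flu
```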
We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions on an entailment prediction task. We use this task to compare a series of architectures …
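For a concrete sense of the task (the example below is illustrative and not drawn from the dataset itself): given two propositional formulas A and B, a model must predict whether A entails B. A brute-force truth-table check makes the ground truth precise:

```python
from itertools import product

def entails(a, b, variables):
    """Return True iff formula `a` entails formula `b`.

    `a` and `b` are functions from an assignment dict to bool;
    entailment holds when every assignment satisfying `a` satisfies `b`.
    """
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if a(env) and not b(env):
            return False
    return True

# (p AND q) entails (p OR q), but not vice versa.
a = lambda e: e["p"] and e["q"]
b = lambda e: e["p"] or e["q"]
print(entails(a, b, ["p", "q"]))  # True
print(entails(b, a, ["p", "q"]))  # False
```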
Appel and McAllester's step-indexed logical relations have proven to be a simple and effective technique for reasoning about programs in languages with semantically interesting types, such as general recursive types and general reference types. However, …
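As a reminder of the technique (this is the standard formulation for recursive types in the Appel–McAllester style, not necessarily the exact definitions used in this paper): the value relation is indexed by the number of computation steps k remaining, and the index strictly decreases at each unfolding of a recursive type, which makes the otherwise circular definition well-founded:

```latex
% Step-indexed value relation for recursive types (standard formulation).
% (k, v) is in the relation when v behaves like a value of the type
% for at least k more computation steps.
\mathcal{V}\llbracket \mu\alpha.\,\tau \rrbracket =
  \{\, (k,\ \mathsf{fold}\ v) \mid
      \forall j < k.\ (j,\ v) \in
        \mathcal{V}\llbracket \tau[\mu\alpha.\,\tau/\alpha] \rrbracket \,\}
```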
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive …
The AMR (Abstract Meaning Representation) formalism for representing the meaning of natural language sentences was not designed to deal with scope and quantifiers. By extending AMR with indices for contexts and formulating constraints on these contexts, …
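To see the scope problem the extension targets (the annotation below is a plausible AMR for illustration; corpus conventions may differ): "Every boy did not leave" has two readings, "no boy left" (∀¬) and "not every boy left" (¬∀), yet plain AMR yields a single graph with no way to order the quantifier and the negation. A minimal sketch parsing such a graph with the penman library:

```python
import penman  # pip install penman

# One AMR for "Every boy did not leave" -- the graph does not say
# whether "every" scopes over the negation or vice versa.
# (The :mod (e / every) annotation is an illustrative convention.)
amr = """
(l / leave-01
   :polarity -
   :ARG0 (b / boy
            :mod (e / every)))
"""

graph = penman.decode(amr)
for triple in graph.triples:
    print(triple)
# ('l', ':instance', 'leave-01'), ('l', ':polarity', '-'), ...
```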
Pitts and Stark's $\nu$-calculus is a paradigmatic total language for studying the problem of contextual equivalence in higher-order languages with name generation. Models for the $\nu$-calculus that validate basic equivalences concerning names may be c…
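A standard example of the kind of equivalence at stake is the well-known private-name test from Pitts and Stark (stated here as a reminder, not as this paper's contribution): a function that compares its argument with a freshly generated private name is contextually equivalent to the constantly-false function, because no context can ever supply that name:

```latex
% The private-name test: n is fresh and never escapes, so no caller
% can make the equality succeed.
\nu n.\ \lambda x.\ (x = n) \;\approx_{\mathrm{ctx}}\; \lambda x.\ \mathsf{false}
\;:\; \nu \to o
```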