
A Logical Neural Network Structure With More Direct Mapping From Logical Relations

Posted by: Gang Wang
Publication date: 2021
Research field: Informatics Engineering
Language: English
Author: Gang Wang





Logical relations are ubiquitous in human activities. Humans use them to make judgements and decisions under various conditions, embodied in the form of "if-then" rules. As an important kind of cognitive intelligence, correctly representing and storing logical relations in computer systems is a prerequisite for automatic judgement and decision making, especially in high-risk domains such as medical diagnosis. However, current numeric ANN (Artificial Neural Network) models excel at perceptual intelligence, such as image recognition, but not at cognitive intelligence, such as logical representation, which blocks the further application of ANNs. To address this, researchers have tried to design logical ANN models that represent and store logical relations. Although this research area has seen some advances, recent models still have a disadvantage: their structures do not map directly onto logical relations, so the corresponding logical relations cannot be read back out of the network structure. Therefore, in order to represent logical relations more clearly in the neural network structure and to read them back out, this paper proposes a novel logical ANN model with new logical neurons and links designed for logical representation. Compared with recent logical ANN models, the proposed model corresponds more clearly to logical relations through a more direct mapping, so logical relations can be read out by following the connection patterns of the network structure. Additionally, fewer neurons are used.
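The abstract does not spell out the model's construction, so the following is only a minimal sketch of the core idea under stated assumptions: one dedicated logical neuron per if-then rule, computing an AND over its condition links, so that the rule base can be read back out of the wiring alone. All names here (LogicalNeuron, read_rules, infer) are hypothetical illustrations, not the paper's.

    # Minimal sketch (not the paper's implementation): a network whose
    # structure *is* the rule base. Each "logical neuron" asserts one
    # consequent; its incoming links are the antecedents of one rule.

    class LogicalNeuron:
        def __init__(self, name, conditions):
            self.name = name                # the consequent this neuron asserts
            self.conditions = conditions    # antecedent names (incoming links)

        def fire(self, facts):
            # Conjunction: the consequent holds iff every antecedent holds.
            return all(c in facts for c in self.conditions)

    def read_rules(neurons):
        # Each rule maps one-to-one onto a neuron and its incoming links,
        # so the rule base is recoverable from the structure alone.
        return [f"IF {' AND '.join(n.conditions)} THEN {n.name}" for n in neurons]

    def infer(neurons, facts):
        # Forward chaining until no new consequents can be derived.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for n in neurons:
                if n.name not in facts and n.fire(facts):
                    facts.add(n.name)
                    changed = True
        return facts

    # Example: a toy diagnostic rule base.
    net = [LogicalNeuron("flu", ["fever", "cough"]),
           LogicalNeuron("rest_advised", ["flu"])]
    print(read_rules(net))                  # rules read out of the structure
    print(infer(net, {"fever", "cough"}))   # {'fever', 'cough', 'flu', 'rest_advised'}

Reading the rules back works precisely because each rule occupies its own neuron and incoming links; a distributed numeric encoding would not permit this.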




Read also

We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions against an entailment prediction task. We use this task to compare a series of architectures which are ubiquitous in the sequence-processing literature, in addition to a new model class, PossibleWorldNets, which computes entailment as a convolution over possible worlds. Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, that tree-structured neural networks outperform LSTM RNNs due to their enhanced ability to exploit the syntax of logic, and that PossibleWorldNets outperform all benchmarks.
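For intuition, here is a brute-force rendering of the semantic notion that PossibleWorldNets approximate (this is not the model itself, which is a learned architecture): A entails B iff B holds in every truth assignment, i.e. possible world, that satisfies A.

    # A entails B iff no world makes A true and B false.
    from itertools import product

    def entails(A, B, variables):
        # A, B: functions from a world (dict var -> bool) to bool.
        for values in product([False, True], repeat=len(variables)):
            world = dict(zip(variables, values))
            if A(world) and not B(world):
                return False        # found a counter-world
        return True

    # Example: (p AND q) entails (p OR q), but not conversely.
    A = lambda w: w["p"] and w["q"]
    B = lambda w: w["p"] or w["q"]
    print(entails(A, B, ["p", "q"]))  # True
    print(entails(B, A, ["p", "q"]))  # False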
Derek Dreyer (2011)
Appel and McAllester's step-indexed logical relations have proven to be a simple and effective technique for reasoning about programs in languages with semantically interesting types, such as general recursive types and general reference types. However, proofs using step-indexed models typically involve tedious, error-prone, and proof-obscuring step-index arithmetic, so it is important to develop clean, high-level, equational proof principles that avoid mention of step indices. In this paper, we show how to reason about binary step-indexed logical relations in an abstract and elegant way. Specifically, we define a logic LSLR, which is inspired by Plotkin and Abadi's logic for parametricity, but also supports recursively defined relations by means of the modal "later" operator from Appel, Melliès, Richards, and Vouillon's "very modal model" paper. We encode in LSLR a logical relation for reasoning relationally about programs in call-by-value System F extended with general recursive types. Using this logical relation, we derive a set of useful rules with which we can prove contextual equivalence and approximation results without counting steps.
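For context, a textbook-style sketch of the step-indexed machinery that LSLR abstracts away from (simplified; not LSLR's actual definitions): the value relation at a recursive type is defined by decreasing the step index, and the "later" modality internalizes that decrease,

    (\mathsf{fold}\,v_1, \mathsf{fold}\,v_2) \in V_k[\mu\alpha.\tau] \iff \forall j<k.\ (v_1, v_2) \in V_j[\tau[\mu\alpha.\tau/\alpha]]
    \models_k \triangleright\varphi \iff \forall j<k.\ \models_j \varphi

so recursive definitions of relations become well-founded, while proofs conducted inside the logic never mention the index k.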
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning over text. In this paper, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning, extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into an EASY set, with the rest forming a HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, achieving high accuracy on the EASY set. However, they struggle on the HARD set, with performance close to that of random guessing, indicating that more research is needed to essentially enhance the logical reasoning ability of current models.
Johan Bos (2019)
The AMR (Abstract Meaning Representation) formalism for representing the meaning of natural language sentences was not designed to deal with scope and quantifiers. By extending AMR with indices for contexts and formulating constraints on these contexts, a formalism is derived that makes correct predictions for inferences involving negation and bound variables. The attractive core predicate-argument structure of AMR is preserved. The resulting framework is similar to that of Discourse Representation Theory.
Pitts and Stark's ν-calculus is a paradigmatic total language for studying the problem of contextual equivalence in higher-order languages with name generation. Models for the ν-calculus that validate basic equivalences concerning names may be constructed using functor categories or nominal sets, with a dynamic allocation monad used to model computations that may allocate fresh names. If recursion is added to the language and one attempts to adapt the models from (nominal) sets to (nominal) domains, however, the direct-style construction of the allocation monad no longer works. This issue has previously been addressed by using a monad that combines dynamic allocation with continuations, at some cost to abstraction. This paper presents a direct-style model of a ν-calculus-like language with recursion using the novel framework of proof-relevant logical relations, in which logical relations also contain objects (or proofs) demonstrating the equivalence of (the semantic counterparts of) programs. Apart from providing a fresh solution to an old problem, this work provides an accessible setting in which to introduce the use of proof-relevant logical relations, free of the additional complexities associated with their use for more sophisticated languages.
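Schematically (a simplification of the abstract's description, not the paper's precise definitions), the shift is from relations as propositions to relations as sets of witnesses:

    \text{ordinary:}\quad R \subseteq A \times B \qquad \text{proof-relevant:}\quad R : A \times B \to \mathbf{Set}

so R(a, b) is no longer a mere truth value but a set of proof objects demonstrating how a and b are related, and these objects can themselves be manipulated in the model.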


