The Logical Principles of Grammatical Induction

الأسس المنطقية للاستقراء النحوي

Publication date: 2017
Research language: Arabic

Since language is a natural, concrete phenomenon, it naturally became an object of induction: it was subjected to observation and testing in an attempt to arrive at rules that capture its particular phenomena and organize them into general regularities. If we examine the linguistic material that the grammarians investigated, we find that their work involved both complete and incomplete induction, in line with Aristotle's method of induction. Yet they also departed from that method, in keeping with the nature of the Islamic mode of thinking, and so developed a distinctive inductive method of their own.

Related research

In modern natural language processing pipelines, it is common practice to "pretrain" a generative language model on a large corpus of text, and then to "finetune" the created representations by continuing to train them on a discriminative textual inference task. However, it is not immediately clear whether the logical meaning necessary to model logical entailment is captured by language models in this paradigm. We examine this pretrain-finetune recipe with language models trained on a synthetic propositional language entailment task, and present results on test sets probing models' knowledge of axioms of first-order logic.
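As a concrete illustration of the pretrain-finetune recipe this abstract describes, here is a minimal Python sketch: it loads a pretrained checkpoint and continues training it on a toy entailment task. The checkpoint name, label scheme, and the three propositional examples are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of the pretrain-finetune recipe (assumed setup, not the paper's).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed labels: 0 = not entailed, 1 = entailed
)

# Toy propositional pairs standing in for a real synthetic entailment corpus.
pairs = [
    ("p and q", "p", 1),
    ("p or q", "p and q", 0),
    ("not ( p or q )", "not p", 1),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for premise, hypothesis, label in pairs:
    batch = tokenizer(premise, hypothesis, return_tensors="pt")
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()        # finetuning: gradients flow into the pretrained weights
    optimizer.step()
    optimizer.zero_grad()
```

In a real run the toy list would be replaced by a full synthetic corpus and a held-out probe set, but the training loop has this same shape.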
Syntax is the spirit of a language, the core of its movement, and its living heart. Syntactic interpretation, a purely mental procedure, was adopted by the Arab grammarians, who devoted great effort to it in order to explain the foundations of syntax and morphology. This research takes the most important views of classical and modern grammarians as a starting point for studying syntactic interpretation at two levels. The first level concerns concept and approach: it traces the notion of reasoning in grammar, the implications of the term for grammatical rules, and its motivation, which is grounded in language as actually used. The second level is referential: it examines the elements of interpretation and the Arab grammarians' views of it, and sets out the rules that had been established up to the sixth century AH.
Grammatical gender may be determined by semantics, orthography, or phonology, or it may even be arbitrary. Identifying patterns in the factors that govern noun genders can be useful for language learners and for understanding innate linguistic sources of gender bias. Traditional manual rule-based approaches may be substituted by more accurate and scalable, but harder-to-interpret, computational approaches for predicting gender from typological information. In this work, we propose interpretable gender classification models for French, which obtain the best of both worlds. We present high-accuracy neural approaches that are augmented by a novel global-surrogate-based approach for explaining predictions. We introduce 'auxiliary attributes' to provide tunable explanation complexity.
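To make the contrast between interpretable and learned classifiers concrete, here is a minimal sketch of gender prediction from orthography alone: a linear model over character n-grams whose coefficients can be read directly. The toy word list is invented for illustration; the paper's actual models, surrogate explanations, and 'auxiliary attributes' are neural and considerably more elaborate.

```python
# Sketch of an interpretable orthographic gender classifier (toy data, assumed setup).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented French nouns and their genders (f/m), for illustration only.
words  = ["maison", "voiture", "nation", "garage", "livre", "fromage"]
labels = ["f", "f", "f", "m", "m", "m"]

# Character 2-3 grams expose suffix cues such as "-tion" or "-age".
vec = CountVectorizer(analyzer="char", ngram_range=(2, 3))
clf = LogisticRegression().fit(vec.fit_transform(words), labels)

# Every learned weight maps back to a readable character n-gram.
for ngram, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    if abs(weight) > 0.3:
        print(ngram, round(weight, 2))

print(clf.predict(vec.transform(["voyage"])))  # the "-age" cue dominates here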
Recently, language models (LMs) have achieved strong performance on many NLU tasks, which has spurred widespread interest in their possible applications in scientific and social domains. However, LMs have faced much criticism over whether they are truly capable of reasoning in NLU. In this work, we propose a diagnostic method for first-order logic (FOL) reasoning together with a newly proposed benchmark, LogicNLI. LogicNLI is an NLI-style dataset that effectively disentangles the target FOL reasoning from commonsense inference and can be used to diagnose LMs from four perspectives: accuracy, robustness, generalization, and interpretability. Experiments on BERT, RoBERTa, and XLNet have uncovered the weaknesses of these LMs in FOL reasoning, which motivates future work on enhancing their reasoning ability.
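A hedged sketch of the kind of probe such a diagnostic involves: feed an NLI-finetuned model premise/hypothesis pairs whose labels follow from a first-order rule alone. The checkpoint and the two hand-written probes below are assumptions for illustration; LogicNLI itself contains systematically generated FOL instances.

```python
# Probing an off-the-shelf NLI model with FOL-style examples (illustrative probes).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"  # assumed checkpoint; labels: contradiction/neutral/entailment
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

probes = [
    # Universal instantiation: forall x. P(x) -> Q(x), P(a) |= Q(a)
    ("Every philosopher is mortal. Plato is a philosopher.",
     "Plato is mortal.", "entailment"),
    # Existential does not license a universal claim.
    ("Some birds can swim.",
     "All birds can swim.", "neutral"),
]

for premise, hypothesis, expected in probes:
    batch = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        pred = model(**batch).logits.argmax(-1).item()
    print(model.config.id2label[pred], "| expected:", expected)
```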
Accurately dealing with any type of ambiguity is a major task in Natural Language Processing, and great advances have recently been made thanks to context-dependent language models and the use of word and sentence embeddings. In this context, our work aimed at determining how the popular language representation model BERT handles ambiguity of nouns in grammatical number and gender across different languages. We show that models trained on one specific language achieve better disambiguation results than multilingual models. Ambiguity is also generally handled better in grammatical number than in grammatical gender, with greater distances between individual senses in direct comparisons. The overall results further show that the amount of data needed for training and applying monolingual models should not be underestimated.
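The kind of comparison the abstract describes can be sketched as follows: embed the same ambiguous surface form in two contexts with BERT and measure how close its contextual vectors are. The example sentences (a number-ambiguous English noun) and the choice of monolingual checkpoint are illustrative assumptions.

```python
# Sketch: contextual-embedding distance for a number-ambiguous noun (assumed setup).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased").eval()

def vector_for(sentence: str, word: str) -> torch.Tensor:
    """Mean of the hidden states of the subword tokens spelling `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    toks = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(toks)) if toks[i:i + len(ids)] == ids)
    return hidden[start:start + len(ids)].mean(0)

# "sheep" is ambiguous in number; context disambiguates it.
a = vector_for("The sheep is grazing alone.", "sheep")      # singular reading
b = vector_for("The sheep are grazing together.", "sheep")  # plural reading
print(torch.cosine_similarity(a, b, dim=0).item())
```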