Most natural languages have a predominant or fixed word order. For example, in English the word order is usually Subject-Verb-Object. This work attempts to explain this phenomenon, as well as other typological findings regarding word order, from a functional perspective. In particular, we examine whether a fixed word order provides a functional advantage, which would explain why such languages are prevalent. To this end, we consider an evolutionary model of language and demonstrate, both theoretically and using genetic algorithms, that a language with a fixed word order is optimal. We also show that adding information to the sentence, such as case markers and a noun-verb distinction, reduces the need for a fixed word order, in accordance with the typological findings.
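To make the genetic-algorithm claim concrete, the sketch below is a minimal toy, not the paper's actual model: it assumes each agent's "grammar" is a probability distribution over the six S/V/O orders, a sum-of-squares communicative fitness (the chance speaker and hearer converge on the same order), truncation selection, and Gaussian mutation. All of these modeling choices are illustrative assumptions.

```python
import random

# Toy sketch (assumed setup, not the paper's model): a grammar is a weight
# vector over the six possible subject/verb/object orders. Without case
# markers, only agreement on a single order lets a hearer recover who did
# what to whom, so selection should concentrate mass on one order.

ORDERS = ["SVO", "SOV", "VSO", "VOS", "OSV", "OVS"]

def fitness(weights):
    # Probability that speaker and hearer sample the same order: sum of p_i^2.
    # Maximized (= 1.0) exactly when a single order carries all the mass,
    # i.e. when word order is fixed.
    total = sum(weights)
    probs = [w / total for w in weights]
    return sum(p * p for p in probs)

def mutate(weights, rate=0.1):
    # Perturb each weight with Gaussian noise, keeping weights positive.
    return [max(1e-6, w + random.gauss(0, rate)) for w in weights]

def evolve(pop_size=50, generations=200):
    pop = [[random.random() for _ in ORDERS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    best = max(pop, key=fitness)
    total = sum(best)
    return dict(zip(ORDERS, (w / total for w in best))), fitness(best)

if __name__ == "__main__":
    dist, fit = evolve()
    print({k: round(v, 2) for k, v in dist.items()})
    print("fitness:", round(fit, 2))  # approaches 1.0 as one order dominates
```

In this toy, extending the meaning signal with case markers would make every order equally recoverable, flattening the fitness landscape so no single order is favored, which is one way to read the abstract's second claim about case marking reducing the pressure toward fixed word order.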