Despite achieving encouraging results, neural Referring Expression Generation models are often thought to lack transparency. We probed neural Referential Form Selection (RFS) models to determine to what extent state-of-the-art RFS models learn and capture the linguistic features that influence referring expression (RE) form. The results of 8 probing tasks show that all the defined features were learned to some extent. The probing tasks pertaining to referential status and syntactic position exhibited the highest performance, while the lowest performance came from probing models designed to predict discourse-structure properties beyond the sentence level.
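The probing setup described above trains a simple diagnostic classifier to predict a linguistic feature from a model's internal representations; if the classifier succeeds, the feature is (to some extent) encoded. A minimal sketch, assuming we already have fixed-size hidden states from an RFS encoder (random vectors stand in here) and binary labels for a feature such as referential status (new vs. given):

```python
# Probing-classifier sketch. The "RFS encoder states" and feature
# labels below are synthetic stand-ins, not data from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, hidden_dim = 200, 64
X = rng.normal(size=(n_examples, hidden_dim))   # stand-in encoder states
y = rng.integers(0, 2, size=n_examples)         # stand-in feature labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A linear probe: high accuracy suggests the feature is linearly
# recoverable from the representations.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, probe.predict(X_te))
print(f"probe accuracy: {acc:.2f}")
```

With random stand-in vectors the probe hovers near chance; with real RFS representations, per-feature accuracies like those reported for the 8 tasks indicate how strongly each feature is encoded.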