We describe the second SIGMORPHON shared task on unsupervised morphology: the goal of the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering is to cluster word types from a raw text corpus into paradigms. To this end, we release corpora for 5 development and 9 test languages, as well as gold partial paradigms for evaluation. We receive 14 submissions from 4 teams that follow different strategies, and the best performing system is based on adaptor grammars. Results vary significantly across languages. However, all systems are outperformed by a supervised lemmatizer, implying that there is still room for improvement.
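To make the task setup concrete, below is a minimal, illustrative sketch of paradigm clustering on a toy English vocabulary. The crude suffix-stripping baseline and the best-match F1 scoring function are assumptions made here for exposition only; they are not the shared task's official baselines or evaluation script.

```python
# Illustrative sketch only: a toy view of unsupervised paradigm clustering.
# The corpus, the suffix-stripping baseline, and the best-match F1 metric
# below are assumptions for exposition, not the shared task's own tooling.
from collections import defaultdict


def naive_cluster(word_types):
    """Group word types by a crude 'stem' obtained by stripping a few common
    English suffixes; a stand-in for a real unsupervised system."""
    clusters = defaultdict(set)
    for w in word_types:
        stem = w
        for suf in ("ing", "ed", "es", "s"):
            if w.endswith(suf) and len(w) > len(suf) + 2:
                stem = w[: -len(suf)]
                break
        clusters[stem].add(w)
    return list(clusters.values())


def best_match_f1(predicted, gold_paradigms):
    """Score each gold partial paradigm against its best-overlapping predicted
    cluster and average the F1 scores (a simplified, assumed metric)."""
    scores = []
    for gold in gold_paradigms:
        best = 0.0
        for pred in predicted:
            overlap = len(gold & pred)
            if overlap == 0:
                continue
            p, r = overlap / len(pred), overlap / len(gold)
            best = max(best, 2 * p * r / (p + r))
        scores.append(best)
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Word types as they might be extracted from a raw text corpus.
    types = {"walk", "walks", "walked", "walking", "dog", "dogs", "quickly"}
    # Gold partial paradigms: sets of inflected forms of the same lexeme.
    gold = [{"walk", "walks", "walked", "walking"}, {"dog", "dogs"}]
    pred = naive_cluster(types)
    print(pred)
    print(f"best-match F1: {best_match_f1(pred, gold):.2f}")
```

On this toy example the baseline happens to recover both gold paradigms exactly; on real raw-text corpora, irregular inflection and richer morphology make the problem far harder, which is what the submitted systems address.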