We present the winning entry to the Multilingual Lexical Normalization (MultiLexNorm) shared task at W-NUT 2021 (van der Goot et al., 2021a), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. We base our solution on a pre-trained byte-level language model, ByT5 (Xue et al., 2021a), which we further pre-train on synthetic data and then fine-tune on authentic normalization data. Our system achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. The source code is released at https://github.com/ufal/multilexnorm2021 and the fine-tuned models at https://huggingface.co/ufal.
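The abstract describes a seq2seq formulation: a pre-trained byte-level model (ByT5) is further pre-trained on synthetic data and then fine-tuned on authentic (noisy, normalized) pairs. The following is a minimal sketch of that formulation, not the authors' actual pipeline (which is at the GitHub link above): the toy training pair, the base checkpoint "google/byt5-small", and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune ByT5 on (noisy, normalized) pairs, then normalize by generation.
# Toy data and hyperparameters are illustrative; see github.com/ufal/multilexnorm2021 for the real setup.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# One (noisy, normalized) pair standing in for authentic MultiLexNorm training data.
pairs = [("u r gr8", "you are great")]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for noisy, norm in pairs:
    batch = tokenizer(noisy, return_tensors="pt")
    labels = tokenizer(norm, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# ByT5 operates directly on UTF-8 bytes, so normalization is plain byte-level generation
# with no subword vocabulary involved.
model.eval()
out = model.generate(**tokenizer("c u tmrw", return_tensors="pt"), max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Working at the byte level sidesteps the out-of-vocabulary problem that subword tokenizers face on noisy social media text, which is presumably why a byte-level model suits this task across 11 languages.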