Social media is notoriously difficult for existing natural language processing tools to process, because of spelling errors, non-standard words, shortenings, and non-standard capitalization and punctuation. One method to circumvent these issues is to normalize the input data before processing. Most previous work has focused on a single language, mostly English. In this paper, we are the first to propose a model for cross-lingual normalization, with which we participate in the WNUT 2021 shared task. To this end, we use MoNoise as a starting point and make a simple adaptation for cross-lingual application. Our proposed model outperforms the leave-as-is baseline provided by the organizers, which simply copies the input. Furthermore, we explore a completely different model that converts the task into a sequence labeling task. Performance of this second system is low, as our implementation does not take capitalization into account.
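One common way to cast normalization as sequence labeling, sketched below, is to assign each raw token a label: a placeholder such as "KEEP" when the token is already standard, and the normalized form itself otherwise. This is an illustrative reconstruction of the general technique, not the shared-task code; the function name and label scheme are assumptions.

```python
# Illustrative sketch: lexical normalization as sequence labeling.
# Each raw token is labeled "KEEP" if it needs no change, otherwise
# its normalized form becomes the label. Note this word-level scheme
# (like the abstract's second system) ignores capitalization unless
# case-changed labels are added explicitly.

def to_labels(raw_tokens, norm_tokens):
    """Pair each raw token with a sequence label (assumes 1-to-1 alignment)."""
    return ["KEEP" if r == n else n for r, n in zip(raw_tokens, norm_tokens)]

raw = ["u", "r", "the", "best"]
norm = ["you", "are", "the", "best"]
print(to_labels(raw, norm))  # ['you', 'are', 'KEEP', 'KEEP']
```

A sequence labeling model (e.g., a fine-tuned multilingual transformer tagger) would then be trained to predict these labels, and normalization at inference time is just replacing each token whose label is not "KEEP".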