Sensitivity of deep neural models to input noise is known to be a challenging problem. In NLP, model performance often deteriorates with naturally occurring noise, such as spelling errors. To mitigate this issue, models may leverage artificially noised data. However, the amount and type of generated noise have so far been determined arbitrarily. We therefore propose to model the errors statistically from grammatical-error-correction corpora. We present a thorough evaluation of the robustness of several state-of-the-art NLP systems in multiple languages, with tasks including morpho-syntactic analysis, named entity recognition, neural machine translation, a subset of the GLUE benchmark, and reading comprehension. We also compare two approaches to addressing the performance drop: a) training the NLP models with noised data generated by our framework; and b) reducing the input noise with an external system for natural language correction. The code is released at https://github.com/ufal/kazitext.
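The abstract describes estimating error statistics from a grammatical-error-correction corpus and then using them to noise clean training data. The following is a minimal illustrative sketch of that idea, not the actual kazitext implementation: it derives a crude character-level error rate from aligned (noisy, clean) pairs and applies substitutions, deletions, and insertions at that rate. All function names here are hypothetical.

```python
import random
import string

def estimate_error_rate(corpus):
    """Estimate a character-level error rate from aligned
    (noisy, clean) sentence pairs, e.g. from a GEC corpus.
    Uses positional mismatches as a crude proxy for edits."""
    errors = total = 0
    for noisy, clean in corpus:
        total += max(len(noisy), len(clean))
        errors += sum(a != b for a, b in zip(noisy, clean))
        errors += abs(len(noisy) - len(clean))
    return errors / total if total else 0.0

def noise(text, p, rng=None):
    """Corrupt `text`: with total probability p per character,
    apply a deletion, substitution, or insertion (equal shares)."""
    rng = rng or random.Random(0)
    out = []
    for ch in text:
        r = rng.random()
        if r < p / 3:
            continue                                   # deletion
        elif r < 2 * p / 3:
            out.append(rng.choice(string.ascii_lowercase))  # substitution
        elif r < p:
            out.append(ch)
            out.append(rng.choice(string.ascii_lowercase))  # insertion
        else:
            out.append(ch)                             # keep as-is
    return "".join(out)

# Estimate a rate from a toy aligned pair, then noise clean text with it.
rate = estimate_error_rate([("helo", "hello")])
corrupted = noise("the quick brown fox", rate)
```

A realistic error model would go further, conditioning on error type (spelling, morphology, punctuation) and position as estimated from the corpus annotations, rather than a single flat rate.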