Different linearizations have been proposed to cast dependency parsing as sequence labeling and solve the task as: (i) a head selection problem, (ii) finding a representation of the token arcs as bracket strings, or (iii) associating partial transition sequences of a transition-based parser to words. Yet, there is little understanding of how these linearizations behave in low-resource setups. Here, we first study their data efficiency, simulating data-restricted setups on a diverse set of rich-resource treebanks. Second, we test whether such differences manifest in truly low-resource setups. The results show that head selection encodings are more data-efficient and perform better in an ideal (gold) framework, but that this advantage largely vanishes in favour of bracketing formats when the running setup resembles a real-world low-resource configuration.
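To make linearization (i) concrete, the sketch below shows a simplified variant of a relative-offset head-selection encoding: each token receives a label giving the signed distance to its head. This is an illustrative assumption, not the paper's exact scheme; published encodings of this family also use PoS-relative offsets and attach dependency relation labels.

```python
# Toy head-selection linearization: label each token with the signed
# relative position of its head (ROOT for the sentence root).
# Simplified for illustration; real encodings are richer.

def encode_heads(heads):
    """heads[i] is the 1-based head of token i+1; 0 marks the root."""
    labels = []
    for i, h in enumerate(heads, start=1):
        labels.append("ROOT" if h == 0 else f"{h - i:+d}")
    return labels

def decode_heads(labels):
    """Invert encode_heads: recover the 1-based head indices."""
    return [0 if lab == "ROOT" else i + int(lab)
            for i, lab in enumerate(labels, start=1)]

# Example: "The dog barks" with heads [2, 3, 0]
labels = encode_heads([2, 3, 0])   # ['+1', '+1', 'ROOT']
assert decode_heads(labels) == [2, 3, 0]
```

Once the tree is flattened this way, any off-the-shelf sequence labeler can predict one label per token, which is what makes the data-efficiency comparison across linearizations possible.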