We address the sampling bias and outlier issues in few-shot learning for event detection, a subtask of information extraction. We propose to model the relations between training tasks in episodic few-shot learning by introducing cross-task prototypes. We further propose to enforce prediction consistency among classifiers across tasks to make the model more robust to outliers. Our extensive experiments show consistent improvements on three few-shot learning datasets. The findings suggest that our model is more robust when labeled data for novel event types is limited. The source code is available at http://github.com/laiviet/fsl-proact.
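The abstract names two ingredients: prototype-based episodic classification and a prediction-consistency constraint between a within-task classifier and one built from cross-task prototypes. The following is a minimal PyTorch sketch of those two ideas, not the released implementation (see the repository above): the function names, the symmetric-KL consistency term, and the weight alpha are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): prototypical
# classification within an episode plus a consistency loss against a
# classifier formed from prototypes aggregated across training tasks.
import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, n_classes):
    """Mean support embedding per class (standard prototypical-network step).
    Assumes every class has at least one support example in the episode."""
    return torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])

def proto_logits(query_emb, protos):
    """Negative squared Euclidean distance to each prototype as class logits."""
    return -torch.cdist(query_emb, protos).pow(2)

def episode_loss(query_emb, query_labels, task_protos, cross_task_protos, alpha=1.0):
    """Cross-entropy on the within-task classifier plus a consistency term
    (symmetric KL here, an assumption) tying its predictions to those of a
    classifier built from cross-task prototypes."""
    logits_task = proto_logits(query_emb, task_protos)
    logits_cross = proto_logits(query_emb, cross_task_protos)
    ce = F.cross_entropy(logits_task, query_labels)
    p = F.log_softmax(logits_task, dim=-1)
    q = F.log_softmax(logits_cross, dim=-1)
    consistency = 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean")
    )
    return ce + alpha * consistency
```

In this sketch, cross_task_protos could be obtained, for instance, as a running average of per-class prototypes over episodes; the abstract does not specify the aggregation, so that choice is left open here.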