Meta-learning has achieved great success in leveraging historically learned knowledge to facilitate the learning of new tasks. However, knowledge learned solely from historical tasks, as adopted by current meta-learning algorithms, may not generalize well to testing tasks that are not well supported by the training tasks. This paper studies the low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks by leveraging external knowledge bases. Specifically, we propose KGML, which introduces an additional representation for each sentence learned from its extracted sentence-specific knowledge graph. Extensive experiments on three datasets demonstrate the effectiveness of KGML under both supervised adaptation and unsupervised adaptation settings.
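To make the idea concrete, the following is a minimal, illustrative sketch (not the authors' released code) of a KG-augmented sentence encoder used inside a few-shot episode: each sentence gets a text representation plus an extra representation pooled from entities linked via its sentence-specific knowledge-graph neighborhood, and the fused vector is used in a prototypical-network-style classifier. All module names, dimensions, the mean-pooling of entity embeddings (in place of a graph encoder), and the prototypical-network adaptation are assumptions made for illustration, not the paper's exact architecture.

```python
# Hypothetical sketch of a KG-augmented encoder for low-resource text classification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KGAugmentedEncoder(nn.Module):
    """Fuse a sentence embedding with a pooled embedding of its linked KG entities."""

    def __init__(self, vocab_size=5000, n_entities=1000, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.entity_emb = nn.Embedding(n_entities, dim, padding_idx=0)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, token_ids, entity_ids):
        # token_ids: (batch, seq_len); entity_ids: (batch, max_entities), the entities
        # assumed to be extracted from each sentence's knowledge-graph neighborhood.
        text_repr = self.word_emb(token_ids).mean(dim=1)   # pooled text view
        kg_repr = self.entity_emb(entity_ids).mean(dim=1)  # pooled KG view
        return self.fuse(torch.cat([text_repr, kg_repr], dim=-1))


def episode_loss(encoder, support, query):
    """One few-shot episode: build class prototypes from the support set,
    then classify query sentences by negative squared distance to prototypes."""
    s_tok, s_ent, s_lab = support
    q_tok, q_ent, q_lab = query
    z_s, z_q = encoder(s_tok, s_ent), encoder(q_tok, q_ent)
    classes = s_lab.unique()
    protos = torch.stack([z_s[s_lab == c].mean(0) for c in classes])
    logits = -((z_q.unsqueeze(1) - protos.unsqueeze(0)) ** 2).sum(-1)
    targets = torch.stack([(classes == y).nonzero().squeeze() for y in q_lab])
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    enc = KGAugmentedEncoder()
    # Toy 2-way episode with random token/entity ids, only to show the data flow.
    support = (torch.randint(1, 5000, (4, 12)), torch.randint(1, 1000, (4, 3)),
               torch.tensor([0, 0, 1, 1]))
    query = (torch.randint(1, 5000, (6, 12)), torch.randint(1, 1000, (6, 3)),
             torch.tensor([0, 1, 0, 1, 1, 0]))
    loss = episode_loss(enc, support, query)
    loss.backward()
    print(float(loss))
```

Under these assumptions, the same encoder can serve both the supervised-adaptation setting (fine-tune on the labeled support set) and the unsupervised one (use the prototypes directly without gradient steps on the target task).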