Few-shot relation extraction (FSRE) focuses on recognizing novel relations by learning with merely a handful of annotated instances. Meta-learning has been widely adopted for such a task, which trains on randomly generated few-shot tasks to learn generic data representations. Despite impressive results achieved, existing models still perform suboptimally when handling hard FSRE tasks, where the relations are fine-grained and similar to each other. We argue this is largely because existing models do not distinguish hard tasks from easy ones in the learning process. In this paper, we introduce a novel approach based on contrastive learning that learns better representations by exploiting relation label information. We further design a method that allows the model to adaptively learn how to focus on hard tasks. Experiments on two standard datasets demonstrate the effectiveness of our method.
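The label-aware contrastive idea described above can be sketched as a supervised contrastive loss over sentence embeddings, where instances sharing a relation label act as positives. This is a minimal NumPy illustration of the general technique; the function name, temperature value, and exact formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Label-aware contrastive loss: pull together instances that share a
    relation label, push apart the rest (a generic formulation)."""
    labels = np.asarray(labels)
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature        # scaled pairwise similarities
    np.fill_diagonal(sim, -np.inf)       # an anchor is never its own pair
    # row-wise log-softmax, computed stably
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - row_max - np.log(
        np.exp(sim - row_max).sum(axis=1, keepdims=True))
    # positives = other instances carrying the same relation label
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(
        len(labels), dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    has_pos = pos_counts > 0
    return (per_anchor[has_pos] / pos_counts[has_pos]).mean()

# Toy check: embeddings clustered by label should score a lower loss
# than embeddings where same-label pairs are far apart.
labels = [0, 0, 1, 1]
clustered = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]])
loss_clustered = supervised_contrastive_loss(clustered, labels)
loss_mixed = supervised_contrastive_loss(mixed, labels)
```

In this framing, hard tasks (fine-grained, mutually similar relations) yield high-similarity negatives and therefore larger losses, which is one natural signal a model could use to weight hard tasks more heavily during training.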