Distantly supervised models are very popular for relation extraction, since the distant supervision method yields a large amount of training data without human annotation. In distant supervision, a sentence is considered a source of a tuple if the sentence contains both entities of the tuple. However, this condition is too permissive and does not guarantee the presence of relation-specific information in the sentence. As a result, distantly supervised training data contains considerable noise, which adversely affects the performance of the models. In this paper, we propose a self-ensemble filtering mechanism to filter out noisy samples during the training process. We evaluate the proposed framework on the New York Times dataset, which was obtained via distant supervision. Our experiments with multiple state-of-the-art neural relation extraction models show that the proposed filtering mechanism improves the robustness of the models and increases their F1 scores.
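To make the two ideas concrete, below is a minimal sketch of the permissive distant-supervision labeling rule and of one plausible self-ensemble filter, assuming a standard PyTorch training loop. The function names, the EMA-based ensemble, the confidence threshold, and the warm-up period are illustrative assumptions for exposition, not the authors' exact mechanism.

```python
import torch
import torch.nn.functional as F

def distant_label(sentence, head, tail, relation):
    """Permissive distant-supervision rule: a sentence is labeled with the
    KB relation whenever it merely contains both entities of the tuple,
    even if it expresses no relation between them (the source of noise)."""
    if head in sentence and tail in sentence:
        return relation
    return None

def train_with_self_ensemble_filter(model, loader, optimizer,
                                    epochs=10, decay=0.9,
                                    threshold=0.5, warmup=2):
    """Hypothetical self-ensemble filter: keep an exponential moving
    average (EMA) of each sample's predicted class distribution across
    epochs and, after a warm-up, skip samples whose distant label the
    ensemble no longer supports. Assumes each batch carries unique
    sample ids and that inputs/labels are already on the model's device."""
    ema = {}  # sample_id -> EMA of softmax outputs over epochs
    for epoch in range(epochs):
        for sample_ids, inputs, labels in loader:
            logits = model(inputs)
            probs = F.softmax(logits, dim=-1).detach()
            keep = torch.ones(len(sample_ids), dtype=torch.bool,
                              device=logits.device)
            for i, sid in enumerate(sample_ids):
                sid = int(sid)
                prev = ema.get(sid, probs[i])
                ema[sid] = decay * prev + (1.0 - decay) * probs[i]
                # After warm-up, drop samples whose distant label receives
                # low probability from the ensembled prediction.
                if epoch >= warmup and ema[sid][labels[i]] < threshold:
                    keep[i] = False
            if keep.any():  # train only on the retained samples
                loss = F.cross_entropy(logits[keep], labels[keep])
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```

In this sketch, the EMA smooths out single-epoch prediction noise so that a sample is filtered only when the model consistently disagrees with its distant label, and the warm-up period avoids filtering before the model has learned anything useful.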