Meta-learning has recently been proposed to learn models and algorithms that can generalize from a handful of examples. However, applications to structured prediction and textual tasks pose challenges for meta-learning algorithms. In this paper, we apply two meta-learning algorithms, Prototypical Networks and Reptile, to few-shot Named Entity Recognition (NER), including a method for incorporating language model pre-training and Conditional Random Fields (CRF). We propose a task generation scheme for converting classical NER datasets into the few-shot setting, for both training and evaluation. Using three public datasets, we show these meta-learning algorithms outperform a reasonable fine-tuned BERT baseline. In addition, we propose a novel combination of Prototypical Networks and Reptile.
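Since the abstract only names the two meta-learning algorithms, a minimal sketch may help make them concrete: the class-prototype computation and nearest-prototype classification at the heart of Prototypical Networks, and the Reptile meta-update. This is an illustrative NumPy sketch under assumed names and shapes (prototypes, classify, reptile_update, grad_fn, the toy dimensions), not the paper's implementation.

import numpy as np

# Illustrative sketch only -- not the paper's code.
# Prototypical Networks classify a query embedding by its distance to
# per-class prototypes; Reptile adapts a copy of the weights on one
# task, then nudges the meta-parameters toward the adapted weights.

def prototypes(support_emb, support_labels, num_classes):
    # Class prototype = mean of that class's support embeddings.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def classify(query_emb, protos):
    # Negative squared Euclidean distance to each prototype serves as
    # the logit; predict the class of the nearest prototype.
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return (-d2).argmax(axis=1)

def reptile_update(theta, grad_fn, task, inner_lr=0.1, meta_lr=0.5, steps=5):
    # grad_fn(theta, task) stands in for the gradient of the task loss
    # at theta (an assumption of this sketch).
    adapted = theta.copy()
    for _ in range(steps):
        adapted -= inner_lr * grad_fn(adapted, task)
    # Reptile step: move theta a fraction of the way toward the
    # task-adapted weights.
    return theta + meta_lr * (adapted - theta)

# Toy 2-way 5-shot episode with 16-dimensional token embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 16))
labels = np.repeat([0, 1], 5)
preds = classify(rng.normal(size=(4, 16)), prototypes(support, labels, 2))

In the few-shot NER setting the abstract describes, the token embeddings would come from a pre-trained language model and per-token decisions would be coupled through a CRF layer; both are omitted from this sketch.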