Recognizing named entities in short search engine queries is a difficult task due to their weaker contextual information compared to long sentences. Standard named entity recognition (NER) systems trained on grammatically correct, long sentences fail to perform well on such queries. In this study, we share our efforts towards creating a cleaned and labeled dataset of real Turkish search engine queries (TR-SEQ) and introduce an extended label set to satisfy search engine needs. An NER system is trained by applying the state-of-the-art deep learning method BERT to the collected data, and its high performance on search engine queries is reported. Moreover, we compare our results with state-of-the-art Turkish NER systems.
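To make the approach concrete, the sketch below shows how a BERT model can be framed as a token-classification tagger over a short Turkish query using the Hugging Face transformers library. This is a minimal illustration, not the authors' code: the checkpoint name, label set, and example query are assumptions, and the classification head is untrained until fine-tuned on a labeled dataset such as TR-SEQ.

```python
# Minimal sketch (assumed setup, not the paper's exact configuration):
# tag a short search-engine-style query with a BERT token-classification model.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative label set; the paper introduces an extended set for search engine needs.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model_name = "dbmdz/bert-base-turkish-cased"  # assumed Turkish BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

query = "istanbul hava durumu"  # hypothetical short query
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    # Predictions are meaningful only after fine-tuning the classification head
    # on labeled queries; here the head is randomly initialized.
    print(f"{token}\t{labels[pred]}")
```

In practice, fine-tuning would align word-level labels to BERT's subword tokens and train the model on the annotated queries before running inference like this.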