As a result of unstructured sentences, misspellings, and other errors, finding named entities in a noisy environment such as social media takes much more effort. ParsTwiNER contains about 250k tokens gathered from Persian Twitter, annotated following standard guidelines such as MUC-6 and CoNLL 2003. Inter-annotator agreement, measured with Cohen's kappa coefficient, is 0.95, a high score. In this study, we demonstrate that several state-of-the-art models degrade on this corpus, and we train a new model using parallel transfer learning based on the BERT architecture. Experimental results show that the model performs well on informal Persian as well as formal Persian.
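To make the agreement figure concrete: Cohen's kappa corrects raw agreement for chance, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected from the two annotators' label marginals. Below is a minimal pure-Python sketch; the BIO tag sequences are hypothetical examples, not drawn from ParsTwiNER.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the chance agreement implied by each annotator's
    label frequencies."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[l] * count_b[l] for l in count_a.keys() | count_b.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BIO tags from two annotators over the same six tokens.
annotator_a = ["B-PER", "I-PER", "O", "B-LOC", "O", "O"]
annotator_b = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG"]
print(f"kappa = {cohens_kappa(annotator_a, annotator_b):.2f}")  # 0.78
```

A kappa of 0.95 over 250k tokens indicates near-perfect agreement even after discounting the chance agreement that the dominant O tag would otherwise inflate.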
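The abstract does not detail the parallel transfer learning procedure itself. As background, the sketch below shows the generic BERT token-classification setup that such NER models build on, using the Hugging Face transformers library. The checkpoint name, label set, and example sentence are all assumptions for illustration, and the classification head here is randomly initialized until fine-tuned on corpus data; this is not the paper's training method.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed checkpoint and label set; the paper's actual Persian encoder
# and entity inventory may differ.
MODEL = "bert-base-multilingual-cased"
LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(LABELS))

text = "سلام از تهران"  # hypothetical tweet: "Hello from Tehran"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)
preds = logits.argmax(dim=-1).squeeze(0)
for token, label_id in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), preds):
    print(token, LABELS[label_id])  # per-subword predicted BIO tag
```

Fine-tuning replaces the random head with one trained on BIO-tagged tokens, which is why domain-mismatched formal-text models degrade on noisy tweets until adapted.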