This paper presents our findings from participating in the SMM4H Shared Task 2021. We addressed Named Entity Recognition (NER) and text classification. To address NER, we explored a BiLSTM-CRF with stacked heterogeneous embeddings and linguistic features. To address text classification, we investigated several machine learning algorithms (logistic regression, SVM, and neural networks). Our proposed approaches can be generalized to different languages, and we have shown their effectiveness for English and Spanish. Our text classification submissions achieved competitive performance, with F1-scores of 0.46 and 0.90 on ADE Classification (Task 1a) and Profession Classification (Task 7a), respectively. For NER, our submissions scored F1-scores of 0.50 and 0.82 on ADE Span Detection (Task 1b) and Profession Span Detection (Task 7b), respectively.
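To give a concrete sense of the CRF decoding step in a BiLSTM-CRF tagger, the sketch below runs Viterbi decoding over hand-picked toy scores. In the actual model, per-token emission scores come from a BiLSTM over the stacked embeddings; the values here are illustrative only, as is the tiny BIO tag set.

```python
NEG_INF = -1e9  # score for forbidden transitions

def viterbi(emissions, transitions, tags):
    """Return the highest-scoring tag sequence.

    emissions:   list of {tag: score} dicts, one per token
    transitions: {(prev_tag, cur_tag): score}; missing pairs are forbidden
    """
    dp = [{t: emissions[0][t] for t in tags}]  # best score ending in each tag
    back = []                                  # backpointers for path recovery
    for emit in emissions[1:]:
        row, bp = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: dp[-1][p] + transitions.get((p, cur), NEG_INF))
            row[cur] = dp[-1][prev] + transitions.get((prev, cur), NEG_INF) + emit[cur]
            bp[cur] = prev
        dp.append(row)
        back.append(bp)
    best = max(tags, key=lambda t: dp[-1][t])
    path = [best]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

tags = ["O", "B-ADE", "I-ADE"]
# Allow all transitions except O -> I-ADE (an I- tag must follow B-/I-).
transitions = {(p, c): 0.0 for p in tags for c in tags if (p, c) != ("O", "I-ADE")}
# Toy emissions for "took severe headache": greedy per-token argmax would
# output the invalid sequence O, I-ADE, I-ADE; the transition constraint
# makes Viterbi repair it to a well-formed span.
emissions = [
    {"O": 2.0, "B-ADE": 0.0, "I-ADE": 0.0},  # "took"
    {"O": 0.0, "B-ADE": 1.5, "I-ADE": 2.0},  # "severe"
    {"O": 0.0, "B-ADE": 1.0, "I-ADE": 2.0},  # "headache"
]
print(viterbi(emissions, transitions, tags))  # ['O', 'B-ADE', 'I-ADE']
```

This is the main benefit of adding a CRF layer on top of the BiLSTM: tag decisions are made jointly over the whole sentence, so structurally invalid label sequences are ruled out rather than merely discouraged.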