This paper presents our contribution to the Social Media Mining for Health Applications Shared Task 2021. We addressed all three subtasks of Task 1: Subtask A (classification of tweets containing adverse effects), Subtask B (extraction of text spans containing adverse effects) and Subtask C (adverse effects resolution). We explored several pre-trained transformer-based language models and focused on a multi-task training architecture. For the first subtask, we also applied adversarial augmentation techniques and formed model ensembles to improve the robustness of the predictions. Our system ranked first in Subtask B with an F1 score of 0.51, a precision of 0.514 and a recall of 0.514. For Subtask A we obtained an F1 score of 0.44, a precision of 0.49 and a recall of 0.39, and for Subtask C we obtained an F1 score of 0.16 with a precision of 0.16 and a recall of 0.17.
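A minimal sketch of one way the multi-task setup described above could be realized: a shared pre-trained transformer encoder with a tweet-level classification head for Subtask A and a token-level span-tagging head for Subtask B. The encoder name, label counts and pooling choice are illustrative assumptions, not details taken from the paper.

    # Sketch only: shared encoder with two task-specific heads (assumed setup).
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class MultiTaskADEModel(nn.Module):
        def __init__(self, encoder_name="roberta-base", num_span_tags=3):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # Subtask A: tweet-level classification (contains an adverse effect or not)
            self.cls_head = nn.Linear(hidden, 2)
            # Subtask B: token-level BIO tagging of adverse-effect spans
            self.span_head = nn.Linear(hidden, num_span_tags)

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            token_states = out.last_hidden_state        # (batch, seq_len, hidden)
            pooled = token_states[:, 0]                 # first-token representation
            return self.cls_head(pooled), self.span_head(token_states)

    # Toy usage: one forward pass producing logits for both tasks.
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = MultiTaskADEModel()
    batch = tokenizer(["this drug gave me a terrible headache"],
                      return_tensors="pt", padding=True)
    cls_logits, span_logits = model(batch["input_ids"], batch["attention_mask"])

In such a setup, the two task losses would typically be summed (possibly with task weights) and backpropagated through the shared encoder, which is the usual way a multi-task training architecture of this kind is trained.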