We propose a mixed-attention-based Generative Adversarial Network (named maGAN) and apply it to citation intent classification in scientific publications. We select domain-specific training data, propose a mixed-attention mechanism, and employ a generative adversarial network architecture to pre-train a language model and fine-tune it for the downstream multi-class classification task. Experiments were conducted on the SciCite dataset to compare model performance. Our proposed maGAN model achieved the best Macro-F1 score of 0.8532.
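The abstract does not spell out the mixed-attention mechanism, so the sketch below is only illustrative: it mixes standard multi-head self-attention with a second, additive attention branch through a learnable gate. The class name `MixedAttention`, the choice of the two branches, and the scalar gate are all assumptions made for illustration, not the authors' published formulation.

```python
# Minimal, hypothetical sketch of a "mixed attention" block.
# Assumption: the mechanism combines two attention types; here we gate
# multi-head self-attention against additive (Bahdanau-style) attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        # Branch 1: standard multi-head self-attention.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Branch 2: additive attention pooling, re-broadcast over the
        # sequence; an assumed stand-in for the second attention type.
        self.score = nn.Linear(d_model, 1)
        # Learnable scalar gate that mixes the two branches.
        self.gate = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        sa_out, _ = self.self_attn(x, x, x)
        weights = F.softmax(self.score(x), dim=1)        # (batch, seq_len, 1)
        pooled = (weights * x).sum(dim=1, keepdim=True)  # (batch, 1, d_model)
        add_out = pooled.expand_as(x)
        g = torch.sigmoid(self.gate)
        return g * sa_out + (1 - g) * add_out

# Usage: encode a citation context, then feed the output to a classifier
# head for the multi-class intent labels.
block = MixedAttention(d_model=256)
tokens = torch.randn(4, 128, 256)  # (batch, seq_len, hidden)
print(block(tokens).shape)         # torch.Size([4, 128, 256])
```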