Authorship attribution is the task of assigning an unknown document to an author from a set of candidates. In the past, studies in this field have used various evaluation datasets to demonstrate the effectiveness of preprocessing steps, features, and models. However, only a small fraction of these works use more than one dataset to support their claims. In this paper, we present a collection of highly diverse authorship attribution datasets, from which evaluation results generalize better than from the single corpora typical of prior authorship attribution research. Furthermore, we implement a wide variety of previously used machine learning models and show that many approaches perform vastly differently when applied to different datasets. We include pre-trained language models, testing them in this field systematically for the first time. Finally, we propose a set of aggregated scores to evaluate different aspects of the dataset collection.
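The abstract refers to aggregated scores over a collection of datasets without defining them here. As a rough illustration only, the sketch below evaluates a simple character n-gram baseline on each corpus and macro-averages the per-dataset accuracies. The baseline model, the accuracy metric, and the macro-average aggregation are assumptions chosen for the example, not the paper's actual score definitions.

    # Minimal sketch of a cross-dataset evaluation with one aggregated score.
    # Assumptions: a character n-gram baseline, accuracy as the per-dataset
    # metric, and a macro average as the aggregation (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline

    def evaluate_on_dataset(train_texts, train_authors, test_texts, test_authors):
        """Fit a character n-gram baseline and return its test accuracy."""
        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(train_texts, train_authors)
        return accuracy_score(test_authors, model.predict(test_texts))

    def aggregated_score(per_dataset_accuracies):
        """Macro-average over datasets: each corpus contributes equally,
        regardless of its size."""
        return sum(per_dataset_accuracies) / len(per_dataset_accuracies)

    # Hypothetical usage over several corpora (dataset_splits is assumed):
    # accs = [evaluate_on_dataset(*split) for split in dataset_splits]
    # print(aggregated_score(accs))

A macro average is one natural choice here because it prevents a single large corpus from dominating the comparison across models; the paper's own aggregation may weight datasets differently.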