High-quality arguments are an essential part of decision-making. Automatically predicting the quality of an argument is a complex task that has recently received much attention in argument mining. However, the annotation effort for this task is exceptionally high. Therefore, we test uncertainty-based active learning (AL) methods on two popular argument-strength data sets to estimate whether sample-efficient learning can be enabled. Our extensive empirical evaluation shows that uncertainty-based acquisition functions cannot surpass the accuracy reached with random acquisition on these data sets.
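To illustrate the comparison described above, a minimal sketch of one common uncertainty-based acquisition function (predictive entropy) alongside the random-acquisition baseline is given below. The function names, the use of NumPy, and the batch-selection interface are illustrative assumptions, not the implementation evaluated in the paper.

```python
import numpy as np

def entropy_acquisition(probs: np.ndarray, k: int) -> np.ndarray:
    """Uncertainty-based acquisition: pick the k unlabeled examples
    with the highest predictive entropy.

    probs: array of shape (n_unlabeled, n_classes) holding the model's
           class probabilities for each unlabeled example.
    Returns the indices of the k most uncertain examples.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:k]

def random_acquisition(n_unlabeled: int, k: int, seed: int = 0) -> np.ndarray:
    """Baseline acquisition: pick k unlabeled examples uniformly at random."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_unlabeled, size=k, replace=False)
```

In a typical AL loop, the selected indices are labeled, added to the training set, and the model is retrained before the next acquisition round; the abstract's finding is that, on the two argument-strength data sets studied, the entropy-style selection offers no accuracy advantage over the random baseline.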