As large-scale, pre-trained language models achieve human-level and superhuman accuracy on existing language understanding tasks, statistical biases in benchmark data and recent probing studies have called their true capabilities into question. To provide a more informative evaluation than accuracy on text classification tasks can offer, we propose evaluating systems through a novel measure of prediction coherence. We apply our framework to two existing language understanding benchmarks with different properties to demonstrate its versatility. Our experimental results show that this evaluation framework, although simple in both concept and implementation, is a quick, effective, and versatile means of gaining insight into the coherence of machines' predictions.