Explainable Agreement through Simulation for Tasks with Subjective Labels


Abstract

The field of information retrieval often works with limited and noisy data when classifying documents into subjective categories, e.g., relevance, sentiment, and controversy. We typically quantify a notion of agreement to understand the difficulty of the labeling task, but when we present final results, we do so using measures that are unaware of agreement or of the inherent subjectivity of the task. We propose using user simulation to understand the effect size of this noisy agreement data. By simulating truth and predictions, we can estimate the maximum scores a dataset can support: if a classifier appears to outperform a reasonable model of a human annotator, we cannot conclude that it is actually better; it may instead be learning noise present in the dataset. We present a brief case study on controversy detection which concludes that a commonly-used dataset has been exhausted: to advance the state of the art, more data must be gathered at the current level of label agreement so that techniques can be distinguished with confidence.
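To make the idea concrete, the following is a minimal sketch (not the paper's exact procedure) of how simulating truth and human-like predictions can yield a score ceiling for a dataset with imperfect label agreement. It assumes binary labels, a symmetric label-flip noise model, and an `agreement` parameter interpreted as the probability that a simulated annotator reproduces the "true" label; accuracy and F1 are used as example measures. All parameter names and values are illustrative, not taken from the paper.

```python
"""Sketch: estimate the maximum score a dataset with noisy, subjective
labels can support, by scoring a simulated human against simulated truth."""
import numpy as np


def simulate_score_ceiling(n_docs=1000, pos_rate=0.3, agreement=0.85,
                           n_trials=1000, seed=0):
    """Return mean accuracy and F1 of a simulated annotator vs. simulated truth.

    Assumptions (not from the paper): binary labels, symmetric label flips,
    and `agreement` = probability the annotator matches the true label.
    """
    rng = np.random.default_rng(seed)
    accs, f1s = [], []
    for _ in range(n_trials):
        # Simulated "truth": each document is positive with probability pos_rate.
        truth = rng.random(n_docs) < pos_rate
        # Simulated human labels: match the truth with probability `agreement`,
        # otherwise flip the label.
        flips = rng.random(n_docs) >= agreement
        pred = np.where(flips, ~truth, truth)
        accs.append(np.mean(pred == truth))
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return float(np.mean(accs)), float(np.mean(f1s))


if __name__ == "__main__":
    acc, f1 = simulate_score_ceiling()
    print(f"simulated human ceiling: accuracy={acc:.3f}, F1={f1:.3f}")
```

Under this reading, a classifier that scores above the simulated human ceiling on the same dataset should not be declared better; the excess is more plausibly explained by fitting label noise than by genuine improvement.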
