Human-in-the-loop Handling of Knowledge Drift


Abstract

We introduce and study knowledge drift (KD), a complex form of drift that occurs in hierarchical classification. Under KD the vocabulary of concepts, their individual distributions, and the is-a relations between them can all change over time. The main challenge is that, since the ground-truth concept hierarchy is unobserved, it is hard to tell apart different forms of KD. For instance, introducing a new is-a relation between two concepts might be confused with individual changes to those concepts, but the two are far from equivalent. Failure to identify the right kind of KD compromises the concept hierarchy used by the classifier, leading to systematic prediction errors. Our key observation is that in many human-in-the-loop applications (like smart personal assistants) the user knows whether and what kind of drift occurred recently. Motivated by this, we introduce TRCKD, a novel approach that combines automated drift detection and adaptation with an interactive stage in which the user is asked to disambiguate between different kinds of KD. In addition, TRCKD implements a simple but effective knowledge-aware adaptation strategy. Our simulations show that often a handful of queries to the user are enough to substantially improve prediction performance on both synthetic and realistic data.
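
To make the workflow concrete, the sketch below illustrates the human-in-the-loop loop described in the abstract: an automated detector flags drifting concepts or relations, the user disambiguates the kind of KD, and only the affected part of the concept hierarchy is adapted. This is a minimal illustration, not the authors' TRCKD implementation; all names (KDKind, ConceptHierarchy, handle_drift, the detector and ask_user callables) are hypothetical.

```python
# Hypothetical sketch of the human-in-the-loop handling of knowledge drift (KD).
# Names and data structures are illustrative, not the actual TRCKD code.

from dataclasses import dataclass, field
from enum import Enum, auto


class KDKind(Enum):
    """Kinds of knowledge drift the user is asked to disambiguate."""
    NEW_CONCEPT = auto()      # a concept is added to the vocabulary
    DROPPED_CONCEPT = auto()  # a concept disappears from the vocabulary
    NEW_ISA = auto()          # a new is-a relation between two concepts
    REMOVED_ISA = auto()      # an existing is-a relation is removed
    DISTRIBUTION = auto()     # a concept's own distribution changes


@dataclass
class ConceptHierarchy:
    """Concept vocabulary plus is-a edges, stored as (child, parent) pairs."""
    concepts: set = field(default_factory=set)
    is_a: set = field(default_factory=set)

    def adapt(self, kind: KDKind, payload) -> None:
        """Knowledge-aware adaptation: update only the part affected by the drift."""
        if kind is KDKind.NEW_CONCEPT:
            self.concepts.add(payload)
        elif kind is KDKind.DROPPED_CONCEPT:
            self.concepts.discard(payload)
            self.is_a = {edge for edge in self.is_a if payload not in edge}
        elif kind is KDKind.NEW_ISA:
            self.is_a.add(payload)
        elif kind is KDKind.REMOVED_ISA:
            self.is_a.discard(payload)
        # KDKind.DISTRIBUTION: the hierarchy is unchanged; only the classifier
        # for that concept would be retrained on recent data.


def handle_drift(hierarchy, detector, ask_user, batch):
    """One interaction round: detect candidate drifts, query the user, adapt."""
    candidates = detector(batch)      # concepts/relations flagged as drifting
    for candidate in candidates:
        kind = ask_user(candidate)    # user disambiguates the kind of KD
        hierarchy.adapt(kind, candidate)
    return hierarchy
```

Querying the user only about the candidates flagged by the detector is what keeps the interaction cost low: in the reported simulations, a handful of such queries is often enough to recover prediction performance.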
