Exploring the Connection between Knowledge Distillation and Logits Matching


Abstract

Knowledge distillation is a generalized logits matching technique for model compression. The equivalence between the two was previously established under the conditions of $\textit{infinite temperature}$ and $\textit{zero-mean normalization}$. In this paper, we prove that with only $\textit{infinite temperature}$, the effect of knowledge distillation equals that of logits matching with an extra regularization term. Furthermore, we reveal that a weaker condition, $\textit{equal-mean initialization}$ rather than the original $\textit{zero-mean normalization}$, already suffices to establish the equivalence. The key to our proof is the observation that, in modern neural networks with the cross-entropy loss and softmax activation, the mean of the back-propagated gradient on the logits is always zero.
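
As a minimal sketch of the zero-mean gradient property (notation here is ours, not taken from the paper): with logits $z$, a temperature-$\tau$ softmax $p_i = \exp(z_i/\tau)\,/\sum_j \exp(z_j/\tau)$, and a cross-entropy loss $\mathcal{L} = -\sum_i q_i \log p_i$ against a target distribution $q$,

$$
\frac{\partial \mathcal{L}}{\partial z_i} = \frac{1}{\tau}\,(p_i - q_i),
\qquad
\sum_i \frac{\partial \mathcal{L}}{\partial z_i} = \frac{1}{\tau}\Big(\sum_i p_i - \sum_i q_i\Big) = \frac{1}{\tau}(1 - 1) = 0,
$$

so the back-propagated gradient on the logits sums (and hence averages) to zero, since both $p$ and $q$ are probability distributions.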