Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality


Abstract

Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models. BPE provides multiple benefits, such as handling the out-of-vocabulary problem and reducing vocabulary sparsity. However, this process is determined by the statistics of the pre-training data, making tokenization in other domains susceptible to infrequent spelling sequences (e.g., misspellings as in social media or character-level adversarial attacks). On the other hand, pure character-level models, though robust to misspellings, often produce unreasonably long sequences and make it harder for the model to learn meaningful sequences of contiguous characters. We propose a character-based subword module (char2subword) that learns the subword embedding table of pre-trained models like BERT to alleviate these challenges. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate the module with BERT through pre-training while keeping BERT's transformer parameters fixed. We demonstrate our method's effectiveness by outperforming mBERT on the Linguistic Code-switching Evaluation (LinCE) benchmark.
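The abstract only sketches the module at a high level. The PyTorch snippet below is an illustrative assumption of what "building a subword representation from its characters, as a drop-in replacement for the embedding table" could look like; the class name Char2Subword, the character-transformer-with-mean-pooling design, and all hyperparameters are our assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: map each subword's character sequence to a vector of
# the same size as BERT's subword embeddings, so the module can stand in for
# the embedding lookup table while the transformer stays frozen.
import torch
import torch.nn as nn


class Char2Subword(nn.Module):
    def __init__(self, num_chars=300, char_dim=64, hidden_dim=768,
                 num_layers=2, num_heads=4):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.proj_in = nn.Linear(char_dim, hidden_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, char_ids):
        # char_ids: (num_subwords, max_chars) integer tensor; 0 = padding.
        pad_mask = char_ids.eq(0)
        x = self.proj_in(self.char_embed(char_ids))
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        # Mean-pool over non-padding character positions to produce one
        # vector per subword, matching a row of BERT's embedding table.
        lengths = (~pad_mask).sum(dim=1, keepdim=True).clamp(min=1)
        return x.masked_fill(pad_mask.unsqueeze(-1), 0.0).sum(dim=1) / lengths


# Example: embed two subwords ("cat", "dogs") from their character ids.
module = Char2Subword()
char_ids = torch.tensor([[3, 4, 5, 0, 0],
                         [6, 7, 8, 9, 0]])
subword_vectors = module(char_ids)  # shape: (2, 768)
```

Under this reading, the module would first be trained to reproduce the frozen subword embedding table from characters, and its outputs would then be fed into BERT in place of the embedding lookup during the pre-training stage mentioned above.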