MWP-BERT: A Strong Baseline for Math Word Problems


Abstract

Math word problem (MWP) solving is the task of transforming a natural-language problem description into an executable math equation. An MWP solver not only needs to understand the complex scenario described in the problem text, but also to identify the key mathematical variables and associate the text with the underlying equation logic. Although recent sequence-modeling MWP solvers have made progress on contextual understanding of math-related text, pre-trained language models (PLMs) have not been explored for MWP solving, since a PLM trained on free-form text is limited in representing how the text refers to mathematical logic. In this work, we introduce MWP-BERT to obtain pre-trained token representations that capture the alignment between the text description and the mathematical logic. Additionally, we introduce a keyword-based prompt matching method to address MWPs that require commonsense knowledge. On the benchmark Math23K dataset and the newer Ape210k dataset, we show that MWP-BERT outperforms the strongest baseline by 5-10% in accuracy.
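To make the encoder-side setup concrete, the following is a minimal sketch (not the authors' released code) of how a pre-trained BERT encoder can produce per-token representations of an MWP for a downstream equation decoder. The HuggingFace Transformers API calls are real, but the checkpoint name "bert-base-chinese" is only a stand-in assumption for an MWP-BERT checkpoint (Math23K and Ape210k are Chinese-language corpora), and the example problem is illustrative.

```python
# Sketch: encode a math word problem with a pre-trained BERT encoder.
# Assumption: "bert-base-chinese" stands in for an MWP-BERT checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

# Illustrative problem: "Xiao Ming has 5 apples and buys 3 more;
# how many apples does he have in total?"
problem = "小明有5个苹果, 又买了3个, 一共有多少个苹果?"
inputs = tokenizer(problem, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# Per-token contextual representations; a sequence- or tree-structured
# decoder would consume these to generate the target math equation.
token_embeddings = outputs.last_hidden_state  # shape: [1, seq_len, hidden]
print(token_embeddings.shape)
```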
