In modern natural language processing pipelines, it is common practice to "pretrain" a generative language model on a large corpus of text and then to "finetune" the resulting representations by continuing to train them on a discriminative textual inference task. However, it is not immediately clear whether the logical semantics necessary to model entailment are captured by language models trained in this paradigm. We examine this pretrain-finetune recipe with language models trained on a synthetic propositional language entailment task, and present results on test sets probing the models' knowledge of axioms of first-order logic.
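
As a minimal illustrative sketch (not the authors' code), synthetic propositional entailment pairs of the kind described above could be generated by sampling small formulas and labeling a pair "entailment" exactly when every truth assignment satisfying the premise also satisfies the hypothesis; all names below are hypothetical.

    # Sketch: generate a labeled propositional entailment example.
    # Assumption: small formulas over a fixed variable set, checked by
    # brute-force truth-table enumeration (feasible for few variables).
    import itertools
    import random

    VARS = ["p", "q", "r"]

    def random_formula(depth=2):
        """Build a random formula as a nested tuple over and/or/not."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(VARS)
        op = random.choice(["and", "or", "not"])
        if op == "not":
            return ("not", random_formula(depth - 1))
        return (op, random_formula(depth - 1), random_formula(depth - 1))

    def evaluate(f, assignment):
        """Evaluate a formula under a truth assignment (dict var -> bool)."""
        if isinstance(f, str):
            return assignment[f]
        if f[0] == "not":
            return not evaluate(f[1], assignment)
        if f[0] == "and":
            return evaluate(f[1], assignment) and evaluate(f[2], assignment)
        return evaluate(f[1], assignment) or evaluate(f[2], assignment)

    def entails(premise, hypothesis):
        """True iff every model of the premise satisfies the hypothesis."""
        for values in itertools.product([True, False], repeat=len(VARS)):
            assignment = dict(zip(VARS, values))
            if evaluate(premise, assignment) and not evaluate(hypothesis, assignment):
                return False
        return True

    def to_text(f):
        """Render a formula as a flat string, e.g. '( p and ( not q ) )'."""
        if isinstance(f, str):
            return f
        if f[0] == "not":
            return f"( not {to_text(f[1])} )"
        return f"( {to_text(f[1])} {f[0]} {to_text(f[2])} )"

    if __name__ == "__main__":
        premise, hypothesis = random_formula(), random_formula()
        label = "entailment" if entails(premise, hypothesis) else "non-entailment"
        print(to_text(premise), "=>", to_text(hypothesis), ":", label)

Pairs of this form could then serve as finetuning data for a pretrained language model, with held-out sets constructed to probe specific logical axioms.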