We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialised from CamemBERT, a general-domain French language model that follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks: offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task was created and annotated by our team, filling the gap of such analytic datasets in French. We make our model publicly available in the Transformers library with the aim of promoting future research in analytic tasks for French tweets.
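Since the abstract states the model is released through the Transformers library, a minimal usage sketch is given below. The hub repository id ("Yanzhu/bertweetfr-base") and the masked-language-modelling head are assumptions not confirmed by the abstract; substitute the identifier actually published by the authors.

```python
# Minimal sketch: loading BERTweetFR with Hugging Face Transformers and
# running masked-token prediction on a French tweet.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# Assumed hub id; replace with the model id released by the authors.
model_id = "Yanzhu/bertweetfr-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# CamemBERT-style models use "<mask>" as the mask token.
text = "Le match de ce soir était vraiment <mask> !"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the top-5 predicted tokens.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

For the downstream tasks mentioned (offensiveness identification, named entity recognition), the same checkpoint would typically be loaded with a task-specific head, e.g. `AutoModelForSequenceClassification` or `AutoModelForTokenClassification`, and fine-tuned on the corresponding labelled data.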