We propose an ensemble model for predicting the lexical complexity of words and multiword expressions (MWEs). The model receives as input a sentence with a target word or MWE and outputs its complexity score. Given that a key challenge of this task is the limited size of annotated data, our model relies on pretrained contextual representations from different state-of-the-art transformer-based language models (i.e., BERT and RoBERTa), and on a variety of training methods for further enhancing model generalization and robustness: multi-step fine-tuning, multi-task learning, and adversarial training. Additionally, we propose to enrich the contextual representations by adding hand-crafted features during training. Our model achieved competitive results and ranked among the top-10 systems in both sub-tasks.
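Two of the ingredients above can be illustrated concretely: concatenating hand-crafted features to a contextual embedding before the regression head, and an FGM-style adversarial perturbation of the input embeddings during training. The sketch below is a minimal, hedged illustration of these two ideas, not the authors' actual implementation: the encoder output is simulated by a random tensor standing in for a pretrained BERT/RoBERTa representation, and the feature dimension, hidden size, and perturbation budget `eps` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ComplexityRegressor(nn.Module):
    """Regression head over a contextual target-word embedding
    concatenated with hand-crafted features (e.g. word length,
    corpus frequency). ctx_dim=768 matches BERT-base; the feature
    count and hidden size are illustrative choices."""

    def __init__(self, ctx_dim: int = 768, n_feats: int = 4, hidden: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(ctx_dim + n_feats, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # complexity scores lie in [0, 1]
        )

    def forward(self, ctx_emb: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # Enrich the contextual representation with hand-crafted features.
        return self.head(torch.cat([ctx_emb, feats], dim=-1)).squeeze(-1)


def fgm_adversarial_loss(model, loss_fn, ctx_emb, feats, target, eps=1e-2):
    """One FGM-style adversarial step: perturb the contextual embedding
    along the normalized loss gradient, and return the sum of the clean
    and adversarial losses for a combined backward pass."""
    ctx_emb = ctx_emb.clone().requires_grad_(True)
    clean_loss = loss_fn(model(ctx_emb, feats), target)
    # Gradient of the clean loss w.r.t. the input embedding.
    (grad,) = torch.autograd.grad(clean_loss, ctx_emb, retain_graph=True)
    r_adv = eps * grad / (grad.norm() + 1e-12)
    adv_loss = loss_fn(model(ctx_emb + r_adv, feats), target)
    return clean_loss + adv_loss


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ComplexityRegressor()
    # Stand-ins for a batch of encoder outputs, features, and gold scores.
    ctx_emb = torch.randn(8, 768)
    feats = torch.randn(8, 4)
    target = torch.rand(8)
    total = fgm_adversarial_loss(model, nn.MSELoss(), ctx_emb, feats, target)
    total.backward()  # gradients now reflect clean + adversarial examples
    print(float(total))
```

In practice the perturbation is often applied to the embedding layer of the transformer itself rather than to its pooled output; the pooled-output variant is used here only to keep the sketch self-contained.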