Identifying whether a word carries the same or a different meaning in two contexts is an important research area in natural language processing, playing a significant role in many applications such as question answering, document summarisation, information retrieval and information extraction. Most previous work in this area relies on language-specific resources, making it difficult to generalise across languages. Considering this limitation, our approach to SemEval-2021 Task 2 is based only on pretrained transformer models and does not use any language-specific processing or resources. Despite this, our best model achieves 0.90 accuracy on the English-English subtask, which is highly competitive with the best result for that subtask (0.93 accuracy). Our approach also achieves satisfactory results on the other monolingual and cross-lingual language pairs.
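The task framing above can be made concrete with a toy sketch: given a target word and two sentences, output a binary same-sense/different-sense decision. The paper's actual system uses pretrained transformer models; the bag-of-words cosine-similarity heuristic, window size and threshold below are purely illustrative assumptions, not the authors' method.

```python
# Toy stand-in for Word-in-Context (WiC) binary classification:
# decide whether a target word is used with the same sense in two
# sentences. A real system would score the pair with a pretrained
# transformer; here we compare bag-of-words context windows only to
# make the input/output contract of the task concrete.
from collections import Counter
import math

def context_vector(sentence: str, target: str, window: int = 3) -> Counter:
    """Bag of words within `window` tokens of each occurrence of `target`."""
    tokens = sentence.lower().split()
    vec = Counter()
    for i, tok in enumerate(tokens):
        if tok.strip(".,") == target:
            lo, hi = max(0, i - window), i + window + 1
            vec.update(t.strip(".,") for t in tokens[lo:hi]
                       if t.strip(".,") != target)
    return vec

def same_sense(s1: str, s2: str, target: str, threshold: float = 0.5) -> bool:
    """Binary decision: cosine similarity of the two context vectors."""
    v1, v2 = context_vector(s1, target), context_vector(s2, target)
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return norm > 0 and dot / norm >= threshold

print(same_sense("He sat on the bank of the river",
                 "She walked along the bank of the river", "bank"))  # → True
```

In the shared task itself, systems replace this heuristic with contextual representations (e.g. from a multilingual transformer) and learn the decision boundary from the training pairs rather than fixing a threshold by hand.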