We describe the University of Alberta systems for the SemEval-2021 Word-in-Context (WiC) disambiguation task. We explore the use of translation information for deciding whether two different tokens of the same word correspond to the same sense of the word. Our focus is on developing principled theoretical approaches which are grounded in linguistic phenomena, leading to more explainable models. We show that translations from multiple languages can be leveraged to improve the accuracy on the WiC task.