This paper presents a set of experiments to evaluate and compare the performance of CBOW Word2Vec and Lemma2Vec models for Arabic Word-in-Context (WiC) disambiguation, without using sense inventories or sense embeddings. As part of SemEval-2021 Shared Task 2 on WiC disambiguation, we used the dev.ar-ar dataset (2k sentence pairs) to decide whether two words in a given sentence pair carry the same meaning. We used two Word2Vec models: Wiki-CBOW, a model pre-trained on Arabic Wikipedia, and another model we trained on large Arabic corpora of about 3 billion tokens. Two Lemma2Vec models were also constructed on top of the two Word2Vec models. Each of the four models was then used in the WiC disambiguation task and evaluated on the SemEval-2021 test.ar-ar dataset. Finally, we report the performance of the different models and compare lemma-based against word-based models.
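The abstract does not spell out the decision rule, but a common baseline for WiC disambiguation with static embeddings (and one consistent with "no sense inventories or sense embeddings") is to build a context-averaged vector for each occurrence of the target word and threshold their cosine similarity. The sketch below illustrates that idea under stated assumptions: the embedding table `EMB`, the tokens, and the `threshold` value are all toy stand-ins, not the paper's actual models or tuned parameters.

```python
import numpy as np

# Toy embedding table standing in for a trained Word2Vec/Lemma2Vec model
# (assumption: the real system looks vectors up in the CBOW models above).
EMB = {
    "bank":  np.array([0.6, 0.6, 0.1]),
    "river": np.array([0.9, 0.1, 0.0]),
    "water": np.array([0.8, 0.2, 0.1]),
    "money": np.array([0.1, 0.9, 0.2]),
    "loan":  np.array([0.0, 0.8, 0.3]),
}

def context_vector(tokens):
    """Average the static vectors of the target word and its context words."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def same_sense(sent1, sent2, threshold=0.95):
    """Label a WiC pair True (same meaning) when the context-averaged
    vectors of the two occurrences are similar enough."""
    return cosine(context_vector(sent1), context_vector(sent2)) >= threshold

# Identical contexts -> same sense; divergent contexts -> different sense.
print(same_sense(["bank", "river", "water"], ["bank", "water", "river"]))  # True
print(same_sense(["bank", "river"], ["bank", "money", "loan"]))            # False
```

With static embeddings the target word itself always maps to the same vector, so the surrounding context words are what differentiate the two occurrences; a lemma-based variant would simply look up lemmas instead of surface forms in the same table.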