Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high-quality CLWEs learn mappings that minimise the ℓ2 norm loss function. However, this optimisation objective has been demonstrated to be sensitive to outliers. Based on the more robust Manhattan norm (aka. ℓ1 norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs. An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely. Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora. Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the ℓ1 refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings. It is therefore recommended that this strategy be adopted as a standard for CLWE methods.
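The abstract does not spell out the refinement procedure, so the following is only a minimal sketch of the underlying idea, not the paper's actual algorithm. It contrasts the standard closed-form ℓ2 (orthogonal Procrustes) mapping with an ℓ1 post-processing step implemented here as plain sub-gradient descent; the function names, learning rate, iteration count, and toy data are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's method): refine a CLWE mapping W
# by minimising the Manhattan (l1) loss ||XW - Y||_1 instead of the l2 loss,
# starting from the closed-form l2 (Procrustes) solution.
import numpy as np

def procrustes_l2(X, Y):
    """Closed-form orthogonal W minimising the Frobenius (l2) loss ||XW - Y||."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def refine_l1(X, Y, W, lr=1e-3, steps=500):
    """Refine W by sub-gradient descent on ||XW - Y||_1.

    Note: this simple update does not preserve orthogonality of W;
    it only illustrates the l1 goodness-of-fit criterion.
    """
    for _ in range(steps):
        residual = X @ W - Y
        grad = X.T @ np.sign(residual)  # sub-gradient of the l1 norm
        W = W - lr * grad / len(X)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))                     # toy "source" embeddings
    true_W = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # random orthogonal map
    # Heavy-tailed (Laplace) noise stands in for the outliers that make
    # the l2 objective brittle.
    Y = X @ true_W + rng.laplace(scale=0.1, size=X.shape)
    W0 = procrustes_l2(X, Y)   # standard l2 starting point
    W1 = refine_l1(X, Y, W0)   # hypothetical l1 post-processing step
    print("mean l1 loss before:", np.abs(X @ W0 - Y).mean())
    print("mean l1 loss after: ", np.abs(X @ W1 - Y).mean())
```

Because the refinement only consumes the already-trained embedding matrices X and Y, it stays agnostic to how the original CLWEs were produced, which is the property the abstract highlights.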