Cross-lingual word embeddings provide a way for information to be transferred between languages. In this paper we evaluate an extension of a joint training approach to learning cross-lingual embeddings that incorporates sub-word information during training. This method could be particularly well-suited to lower-resource and morphologically-rich languages because it can be trained on modest-size monolingual corpora, and is able to represent out-of-vocabulary words (OOVs). We consider bilingual lexicon induction, including an evaluation focused on OOVs. We find that this method achieves improvements over previous approaches, particularly for OOVs.
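The key property that enables OOV representation is composing a word vector from sub-word units. A minimal sketch of this general idea, in the style of character n-gram embeddings (as in fastText), is shown below; the function names, dimensionality, and the randomly initialised toy vectors are illustrative assumptions, not the paper's actual model, which additionally trains the embeddings jointly across languages.

```python
# Sketch: representing out-of-vocabulary words via character n-grams.
# Toy example only -- n-gram vectors are random here; in a real model
# they would be learned during (joint cross-lingual) training.
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams from a word padded with boundary markers."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

rng = np.random.default_rng(0)
DIM = 8
ngram_vectors = {}  # stands in for trained n-gram embeddings

def vector_for(word):
    """A word's vector is the mean of its n-gram vectors, so even a word
    never seen in training (an OOV) still gets a representation."""
    grams = char_ngrams(word)
    for g in grams:
        if g not in ngram_vectors:
            ngram_vectors[g] = rng.standard_normal(DIM)
    return np.mean([ngram_vectors[g] for g in grams], axis=0)

v = vector_for("unseenword")
print(v.shape)  # (8,)
```

Because morphological variants such as "walk" and "walking" share many n-grams, their composed vectors land near each other, which is why this family of methods tends to help for morphologically-rich languages.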