In the last few years, several methods have been proposed to build meta-embeddings. The general aim was to obtain new representations integrating complementary knowledge from different source pre-trained embeddings thereby improving their overall quality. However, previous meta-embeddings have been evaluated using a variety of methods and datasets, which makes it difficult to draw meaningful conclusions regarding the merits of each approach. In this paper we propose a unified common framework, including both intrinsic and extrinsic tasks, for a fair and objective meta-embeddings evaluation. Furthermore, we present a new method to generate meta-embeddings, outperforming previous work on a large number of intrinsic evaluation benchmarks. Our evaluation framework also allows us to conclude that previous extrinsic evaluations of meta-embeddings have been overestimated.