Large pre-trained language models such as BERT have been the driving force behind recent improvements across many NLP tasks. However, BERT is only trained to predict missing words -- either through masking or next sentence prediction -- and has no knowledge of lexical, syntactic or semantic information beyond what it picks up through unsupervised pre-training. We propose a novel method to explicitly inject linguistic information in the form of word embeddings into any layer of a pre-trained BERT. When injecting counter-fitted and dependency-based embeddings, the performance improvements on multiple semantic similarity datasets indicate that such information is beneficial and currently missing from the original model. Our qualitative analysis shows that counter-fitted embedding injection is particularly beneficial, with notable improvements on examples that require synonym resolution.
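The abstract leaves the injection mechanism unspecified. As a rough sketch, one way to add external word vectors into an intermediate layer of a Hugging Face BERT is a forward hook that sums the layer's hidden states with a learned projection of the external embeddings. The layer index, the additive combination, the projection layer, and the random stand-in vectors below are all illustrative assumptions, not the paper's actual method.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

# Which encoder layer receives the injection, and the external embedding
# dimensionality, are assumptions for this sketch.
INJECT_LAYER = 6
EXT_DIM = 300   # e.g. counter-fitted or dependency-based vectors

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Project the external embeddings up to BERT's hidden size so they can be
# combined with the layer's hidden states.
project = nn.Linear(EXT_DIM, model.config.hidden_size)

def make_injection_hook(external_embeds):
    # external_embeds: (batch, seq_len, EXT_DIM), aligned to BERT's
    # WordPiece tokens (alignment is assumed to be handled upstream).
    def hook(module, inputs, output):
        hidden = output[0]                            # (batch, seq_len, hidden)
        injected = hidden + project(external_embeds)  # additive injection (assumption)
        return (injected,) + output[1:]
    return hook

sentence = "The movie was fantastic"
inputs = tokenizer(sentence, return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# Random stand-in for real pre-trained external vectors.
ext = torch.randn(1, seq_len, EXT_DIM)

handle = model.encoder.layer[INJECT_LAYER].register_forward_hook(
    make_injection_hook(ext))
outputs = model(**inputs)   # layers above INJECT_LAYER see the modified states
handle.remove()

Since the hook leaves the model's weights untouched, the same mechanism can target any layer, which matches the abstract's claim that the method injects information "into any layer of a pre-trained BERT".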