This paper describes the participation of the UoB-NLP team in the ProfNER-ST shared sub-task 7a. The task aimed at detecting mentions of professions in social media text. Our team experimented with two methods of improving the performance of pre-trained models: specifically, we experimented with data augmentation through translation and with merging multiple language inputs to meet the objective of the task. While the best-performing model on the test data consisted of mBERT fine-tuned on data augmented through back-translation, the improvement is minor, possibly because multilingual pre-trained models such as mBERT already have access to the kind of information provided through back-translated and bilingual data.
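The back-translation augmentation described above can be sketched as follows. This is a minimal illustration, not the team's actual pipeline: each training sentence is round-tripped through a pivot language and appended to the dataset with its original label. The `to_pivot` and `from_pivot` translation functions are hypothetical placeholders; a real setup would plug in an actual machine-translation system.

```python
# Minimal sketch of back-translation data augmentation (illustrative only).
# `to_pivot` and `from_pivot` are hypothetical stand-ins for real MT systems
# (e.g. Spanish -> English and English -> Spanish).

from typing import Callable, List, Tuple


def back_translate(
    texts: List[str],
    to_pivot: Callable[[str], str],
    from_pivot: Callable[[str], str],
) -> List[str]:
    """Round-trip each text through a pivot language to obtain paraphrases."""
    return [from_pivot(to_pivot(t)) for t in texts]


def augment(
    dataset: List[Tuple[str, str]],
    to_pivot: Callable[[str], str],
    from_pivot: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Append back-translated copies of each example, keeping original labels."""
    augmented = list(dataset)
    paraphrases = back_translate([text for text, _ in dataset], to_pivot, from_pivot)
    augmented.extend(zip(paraphrases, (label for _, label in dataset)))
    return augmented
```

Because the paraphrase is assumed to preserve meaning, the original label is reused unchanged; the augmented set is then used to fine-tune the model as usual.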