Transfer learning methods, and in particular domain adaptation, help exploit labeled data in one domain to improve the performance of a given task in another domain. However, it is still not clear which factors affect the success of domain adaptation. This paper models adaptation success and the selection of the most suitable source domains among several candidates for the task of text similarity. We use descriptive domain information and cross-domain similarity metrics as predictive features. While mostly positive, the results also point to some domains where adaptation success was difficult to predict.
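The abstract mentions cross-domain similarity metrics as predictive features but does not specify which ones. As an illustrative sketch, a commonly used metric of this kind is the Jensen-Shannon divergence between the unigram term distributions of a source and a target corpus; the toy corpora and function names below are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

def unigram_dist(texts, vocab):
    """Unigram probability distribution of a corpus over a shared vocabulary."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts[w] for w in vocab) or 1
    return {w: counts[w] / total for w in vocab}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2) between two distributions
    defined over the same vocabulary; bounded in [0, 1]."""
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w])
                   for w in a if a[w] > 0 and b[w] > 0)
    m = {w: 0.5 * (p[w] + q[w]) for w in p}  # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical source (legal) and target (medical) corpora
source = ["the court ruled on the case", "legal proceedings began today"]
target = ["the patient was given medication", "the case of the patient improved"]
vocab = {w for t in source + target for w in t.lower().split()}

p = unigram_dist(source, vocab)
q = unigram_dist(target, vocab)
sim_feature = js_divergence(p, q)  # lower value -> more similar domains
print(round(sim_feature, 3))
```

A feature like `sim_feature`, computed between each candidate source domain and the target domain, could then feed a regressor or ranker that predicts adaptation success, which is the kind of modeling the abstract describes.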