Source-free domain adaptation is an emerging line of work in deep learning research, since it closely reflects real-world deployment settings. We study domain adaptation for sequence labeling, where only a model trained on source-domain data is given. We propose two methods: Self-Adapter and Selective Classifier Training. Self-Adapter is a training method that uses sentence-level pseudo-labels, filtered by a self-entropy threshold, to supervise the whole model. Selective Classifier Training uses token-level pseudo-labels and supervises only the classification layer of the model. The proposed methods are evaluated on the data provided by SemEval-2021 Task 10, and Self-Adapter achieves second-place performance.
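As a rough illustration of the two ideas, the sketch below shows (i) filtering sentence-level pseudo-labels by a self-entropy threshold, as in Self-Adapter, and (ii) restricting updates to the classification head, as in Selective Classifier Training. The threshold value, the use of mean token entropy as the sentence-level score, and the `model.classifier` attribute name are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def filter_pseudo_labels(logits, threshold=0.5):
    """Select target-domain sentences whose self-entropy is below a threshold.

    logits: (batch, seq_len, num_labels) token-level scores produced by the
    source-trained model on unlabeled target-domain sentences.
    Returns token-level pseudo-labels and a boolean mask over the batch.
    Note: averaging token entropies into a sentence score and the threshold
    value of 0.5 are illustrative assumptions.
    """
    probs = F.softmax(logits, dim=-1)                                 # (B, T, C)
    token_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # (B, T)
    sentence_entropy = token_entropy.mean(dim=-1)                     # (B,)
    keep = sentence_entropy < threshold          # low-entropy = confident sentences
    pseudo_labels = probs.argmax(dim=-1)                              # (B, T)
    return pseudo_labels, keep

def freeze_all_but_classifier(model):
    """Selective Classifier Training: update only the classification layer.

    Assumes the model exposes its final token-classification layer as
    `model.classifier` (hypothetical attribute name).
    """
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith("classifier")
```

In this sketch, sentences kept by the mask would be used as pseudo-labeled training data for the whole model (Self-Adapter), whereas the freezing helper confines learning to the classifier when training on token-level pseudo-labels (Selective Classifier Training).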