Further pre-training language models on in-domain data (domain-adaptive pre-training, DAPT) or task-relevant data (task-adaptive pre-training, TAPT) before fine-tuning has been shown to improve downstream task performance. However, in task-oriented dialog modeling, we observe that further MLM pre-training does not always boost performance on a downstream task. We find that DAPT is beneficial in the low-resource setting, but as the fine-tuning data size grows, DAPT becomes less beneficial or even useless, and scaling up the size of the DAPT data does not help. Through Representational Similarity Analysis, we conclude that more fine-tuning data yields a greater change in the model's representations and thus reduces the influence of initialization.
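Representational Similarity Analysis, mentioned above, compares two models by how similarly they arrange the same set of examples in representation space: each model's pairwise-similarity matrix over the examples is computed, and the two matrices are then correlated. A minimal sketch (the function name, cosine similarity, and Pearson correlation are illustrative choices, not details taken from the paper):

```python
import numpy as np

def rsa_score(reps_a, reps_b):
    """RSA between two sets of representations of the same n examples
    (each an n x d array). Build each n x n cosine-similarity matrix,
    then Pearson-correlate their upper triangles."""
    def sim_matrix(reps):
        # cosine similarity between every pair of examples
        normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        return normed @ normed.T

    iu = np.triu_indices(len(reps_a), k=1)  # upper triangle, no diagonal
    a = sim_matrix(reps_a)[iu]
    b = sim_matrix(reps_b)[iu]
    return np.corrcoef(a, b)[0, 1]

# Toy check: identical representations correlate perfectly.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))
print(round(rsa_score(x, x), 6))  # → 1.0
```

Under this framing, a low RSA score between a model before and after fine-tuning indicates that fine-tuning substantially reorganized the representations, which is how "more fine-tuning data yields a greater change" can be quantified.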