This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task comes from a different domain or product. The DIL setting is particularly suited to ASC because in testing the system does not need to know the task/domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing. Experimental results show the high effectiveness of CLASSIC.
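To make the two ingredients named in the abstract concrete, below is a minimal sketch of a generic training objective that pairs a supervised contrastive loss (encouraging transfer of shared knowledge across tasks) with a distillation term (keeping the new model close to the old model's predictions). This is only an illustration under assumed names (`tau`, `T`, `alpha`, the encoder outputs), not the actual CLASSIC implementation.

```python
# Illustrative sketch only: a generic contrastive + distillation objective of the
# kind described in the abstract. All hyperparameter names are assumptions.
import torch
import torch.nn.functional as F


def contrastive_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over a batch of feature vectors.

    Examples sharing a label are pulled together; all other pairs are pushed apart.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / tau                                  # pairwise similarities
    mask_self = torch.eye(len(features), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(mask_self, float("-inf"))                    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self    # positive pairs
    pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    denom = pos.sum(dim=1).clamp(min=1)                                # avoid division by zero
    return -(pos_log_prob.sum(dim=1) / denom).mean()


def distillation_loss(new_logits: torch.Tensor, old_logits: torch.Tensor, T: float = 2.0) -> torch.Tensor:
    """KL divergence pushing the new model's softened predictions toward the old model's."""
    return F.kl_div(
        F.log_softmax(new_logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


# Example combined objective for one batch (alpha is a hypothetical weighting factor):
# loss = cross_entropy + contrastive_loss(h, y) + alpha * distillation_loss(z_new, z_old)
```

Because the distillation term conditions only on the inputs and the old model's outputs, an objective of this shape does not require a task id at inference time, which is the property the DIL setting relies on.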