This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks. Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC. A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten. This paper proposes a novel capsule network-based model called B-CL to address these issues. B-CL markedly improves the ASC performance on both the new task and the old tasks via forward and backward knowledge transfer. The effectiveness of B-CL is demonstrated through extensive experiments.
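To make the problem setting concrete, the following is a minimal sketch of the domain-incremental training-and-evaluation loop the abstract describes: one shared model is trained on a sequence of ASC tasks, and after each task it is evaluated on all tasks seen so far, which exposes forgetting (accuracy drops on old tasks) and forward transfer (gains on the new task). This is an illustrative assumption, not the authors' B-CL; the toy SimpleASCModel below stands in for B-CL's capsule-based task modules inside BERT, which are not reproduced here.

# Illustrative continual-learning loop over a sequence of ASC tasks
# (assumes PyTorch; model and loader names are hypothetical).
import torch
import torch.nn as nn

class SimpleASCModel(nn.Module):
    """Toy stand-in for the shared classifier; B-CL itself inserts
    capsule-based modules into BERT, which is not reproduced here."""
    def __init__(self, input_dim=64, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def evaluate(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)

def continual_train(model, task_loaders, epochs=3, lr=1e-3):
    """task_loaders: list of (train_loader, test_loader), one per domain."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for t, (train_loader, _) in enumerate(task_loaders):
        model.train()
        for _ in range(epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        # Evaluate on every task seen so far: accuracy on tasks < t
        # reveals forgetting; accuracy on task t reflects any forward
        # transfer from earlier tasks.
        accs = [evaluate(model, test) for _, test in task_loaders[: t + 1]]
        print(f"after task {t}: accuracies on tasks 0..{t} = {accs}")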