Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct effective contrastive samples through data augmentation is key to its success. Unlike in vision tasks, data augmentation methods for contrastive learning have not been investigated sufficiently in language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization term, named Mixsum, in addition to the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
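As a rough illustration of the supervised contrastive objective underlying this setup (a minimal sketch, not the authors' released code), the batch can be taken to contain one embedding per view, e.g. each text and its model-generated summary, with rows sharing a class label treated as positives. All names below are illustrative:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of view embeddings.

    Rows of `embeddings` are views of texts (e.g. an original text and
    its summary); rows with the same label act as positives for each other.
    """
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # exclude anchor-to-itself pairs

    # Numerically stable row-wise log-softmax
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))

    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    anchors = pos_counts > 0  # skip anchors with no positive in the batch
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[anchors] / pos_counts[anchors]
    return per_anchor.mean()

# Example: 4 views (two texts plus their summaries), two classes
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
print(supervised_contrastive_loss(emb, labels=[0, 0, 1, 1]))
```

In practice this term would be combined with the cross-entropy loss (and, per the paper, the Mixsum regularization over mixed-class samples); the weighting between the terms is a tunable hyperparameter.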