Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data. This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. Underpinned by contrastive regularization and confidence-based reweighting, our framework gradually improves model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins and achieves competitive performance with fully-supervised fine-tuning methods. Our implementation is available on https://github.com/yueyu1030/COSINE.
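The abstract names two ingredients, confidence-based reweighting of pseudo-labels and a contrastive regularizer applied during self-training. The snippet below is a minimal, hedged sketch of how those two pieces could fit into one training step; it is not the authors' implementation (see the linked repository for that), and all names here (ToyClassifier, confidence_weights, contrastive_regularizer, self_training_step, lam, threshold) are illustrative assumptions.

```python
# Illustrative sketch only: confidence-based reweighting + a simple pairwise
# contrastive regularizer inside a self-training step. Not the COSINE codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyClassifier(nn.Module):
    """Stand-in for a pre-trained LM encoder plus a classification head."""
    def __init__(self, dim=16, num_classes=3):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        z = torch.tanh(self.encoder(x))
        return z, self.head(z)  # (embeddings, logits)

def confidence_weights(probs, threshold=0.9):
    """Weight each pseudo-labeled example by its prediction confidence;
    low-confidence examples get weight 0 and are effectively dropped."""
    conf, _ = probs.max(dim=-1)
    return torch.where(conf >= threshold, conf, torch.zeros_like(conf))

def contrastive_regularizer(z, pseudo, margin=1.0):
    """Pull together embeddings that share a pseudo-label and push apart
    embeddings that do not (a plain pairwise margin-based contrastive loss)."""
    d = torch.cdist(z, z)  # pairwise L2 distances
    same = (pseudo[:, None] == pseudo[None, :]).float()
    pos = same * d.pow(2)                       # same pseudo-label: shrink distance
    neg = (1.0 - same) * F.relu(margin - d).pow(2)  # different: keep at least `margin` apart
    return (pos + neg).mean()

def self_training_step(model, x, lam=0.1, threshold=0.9):
    """One self-training update on an unlabeled batch: pseudo-label with the
    current model, reweight by confidence, and add the contrastive term."""
    with torch.no_grad():
        _, teacher_logits = model(x)
        probs = F.softmax(teacher_logits, dim=-1)
        pseudo = probs.argmax(dim=-1)
    w = confidence_weights(probs, threshold)
    z, logits = model(x)
    ce = F.cross_entropy(logits, pseudo, reduction="none")
    cls_loss = (w * ce).sum() / w.sum().clamp(min=1e-8)
    return cls_loss + lam * contrastive_regularizer(z, pseudo)

# Toy usage on random features standing in for LM representations.
model = ToyClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 16)
loss = self_training_step(model, x)
loss.backward()
opt.step()
```

In this sketch, the confidence threshold suppresses error propagation by zeroing out uncertain pseudo-labels, while the contrastive term regularizes the representation space so that noisy individual labels have less influence; the paper combines these ideas within its COSINE framework, with details in the linked repository.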