We present a new form of ensemble method, Devil's Advocate, which uses a deliberately dissenting model to force the other submodels within the ensemble to collaborate more effectively. Our method consists of two different training settings: one follows the conventional training process (Norm), and the other is trained on artificially generated labels (DevAdv). After the models are trained, the Norm models are fine-tuned through an additional loss function that uses the DevAdv model as a constraint. To make a final decision, the proposed ensemble sums the scores of the Norm models and then subtracts the score of the DevAdv model. The DevAdv model improves the overall performance of the other models within the ensemble. Beyond being grounded in a psychological background, our ensemble framework also shows comparable or improved performance on five text classification tasks when compared to conventional ensemble methods.
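The final decision rule stated above (sum the Norm models' scores, subtract the DevAdv model's score) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the example score vectors are hypothetical, and the scores are assumed to be per-class outputs (e.g. softmax probabilities) from each submodel.

```python
import numpy as np

def devils_advocate_predict(norm_scores, devadv_scores):
    """Combine ensemble scores as described in the abstract:
    sum the per-class scores of the Norm models, then subtract
    the per-class scores of the dissenting DevAdv model.

    norm_scores: list of (n_classes,) arrays, one per Norm model.
    devadv_scores: (n_classes,) array from the DevAdv model.
    Returns the index of the winning class.
    """
    combined = np.sum(norm_scores, axis=0) - devadv_scores
    return int(np.argmax(combined))

# Hypothetical 3-class example: two Norm models favor class 1,
# while the DevAdv model dissents in favor of class 0.
norm = [np.array([0.2, 0.5, 0.3]), np.array([0.1, 0.6, 0.3])]
devadv = np.array([0.7, 0.1, 0.2])
print(devils_advocate_predict(norm, devadv))  # → 1
```

Subtracting the dissenter's score penalizes classes the DevAdv model favors, so the Norm models must agree strongly on a class for it to win.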