We develop a novel approach for confidently accelerating inference in the large and expensive multilayer Transformers that are now ubiquitous in natural language processing (NLP). Amortized or approximate computational methods increase efficiency, but can come with unpredictable performance costs. In this work, we present CATs -- Confident Adaptive Transformers -- in which we simultaneously increase computational efficiency, while guaranteeing a specifiable degree of consistency with the original model with high confidence. Our method trains additional prediction heads on top of intermediate layers, and dynamically decides when to stop allocating computational effort to each input using a meta consistency classifier. To calibrate our early prediction stopping rule, we formulate a unique extension of conformal prediction. We demonstrate the effectiveness of this approach on four classification and regression tasks.
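To make the mechanism concrete, below is a minimal Python sketch of the two pieces the abstract describes: calibrating an early-exit confidence threshold from a held-out set, and an inference loop that exits at the first intermediate head the meta classifier trusts. All names (calibrate_threshold, adaptive_inference, meta_clf, exit_heads) are illustrative assumptions, not the paper's released implementation, and the calibration shown is a generic split-conformal-style recipe rather than the paper's exact extension of conformal prediction.

```python
# Hedged sketch of confident early exiting -- not the authors' code.
# Assumption: on a calibration set we have, per example, the meta
# classifier's consistency score and whether the early prediction
# actually matched the full model's output.
import numpy as np

def calibrate_threshold(meta_scores, consistent, epsilon):
    """Pick an exit threshold tau so that, in the split-conformal sense,
    exiting whenever the meta score >= tau disagrees with the full model
    with probability at most ~epsilon on exchangeable data."""
    scores = np.asarray(meta_scores, dtype=float)
    consistent = np.asarray(consistent, dtype=bool)
    # Nonconformity scores: meta scores of calibration examples whose
    # early prediction DISAGREED with the full model. Exiting above any
    # of these values would have produced an inconsistent answer.
    bad = scores[~consistent]
    n = len(bad)
    if n == 0:
        return -np.inf  # no observed disagreements: always exit early
    # Conservative empirical (1 - epsilon) quantile with the usual
    # (n + 1) finite-sample correction.
    k = min(int(np.ceil((n + 1) * (1 - epsilon))), n)
    return np.sort(bad)[k - 1]

def adaptive_inference(hidden_states, exit_heads, meta_clf, tau):
    """Run the model layer by layer; stop at the first layer whose
    head the meta classifier deems consistent with the full model."""
    pred = None
    for layer, h in enumerate(hidden_states):
        pred = exit_heads[layer](h)          # intermediate prediction head
        if meta_clf(h, pred) >= tau:         # confident enough: exit early
            return pred, layer
    return pred, len(hidden_states) - 1      # fell through to full depth

if __name__ == "__main__":
    # Toy calibration demo with synthetic scores (higher score -> more
    # likely the early prediction agrees with the full model).
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=1000)
    consistent = scores > 0.3
    tau = calibrate_threshold(scores, consistent, epsilon=0.05)
    print(f"calibrated exit threshold tau = {tau:.3f}")
```

In a real deployment, meta_clf would be the trained meta consistency classifier scoring features of the current hidden state and head prediction; the conformal guarantee then says early exits agree with the original full-depth model at the specified confidence level.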