Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.
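The core idea of AdapterDrop can be illustrated with a minimal, framework-free sketch (an assumption for illustration only: the toy `adapter` and `transformer_layer` functions below stand in for real bottleneck-adapter and transformer-layer computations; `drop_n` is a hypothetical parameter naming the number of lower layers whose adapters are removed):

```python
def adapter(h, scale=0.5):
    """Toy stand-in for a bottleneck adapter (down-project -> nonlinearity -> up-project)."""
    return h + scale * h

def transformer_layer(h):
    """Toy stand-in for a transformer layer (attention + feed-forward)."""
    return h * 1.1

def forward(h, num_layers=12, drop_n=0):
    """Run the layer stack; AdapterDrop removes adapters from the lowest `drop_n` layers."""
    adapter_calls = 0
    for layer_idx in range(num_layers):
        h = transformer_layer(h)
        if layer_idx >= drop_n:  # adapters only run in the upper layers
            h = adapter(h)
            adapter_calls += 1
    return h, adapter_calls

# With drop_n=5, only the upper 7 of 12 layers execute their adapters,
# so the adapter overhead at inference time shrinks accordingly.
out, calls = forward(1.0, num_layers=12, drop_n=5)
```

Because the lower layers carry no adapters, their activations can also be shared across tasks when performing inference over multiple tasks simultaneously, which is where the computational savings reported above come from.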