Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining. In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization. Experiments show that we can achieve largely monotonic behavior. Performance is mixed, with larger gains on top of RNN baselines. General monotonicity does not benefit transformer multi-head attention; however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior.
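The abstract does not spell out the exact form of the monotonicity loss, so the following is only a minimal sketch of one plausible formulation: given standard (soft) attention weights, penalize any decrease in the expected attended source position between consecutive target steps. The function name `monotonicity_loss`, the padding-mask handling, and the weighting factor `lambda_mono` below are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def monotonicity_loss(attn, pad_mask=None):
    """Hypothetical sketch: penalize backward jumps in the expected
    attended source position across consecutive target steps.

    attn:     (batch, tgt_len, src_len) attention weights; each row sums to 1.
    pad_mask: optional (batch, tgt_len) float mask, 1.0 for real target steps.
    """
    src_len = attn.size(-1)
    positions = torch.arange(src_len, dtype=attn.dtype, device=attn.device)
    expected = attn @ positions                         # (batch, tgt_len): expected source index per step
    backward_step = expected[:, :-1] - expected[:, 1:]  # positive where attention moves left
    penalty = torch.relu(backward_step)                 # only non-monotonic moves are penalized
    if pad_mask is not None:
        valid = pad_mask[:, :-1] * pad_mask[:, 1:]      # pairs where both steps are real tokens
        return (penalty * valid).sum() / valid.sum().clamp(min=1)
    return penalty.mean()

# Illustrative usage: add the penalty to the usual cross-entropy objective.
# total_loss = ce_loss + lambda_mono * monotonicity_loss(attn_weights, target_mask)
```

Because the penalty is computed directly on the attention weight matrix, a loss of this shape can be attached to an unmodified RNN attention layer or, as the abstract suggests, to only a chosen subset of transformer heads.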