We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. Because transformers contain multiple layers of multi-head self-attention, information in the sentence gets distributed across many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module which explicitly encourages sharpness of the attention weights can significantly outperform existing methods.
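The idea of encouraging sharpness can be illustrated with a minimal sketch: attend over per-token scores with a softmax, then penalize high entropy of the resulting attention distribution so that the sentence-level decision concentrates on a few tokens. The function names, logits, and the choice of an entropy penalty here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of attention logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def sharpness_penalty(weights, eps=1e-9):
    # Entropy of the attention distribution; adding this term to the
    # training loss pushes entropy down, i.e. makes the weights sharper.
    # (Illustrative regularizer, not necessarily the paper's exact one.)
    return -np.sum(weights * np.log(weights + eps))

# Hypothetical per-token attention logits for a 4-token sentence.
scores = np.array([0.1, 2.5, 0.3, 0.2])
weights = softmax(scores)          # soft attention weights, sum to 1
penalty = sharpness_penalty(weights)
```

A sharper distribution yields a lower penalty: a near one-hot attention vector has entropy close to 0, while uniform attention over n tokens has entropy log(n), so minimizing this term during training drives the model to commit to specific tokens, which is what enables token-level labels to be read off the attention weights in the zero-shot setting.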