Natural conversations are filled with disfluencies. This study investigates if and how BERT understands disfluency with three experiments: (1) a behavioural study using a downstream task, (2) an analysis of sentence embeddings, and (3) an analysis of the attention mechanism on disfluency. The behavioural study shows that, without fine-tuning on disfluent data, BERT does not suffer a significant performance loss when presented with disfluent rather than fluent inputs (exp1). Analysis of the sentence embeddings of disfluent and fluent sentence pairs reveals that the deeper the layer, the more similar their representations become (exp2). This indicates that the deep layers of BERT are relatively invariant to disfluency. We pinpoint attention as a potential mechanism that could explain this phenomenon (exp3). Overall, the study suggests that BERT has knowledge of disfluency structure. We emphasise the potential of using BERT to understand natural utterances without disfluency removal.
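As an illustration of the kind of layer-wise comparison described for exp2, the sketch below computes the cosine similarity between mean-pooled hidden states of a fluent and a disfluent version of the same sentence at every BERT layer. This is not the authors' implementation: the use of `bert-base-uncased`, mean pooling, and the example sentence pair are assumptions made for demonstration only, using the HuggingFace `transformers` library.

```python
# Minimal sketch (assumptions: bert-base-uncased, mean pooling, illustrative sentence pair).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def layer_embeddings(sentence):
    """Return one mean-pooled sentence vector per layer (embedding layer + 12 encoder layers)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states: tuple of (1, seq_len, hidden_size) tensors, one per layer
    return [h.mean(dim=1).squeeze(0) for h in outputs.hidden_states]

# Hypothetical fluent/disfluent pair, for illustration only.
fluent = "I want a flight to Boston."
disfluent = "I want a flight to Denver, uh, I mean to Boston."

for layer, (f, d) in enumerate(zip(layer_embeddings(fluent),
                                   layer_embeddings(disfluent))):
    sim = torch.cosine_similarity(f, d, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity = {sim:.3f}")
```

If the abstract's finding holds, the printed similarities should tend to increase toward the deeper layers, reflecting representations that grow more invariant to the disfluent material.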