Metaphor is an indispensable part of human cognition and everyday communication. Much research has been conducted to elucidate metaphor processing in the mind/brain and the role it plays in communication. In recent years, metaphor processing systems have benefited greatly from these studies, as well as from the rapid advances in deep learning for natural language processing (NLP). This paper provides a comprehensive review and discussion of recent developments in automated metaphor processing, in light of the findings about metaphor in the mind, language, and communication, and from the perspective of downstream NLP tasks.
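To make the task concrete: automated metaphor processing is most commonly operationalized as token-level sequence labeling, where each word in a sentence is tagged as metaphorical or literal. The following is a minimal sketch of that framing, assuming a BiLSTM tagger of the kind common in this literature; it is not any specific system from the survey, and the layer sizes, toy vocabulary, and example sentence are purely illustrative.

# A minimal sketch (assumptions noted above) of metaphor detection as
# per-token binary sequence labeling: 1 = metaphorical, 0 = literal.
import torch
import torch.nn as nn

class BiLSTMMetaphorTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Two output classes per token: literal vs. metaphorical.
        self.classifier = nn.Linear(2 * hidden_dim, 2)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> logits: (batch, seq_len, 2)
        states, _ = self.lstm(self.embed(token_ids))
        return self.classifier(states)

# Toy usage: in "he attacked my argument", "attacked" is metaphorical.
vocab = {"he": 0, "attacked": 1, "my": 2, "argument": 3}
model = BiLSTMMetaphorTagger(vocab_size=len(vocab))
ids = torch.tensor([[vocab[w] for w in ["he", "attacked", "my", "argument"]]])
gold = torch.tensor([[0, 1, 0, 0]])  # per-token metaphoricity labels
loss = nn.CrossEntropyLoss()(model(ids).transpose(1, 2), gold)
loss.backward()

In practice the static embedding layer is typically replaced or augmented with contextualized representations (e.g., from BERT), which is where much of the recent performance gain discussed in the survey comes from.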