In the Arabic language, diacritics are used to specify meanings as well as pronunciations. However, diacritics are often omitted from written texts, which increases the number of possible meanings and pronunciations. This leads to ambiguous text and makes computational processing of undiacritized text more difficult. In this paper, we propose a Linguistic Attentional Model for Arabic text Diacritization (LAMAD). In LAMAD, a new linguistic feature representation is presented, which utilizes both word and character contextual features. Then, a linguistic attention mechanism is proposed to capture the important linguistic features. In addition, we explore the impact of the linguistic features extracted from the text on Arabic text diacritization (ATD) by introducing them to the linguistic attention mechanism. Extensive experimental results on three datasets of different sizes illustrate that LAMAD outperforms the existing state-of-the-art models.
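The abstract does not give the model's internals, but the general idea it describes (fusing word- and character-level contextual features, then weighting positions with an attention mechanism) can be illustrated with a minimal NumPy sketch. All shapes, parameter names, and the single-query attention form below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linguistic_attention(word_feats, char_feats, w_query):
    """Toy sketch: fuse word- and character-level contextual features
    per position, then compute attention weights against a learned
    query vector. Hypothetical shapes, not the paper's architecture."""
    # Concatenate the two feature streams per position: (T, d_w + d_c)
    fused = np.concatenate([word_feats, char_feats], axis=-1)
    # Score each position against the query vector: (T,)
    scores = fused @ w_query
    weights = softmax(scores)
    # Attention-weighted summary of the sequence: (d_w + d_c,)
    context = weights @ fused
    return context, weights

rng = np.random.default_rng(0)
T, d_w, d_c = 5, 8, 4          # sequence length, word dim, char dim
word_feats = rng.normal(size=(T, d_w))
char_feats = rng.normal(size=(T, d_c))
w_query = rng.normal(size=(d_w + d_c,))

context, weights = linguistic_attention(word_feats, char_feats, w_query)
print(context.shape, weights.shape)  # (12,) (5,)
```

In a real diacritization model the word and character streams would come from contextual encoders (e.g. recurrent or transformer layers) and the attention output would feed a per-character diacritic classifier; this sketch only shows the fuse-then-attend pattern.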