Character-based word-segmentation models have been extensively applied to agglutinative languages, including Thai, due to their high performance. These models estimate word boundaries from a character sequence. However, a character unit in a sequence has no inherent meaning, compared with word, subword, and character-cluster units. We propose a Thai word-segmentation model that uses various types of information, including words, subwords, and character clusters, from a character sequence. Our model applies multiple attentions to refine segmentation inferences by estimating the significant relationships among characters and the various unit types. The experimental results indicate that our model can outperform other state-of-the-art Thai word-segmentation models.
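As a rough illustration of this kind of multi-attention fusion, the sketch below (a minimal PyTorch example under assumed names and dimensions, not the paper's actual architecture) lets character representations attend separately to word, subword, and character-cluster candidates before a per-character boundary classifier; the shared unit embedding table and all hyperparameters are illustrative assumptions.

    # Minimal sketch: characters attend to word / subword / cluster units,
    # then a linear layer predicts a boundary label for each character.
    import torch
    import torch.nn as nn

    class MultiSourceSegmenter(nn.Module):
        def __init__(self, char_vocab, unit_vocab, dim=128, heads=4):
            super().__init__()
            self.char_emb = nn.Embedding(char_vocab, dim)
            # One shared embedding table for word/subword/cluster ids (for brevity).
            self.unit_emb = nn.Embedding(unit_vocab, dim)
            # One attention module per auxiliary unit type.
            self.attn = nn.ModuleList(
                [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(3)]
            )
            self.out = nn.Linear(dim * 4, 2)  # 2 classes: boundary / non-boundary

        def forward(self, chars, words, subwords, clusters):
            q = self.char_emb(chars)                    # (B, T, dim)
            fused = [q]
            for attn, units in zip(self.attn, (words, subwords, clusters)):
                kv = self.unit_emb(units)               # (B, U, dim)
                ctx, _ = attn(q, kv, kv)                # characters attend to units
                fused.append(ctx)
            return self.out(torch.cat(fused, dim=-1))   # (B, T, 2) boundary logits

    # Toy usage: one sequence of 10 characters with 4 candidate units per type.
    model = MultiSourceSegmenter(char_vocab=100, unit_vocab=500)
    logits = model(torch.randint(0, 100, (1, 10)),
                   torch.randint(0, 500, (1, 4)),
                   torch.randint(0, 500, (1, 4)),
                   torch.randint(0, 500, (1, 4)))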