Sememes are defined as the atomic units for describing the semantic meaning of concepts. Because manually annotating sememes is difficult and annotations are inconsistent across experts, the lexical sememe prediction task has been proposed. However, previous methods rely heavily on word or character embeddings and ignore fine-grained internal information. In this paper, we propose a novel pre-training method designed to better incorporate the internal information of Chinese characters. The glyph-enhanced Chinese character representation (GCC) is used to assist sememe prediction. We experiment with and evaluate our model on HowNet, a well-known sememe knowledge base. The experimental results show that our method outperforms existing models that do not use external information.
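The abstract does not spell out the architecture, but a minimal sketch can illustrate the general idea of a glyph-enhanced character representation for sememe prediction: render each character's glyph as a small image, encode it with a CNN, concatenate that vector with a learned character embedding, and score the sememe inventory as a multi-label classification. Everything below (the class name, layer sizes, 32x32 glyph images, and the toy character/sememe counts) is a hypothetical illustration, not the paper's actual model.

```python
# A minimal sketch (assumed architecture, not the authors' implementation)
# of a glyph-enhanced Chinese character representation for multi-label
# sememe prediction. Glyphs are assumed to be pre-rendered as 1x32x32
# grayscale images.
import torch
import torch.nn as nn

class GlyphEnhancedSememePredictor(nn.Module):
    def __init__(self, num_chars, num_sememes, emb_dim=128, glyph_dim=128):
        super().__init__()
        # Standard learned character embedding.
        self.char_emb = nn.Embedding(num_chars, emb_dim)
        # Small CNN encoder over the glyph image (hypothetical shape).
        self.glyph_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 64 x 8 x 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, glyph_dim), nn.ReLU(),
        )
        # Multi-label classifier over the sememe inventory.
        self.classifier = nn.Linear(emb_dim + glyph_dim, num_sememes)

    def forward(self, char_ids, glyph_images):
        # char_ids: (batch,)   glyph_images: (batch, 1, 32, 32)
        e = self.char_emb(char_ids)
        g = self.glyph_cnn(glyph_images)
        # Fuse the symbolic embedding with the visual glyph feature.
        return self.classifier(torch.cat([e, g], dim=-1))  # sememe logits

# Toy usage with made-up vocabulary/sememe sizes; sememes whose
# sigmoid probability exceeds 0.5 would be predicted.
model = GlyphEnhancedSememePredictor(num_chars=21000, num_sememes=2000)
logits = model(torch.tensor([42]), torch.rand(1, 1, 32, 32))
probs = torch.sigmoid(logits)
```

Trained with a binary cross-entropy loss per sememe, such a model can exploit the fact that visually similar glyph components (radicals) often correlate with shared semantics, which is the intuition behind using glyph information at all.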