In parataxis languages like Chinese, word meanings are constructed using specific word-formations, which can help to disambiguate word senses. However, such knowledge is rarely explored in previous word sense disambiguation (WSD) methods. In this paper, we propose to leverage word-formation knowledge to enhance Chinese WSD. We first construct a large-scale Chinese lexical sample WSD dataset with word-formations. Then, we propose a model FormBERT to explicitly incorporate word-formations into sense disambiguation. To further enhance generalizability, we design a word-formation predictor module in case word-formation annotations are unavailable. Experimental results show that our method brings substantial performance improvement over strong baselines.
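The abstract does not specify how formation knowledge enters the model, so the following is only a toy sketch of one plausible reading: add an embedding of the word-formation type to the target word's contextual vector before sense classification, falling back to a predicted distribution over formation types when no gold label exists (mirroring the predictor-module fallback). All names, the formation inventory, and the dimensions here are illustrative assumptions, not the paper's actual FormBERT design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inventory of Chinese word-formation types (illustrative only;
# the real taxonomy is defined in the paper's dataset, not reproduced here).
FORMATIONS = ["modifier-head", "verb-object", "coordinate", "subject-predicate"]

HIDDEN = 8        # toy hidden size standing in for a BERT-style 768
NUM_SENSES = 3    # toy number of candidate senses for the target word

# Randomly initialized stand-ins for learnable parameters.
formation_emb = rng.normal(size=(len(FORMATIONS), HIDDEN))
W = rng.normal(size=(HIDDEN, NUM_SENSES))
b = np.zeros(NUM_SENSES)

def disambiguate(context_vec, formation=None, formation_probs=None):
    """Score candidate senses from a contextual vector plus formation knowledge.

    With a gold formation label, add its embedding directly; with only a
    predicted distribution over formation types, add the expected embedding.
    """
    if formation is not None:
        h = context_vec + formation_emb[FORMATIONS.index(formation)]
    elif formation_probs is not None:
        h = context_vec + formation_probs @ formation_emb  # expected embedding
    else:
        h = context_vec
    logits = h @ W + b
    return int(np.argmax(logits))

context = rng.normal(size=HIDDEN)  # stand-in for a BERT target-word vector
pred_gold = disambiguate(context, formation="verb-object")
pred_soft = disambiguate(context, formation_probs=np.full(len(FORMATIONS), 0.25))
```

The soft-fallback path is what makes the approach usable at test time on words whose formation annotation is unavailable, at the cost of propagating predictor error into the sense scores.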