Short text classification is a fundamental task in natural language processing. It is difficult due to the lack of context information and of labeled data in practice. In this paper, we propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification. First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs, which introduce more semantic and syntactic information. Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts. Thus, compared with existing GNN-based methods, SHINE can better exploit interactions between nodes of the same type and capture similarities between short texts. Extensive experiments on various benchmark short text datasets show that SHINE consistently outperforms state-of-the-art methods, especially with fewer labels.
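To make the pipeline concrete, the following is a minimal, illustrative sketch, not the authors' implementation: it encodes a word-level component graph with simple GCN layers, pools word embeddings into short-document representations, dynamically builds a document-document graph from pairwise similarities, and propagates label information over that graph. All tensor shapes, the similarity threshold, the hyper-parameters, and the synthetic data are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric GCN normalization D^-1/2 (A + I) D^-1/2."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        return F.relu(self.linear(adj_norm @ x))


class ShineSketch(nn.Module):
    """Hypothetical two-level model: word-level graph, then a learned document graph."""

    def __init__(self, feat_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.word_gcn = GCNLayer(feat_dim, hid_dim)    # word-level component graph
        self.doc_gcn = GCNLayer(hid_dim, num_classes)  # dynamically learned document graph

    def forward(self, word_feats, word_adj, doc_word):
        # (1) encode the word-level component graph into word embeddings
        word_emb = self.word_gcn(word_feats, normalize_adj(word_adj))
        # (2) aggregate word embeddings into short-document representations
        doc_emb = doc_word @ word_emb / doc_word.sum(dim=1, keepdim=True).clamp(min=1)
        # (3) dynamically build a document graph from cosine similarities,
        #     keeping only sufficiently similar pairs (threshold is an assumption)
        sim = F.cosine_similarity(doc_emb.unsqueeze(1), doc_emb.unsqueeze(0), dim=-1)
        doc_adj = (sim > 0.5).float() * sim
        # (4) propagate label information among similar short texts
        return self.doc_gcn(doc_emb, normalize_adj(doc_adj))


if __name__ == "__main__":
    torch.manual_seed(0)
    num_words, num_docs, feat_dim = 20, 8, 16
    word_feats = torch.randn(num_words, feat_dim)
    word_adj = (torch.rand(num_words, num_words) > 0.8).float()
    word_adj = ((word_adj + word_adj.T) > 0).float()            # symmetric word co-occurrence graph
    doc_word = (torch.rand(num_docs, num_words) > 0.7).float()  # document-word membership matrix
    model = ShineSketch(feat_dim, hid_dim=32, num_classes=3)
    logits = model(word_feats, word_adj, doc_word)
    print(logits.shape)  # torch.Size([8, 3])
```

The sketch uses a single word-level graph and cosine similarity for the document graph; the actual method additionally combines several word-level component graphs (e.g., semantic and syntactic) within the hierarchical heterogeneous graph, as the abstract describes.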