
Multi-Label Classification of Chinese Humor Texts Using Hypergraph Attention Networks


Publication date: 2021
Research language: English

We use Hypergraph Attention Networks (HyperGAT) to recognize multiple labels of Chinese humor texts. We first represent each joke as a hypergraph, constructing hyperedges from both sequential and semantic hyperedge structures. Attention mechanisms are then adopted to aggregate the context information embedded in nodes and hyperedges. Finally, the trained HyperGAT completes the multi-label classification task. Experimental results on the Chinese humor multi-label dataset show that the HyperGAT model outperforms previous sequence-based (CNN, BiLSTM, FastText) and graph-based (Graph-CNN, TextGCN, Text Level GNN) deep learning models.
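To make the aggregation step concrete, below is a minimal sketch of one hypergraph attention layer in PyTorch. It is an illustration under our own assumptions rather than the authors' released code: the `HyperGATLayer` name, the dense `incidence` matrix, and all dimensions are invented, and every node is assumed to belong to at least one hyperedge (and every hyperedge to contain at least one node).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperGATLayer(nn.Module):
    """One node -> hyperedge -> node attention pass over a hypergraph (sketch)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.node_proj = nn.Linear(in_dim, out_dim)
        self.edge_proj = nn.Linear(out_dim, out_dim)
        self.att_node = nn.Linear(out_dim, 1)      # scores member nodes of a hyperedge
        self.att_edge = nn.Linear(2 * out_dim, 1)  # scores hyperedges incident to a node

    def forward(self, x, incidence):
        # x: (N, in_dim) node features; incidence: (N, E) binary membership matrix.
        h = self.node_proj(x)                       # (N, out_dim)
        not_member = (incidence == 0)               # (N, E)

        # Step 1: node-level attention builds hyperedge features from member nodes.
        s = self.att_node(h)                        # (N, 1)
        a = s.expand(-1, incidence.size(1)).masked_fill(not_member, float("-inf"))
        a = F.softmax(a, dim=0).T                   # (E, N): weights over nodes per edge
        e = F.elu(self.edge_proj(a @ h))            # (E, out_dim) hyperedge features

        # Step 2: edge-level attention aggregates incident hyperedges into nodes.
        N, E = incidence.shape
        pair = torch.cat([h.unsqueeze(1).expand(N, E, -1),
                          e.unsqueeze(0).expand(N, E, -1)], dim=-1)
        b = self.att_edge(pair).squeeze(-1)         # (N, E)
        b = b.masked_fill(not_member, float("-inf"))
        b = F.softmax(b, dim=1)                     # weights over edges per node
        return F.elu(b @ e)                         # (N, out_dim) updated node features

# Example: six word nodes of one joke, two hyperedges (e.g., one sequential,
# one semantic); the incidence pattern here is arbitrary.
layer = HyperGATLayer(16, 8)
x = torch.randn(6, 16)
inc = torch.tensor([[1., 0.], [1., 0.], [1., 1.],
                    [0., 1.], [0., 1.], [1., 1.]])
out = layer(x, inc)                                 # (6, 8)
```

Pooling the resulting node features into a joke-level vector and passing it through a sigmoid output layer would complete the multi-label setup the abstract describes.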



Related research

In this study, we examine language change in Chinese Biji using a classification task: classifying Ancient Chinese texts by time period. Specifically, we focus on a unique genre in classical Chinese literature: Biji (literally "notebook" or "brush notes"), i.e., collections of anecdotes, quotations, and anything else their authors considered noteworthy. Biji span hundreds of years across many dynasties and preserve informal language in written form. For these reasons, they are regarded as a good resource for investigating language change in Chinese (Fang, 2010). In this paper, we create a new dataset of 108 Biji across four dynasties. Based on this dataset, we first introduce a time-period classification task for Chinese. We then investigate different feature representation methods for classification. The results show that models using contextualized embeddings perform best. An analysis of the top features chosen by the word n-gram model (after bleaching proper nouns) confirms that these features are informative and correspond to observations and assumptions made by historical linguists.
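As a hedged illustration of the n-gram baseline with proper-noun bleaching (not the authors' pipeline), the scikit-learn sketch below replaces proper nouns with a placeholder before vectorization; the texts, labels, and proper-noun list are placeholder assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def bleach(text, proper_nouns):
    # Replace proper nouns with a neutral placeholder so that n-gram features
    # capture period style rather than specific names.
    for name in proper_nouns:
        text = text.replace(name, "\u25a1")
    return text

texts = ["passage one ...", "passage two ...",
         "passage three ...", "passage four ..."]   # Biji passages (placeholders)
labels = ["Tang", "Tang", "Song", "Song"]            # time-period labels (hypothetical)
proper_nouns = ["長安", "洛陽"]                       # hypothetical proper-noun list

X = [bleach(t, proper_nouns) for t in texts]
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # char n-grams suit Chinese
    LogisticRegression(max_iter=1000),
)
model.fit(X, labels)
print(model.predict([bleach("an unseen passage ...", proper_nouns)]))
```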
In this paper, we propose a knowledge infusion mechanism to incorporate domain knowledge into language transformers. Weakly supervised data is regarded as the main source for knowledge acquisition. We pre-train the language models to capture masked knowledge of focuses and aspects, and then fine-tune them to obtain better performance on the downstream tasks. Due to the lack of publicly available datasets for multi-label classification of Chinese medical questions, we crawled questions from medical question/answer forums and manually annotated them with eight predefined classes: persons and organizations, symptom, cause, examination, disease, information, ingredient, and treatment. The final dataset contains 1,814 questions with 2,340 labels, an average of 1.29 labels per question. We used Baidu Medical Encyclopedia as the knowledge resource. Two transformers, BERT and RoBERTa, were implemented to compare performance on our constructed dataset. Experimental results showed that our proposed model with the knowledge infusion mechanism achieves better performance regardless of which evaluation metric is considered (Macro F1, Micro F1, Weighted F1, or Subset Accuracy).
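A minimal sketch of the multi-label fine-tuning setup, assuming the Hugging Face transformers library and the eight classes listed above; the checkpoint name, example question, and decision threshold are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["persons_and_organizations", "symptom", "cause", "examination",
          "disease", "information", "ingredient", "treatment"]

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # switches to BCE-with-logits loss
)

question = "头痛发烧应该做什么检查？"            # hypothetical medical question
target = torch.zeros(1, len(LABELS))
target[0, LABELS.index("symptom")] = 1.0        # a question may carry several labels
target[0, LABELS.index("examination")] = 1.0

batch = tok(question, return_tensors="pt", truncation=True)
out = model(**batch, labels=target)             # out.loss: BCE over the 8 labels
probs = torch.sigmoid(out.logits)               # independent per-label probabilities
preds = (probs > 0.5).int()                     # threshold to obtain the label set
```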
In today's world of advanced technology, social media networks play a significant role in human lives. Censorship is the suppression of speech, public communication, or other information; the censored content may be considered harmful, sensitive, or inconvenient, and censorship is conducted by authorities such as institutes, governments, and other organizations. This paper implements a model that classifies censored and uncensored tweets as a binary classification task, and describes our submission to the Censorship shared task of the NLP4IF 2021 workshop. We used various transformer-based pre-trained models, among which XLNet achieved the best accuracy. We fine-tuned the model for better performance, achieved a reasonable accuracy, and calculated other performance metrics.
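For illustration only, a minimal binary fine-tuning setup with XLNet via Hugging Face transformers is sketched below; the checkpoint name, example tweet, and label encoding are assumptions, and data loading and the optimizer loop are elided.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=2)   # 0 = uncensored, 1 = censored (assumed)

batch = tok(["example tweet text"], return_tensors="pt", padding=True)
labels = torch.tensor([1])              # hypothetical gold label
out = model(**batch, labels=labels)     # cross-entropy loss for the binary task
out.loss.backward()                     # one fine-tuning step (optimizer elided)
```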
Concept normalization of clinical texts to standard medical classifications and ontologies is a task of high importance for healthcare and medical research. We attempt to solve this problem through automatic SNOMED CT encoding, where SNOMED CT is one of the most widely used and comprehensive clinical term ontologies. Applying basic deep learning models, however, leads to undesirable results due to the unbalanced nature of the data and the extreme number of classes. We propose a classification procedure that features a multiple-step workflow consisting of label clustering, multi-cluster classification, and clusters-to-labels mapping. For multi-cluster classification, BioBERT is fine-tuned over our custom dataset. The clusters-to-labels mapping is carried out by a one-vs-all classifier (SVC) applied to every single cluster. We also present the steps for automatic dataset generation of textual descriptions annotated with SNOMED CT codes, based on public data and linked open data. To cope with the fact that our dataset is highly unbalanced, some data augmentation methods are applied. The results of the conducted experiments show the high accuracy and reliability of our approach for predicting SNOMED CT codes relevant to a clinical text.
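To make the clusters-to-labels step concrete, here is a rough scikit-learn sketch under stated assumptions: one one-vs-rest SVC per label cluster, with random stand-in features and hypothetical SNOMED CT codes; the upstream cluster prediction (BioBERT in the paper) is stubbed out.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Suppose cluster_data[c] = (X, y) holds feature vectors and SNOMED CT codes
# for the texts routed to cluster c. Features and codes below are stand-ins.
rng = np.random.default_rng(0)
cluster_data = {
    0: (rng.normal(size=(20, 8)), rng.choice(["38341003", "22298006"], 20)),
    1: (rng.normal(size=(20, 8)), rng.choice(["44054006", "73211009"], 20)),
}

# One one-vs-rest SVC per cluster, mapping texts in that cluster to codes.
cluster_models = {
    c: OneVsRestClassifier(SVC()).fit(X, y)
    for c, (X, y) in cluster_data.items()
}

x_new = rng.normal(size=(1, 8))
c_pred = 0                                        # cluster chosen upstream
code = cluster_models[c_pred].predict(x_new)[0]   # final SNOMED CT code
print(code)
```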
Information retrieval has become one of today's most important issues and challenges, a natural consequence of rapid technological development, the enormous progress of human thought and scientific research across all branches of knowledge, and the accompanying growth in the amount of information to a degree that makes it difficult to control and handle. In this project we therefore aim to present an information retrieval system that classifies documents according to their content. Because the information retrieval process involves a degree of uncertainty at every stage, we rely on Bayesian networks to perform the classification: probabilistic networks that convert information into cause-and-effect relationships and are considered one of the most promising methods for handling uncertainty. We first introduce the fundamentals of Bayesian networks and explain a set of algorithms for constructing them, as well as the inference algorithms used (of which there are two kinds, exact and approximate). The system performs a series of preprocessing operations on the document texts, then applies statistical and probabilistic operations in the training phase to obtain the Bayesian network structure corresponding to the training data; an input document is then classified using a set of exact inference algorithms over the resulting Bayesian network. Since the performance of any information retrieval system usually becomes more accurate when the relationships between the terms contained in a document collection are exploited, we take two kinds of relationships into account when building the network: 1) relationships between terms; 2) relationships between terms and classes.
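As a toy illustration of the exact-inference step (not the project's system), the sketch below enumerates a tiny Bayesian network with a class node and term nodes, omitting the term-term relationships mentioned above; all probabilities are made up.

```python
classes = ["sports", "politics"]
prior = {"sports": 0.5, "politics": 0.5}          # P(C), assumed uniform
# P(term present | C) for each term node in the network (illustrative values)
p_term = {
    "goal": {"sports": 0.70, "politics": 0.10},
    "vote": {"sports": 0.05, "politics": 0.60},
}

def posterior(evidence):
    """Exact P(C | observed terms) by enumerating the joint distribution."""
    scores = {}
    for c in classes:
        p = prior[c]
        for term, present in evidence.items():
            pt = p_term[term][c]
            p *= pt if present else (1 - pt)      # multiply in each term's CPT entry
        scores[c] = p
    z = sum(scores.values())                      # normalize over classes
    return {c: p / z for c, p in scores.items()}

# Classify an input document from which "goal" was extracted but not "vote".
print(posterior({"goal": True, "vote": False}))
```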
