
Application of Mix-Up Method in Document Classification Task Using BERT


Publication date: 2021. Language of research: English.





The mix-up method (Zhang et al., 2017), a data augmentation technique, is known to be easy to implement and highly effective. Although mix-up was designed for image recognition, it can also be applied to natural language processing. In this paper, we apply the mix-up method to a document classification task using bidirectional encoder representations from transformers (BERT) (Devlin et al., 2018). Since BERT accepts two-sentence input, we concatenated the word sequences of two documents with different labels and used a mixed one-hot vector over the classes as the supervised target. In an experiment on the livedoor news corpus, a Japanese corpus, we compared the classification accuracy of two strategies for selecting the documents to be concatenated against that of ordinary document classification. We found that the proposed method outperforms ordinary classification when documents with underrepresented labels are preferentially mixed, which indicates that how documents are chosen for mix-up has a significant impact on the results.
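As a rough illustration of this setup, the sketch below pairs two documents as BERT's two-segment input and trains against a soft (mixed one-hot) label. It assumes Hugging Face transformers and PyTorch; the multilingual checkpoint, the helper name mix_pair, and the equal 0.5/0.5 label weights are stand-ins for illustration, not details from the paper.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

NUM_LABELS = 9  # the livedoor news corpus has nine categories

# Stand-in checkpoint: the paper works with Japanese text, so a Japanese
# BERT would be the natural choice; multilingual BERT is used here only
# to keep the sketch self-contained.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS
)

def mix_pair(doc_a: str, label_a: int, doc_b: str, label_b: int):
    """Encode two documents as BERT's two-segment input and build a
    soft (mixed one-hot) target over the label set."""
    enc = tokenizer(doc_a, doc_b, truncation=True,
                    max_length=512, return_tensors="pt")
    target = torch.zeros(1, NUM_LABELS)
    target[0, label_a] += 0.5  # equal weights are an assumption; the paper
    target[0, label_b] += 0.5  # mixes the labels of the two documents
    return enc, target

enc, target = mix_pair("sports article ...", 0, "tech article ...", 3)
logits = model(**enc).logits
# Cross-entropy against a soft probability target (PyTorch >= 1.10)
loss = torch.nn.functional.cross_entropy(logits, target)
loss.backward()
```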



References used
Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. (2017). mixup: Beyond Empirical Risk Minimization. arXiv:1710.09412.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.
https://aclanthology.org/
Related research

Information retrieval has become one of the most important issues and challenges in the world today, a natural consequence of rapid technological development, the enormous progress of human thought and of scientific research across all branches of knowledge, and the accompanying growth in the volume of information to a point where it is difficult to manage and work with. In this project we therefore aim to present an information retrieval system that classifies documents according to their content. Because the information retrieval process involves a degree of uncertainty at every stage, we rely on Bayesian networks to perform the classification: probabilistic networks that turn information into cause-and-effect relationships, considered one of the most promising approaches for handling uncertainty. We first introduce the fundamentals of Bayesian networks and describe a set of algorithms for constructing them, along with the inference algorithms used (of which there are two kinds, exact and approximate). The system applies a series of preprocessing operations to the document texts, then applies statistical and probabilistic operations in the training phase to obtain the Bayesian network structure corresponding to the training data; an input document is then classified using a set of exact inference algorithms on the resulting Bayesian network. Since the performance of any information retrieval system usually becomes more accurate when the relationships between the terms contained in a document collection are exploited, we take two types of relationships into account when building the network: 1) relationships between terms, and 2) relationships between terms and classes.
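The following toy sketch (not the authors' system) shows the general pattern the abstract describes: a Bayesian network with class-to-term and term-to-term edges, fitted on term-occurrence data and queried with exact inference. The pgmpy library, the network structure, and the toy data are all assumptions for illustration.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Toy training data: a class label plus binary term-occurrence features
data = pd.DataFrame({
    "cls":   ["sport", "sport", "tech", "tech", "tech", "sport"],
    "ball":  [1, 1, 0, 0, 0, 1],
    "cpu":   [0, 0, 1, 1, 0, 0],
    "score": [1, 0, 0, 1, 0, 1],
})

# Class -> term edges plus one term -> term edge, mirroring the two
# relation types the abstract mentions (term-term and term-class)
model = BayesianNetwork([("cls", "ball"), ("cls", "cpu"),
                         ("cls", "score"), ("ball", "score")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Exact inference: posterior over the class given the observed terms
infer = VariableElimination(model)
posterior = infer.query(variables=["cls"], evidence={"ball": 1, "cpu": 0})
print(posterior)
```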
The International Classification of Diseases (ICD) is a system for systematically recording patients' diagnoses. Clinicians or professional coders assign ICD codes to patients' medical records to facilitate funding, research, and administration. In most health facilities, clinical coding is a manual, time-demanding task that is prone to errors. A tool that automatically assigns ICD codes to free-text clinical notes could save time and reduce erroneous coding. While many previous studies have focused on ICD coding, research on Swedish patient records is scarce. This study explored different approaches to pairing Swedish clinical notes with ICD codes. KB-BERT, a BERT model pre-trained on Swedish text, was compared to the traditional supervised learning models Support Vector Machines, Decision Trees, and K-nearest Neighbours, which were used as baselines. When the ICD codes were grouped into ten blocks, KB-BERT was superior to the baseline models, obtaining an F1-micro of 0.80 and an F1-macro of 0.58. When the 263 full ICD codes were considered, KB-BERT was outperformed by all baseline models, recording an F1-micro and an F1-macro of zero. Wilcoxon signed-rank tests showed that the performance differences between KB-BERT and the baseline models were statistically significant.
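For context, a baseline of the kind the study compares against might look like the sketch below: TF-IDF features with a one-vs-rest linear SVM, scored by F1-micro and F1-macro. The toy notes and block labels are invented placeholders, not data from the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

# Placeholder clinical notes and ICD block labels (multi-label task)
notes = ["bröstsmärta och andnöd", "fraktur efter fall",
         "hosta och feber", "smärta i höft efter fall"]
labels = [["IX"], ["XIX"], ["X"], ["XIX", "XIII"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)           # binary indicator matrix
X = TfidfVectorizer().fit_transform(notes)

# One binary SVM per ICD block
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
pred = clf.predict(X)
print("F1-micro:", f1_score(Y, pred, average="micro"))
print("F1-macro:", f1_score(Y, pred, average="macro"))
```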
In this work, we present a method for content selection and document planning for automated news and report generation from structured statistical data such as that offered by the European Union's statistical agency, EuroStat. The method is driven by the data and is highly topic-independent within the statistical dataset domain. As our approach is not based on machine learning, it is suitable for introducing news automation to the wide variety of domains where no training data is available. As such, it is suitable as a low-cost (in terms of implementation effort) baseline for document structuring prior to introduction of domain-specific knowledge.
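A loose illustration of data-driven content selection, under heavy assumptions: score candidate statistics by a simple newsworthiness heuristic and render the winner through a fixed template. The field names and the heuristic are invented, not the authors' design.

```python
# Hypothetical EuroStat-style facts; all field names are placeholders
facts = [
    {"country": "FI", "indicator": "unemployment", "year": 2020, "change": -0.4},
    {"country": "FI", "indicator": "inflation",    "year": 2020, "change":  1.9},
]

def newsworthiness(fact):
    # Toy heuristic: a larger absolute change ranks higher
    return abs(fact["change"])

selected = max(facts, key=newsworthiness)
print(f"In {selected['year']}, {selected['indicator']} in {selected['country']} "
      f"changed by {selected['change']:+.1f} percentage points.")
```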
Bidirectional Encoder Representations from Transformers (BERT) has achieved state-of-the-art performance on several text classification tasks, such as GLUE and sentiment analysis. Recent work in the legal domain has started to use BERT for tasks such as legal judgement prediction and violation prediction. A common practice when using BERT is to fine-tune a pre-trained model on a target task and truncate the input texts to the size of the BERT input (e.g. at most 512 tokens). However, due to the unique characteristics of legal documents, it is not clear how to effectively adapt BERT to the legal domain. In this work, we investigate how to deal with long documents and how important it is to pre-train on documents from the same domain as the target task. We conduct experiments on two recent datasets: the ECHR Violation Dataset and the Overruling Task Dataset, which are multi-label and binary classification tasks, respectively. Importantly, documents in the ECHR Violation Dataset contain more than 1,600 tokens on average, while documents in the Overruling Task Dataset are shorter (at most 204 tokens). We thoroughly compare several techniques for adapting BERT to long documents and compare models pre-trained on the legal domain and on other domains. Our experimental results show that BERT must be explicitly adapted to handle long documents, as truncation leads to less effective performance. We also found that pre-training on documents similar to the target task results in more effective performance in several scenarios.
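One common way to adapt BERT beyond its 512-token input is sketched below, under assumptions: split the document into overlapping chunks, encode each, and mean-pool the [CLS] vectors before a classification head. This is representative of the family of techniques such work compares, not the paper's exact setup; the checkpoint and hyperparameters are placeholders.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(bert.config.hidden_size, 2)  # e.g. a binary task

def classify_long(text: str, chunk_len: int = 510, stride: int = 384):
    """Encode a long document as overlapping 512-token chunks and
    mean-pool the per-chunk [CLS] vectors into one document vector."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    cls_vecs = []
    for start in range(0, max(len(ids), 1), stride):
        chunk = ids[start:start + chunk_len]
        # Re-add [CLS]/[SEP] so each chunk is a valid BERT input
        input_ids = torch.tensor(
            [[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]])
        with torch.no_grad():
            out = bert(input_ids=input_ids)
        cls_vecs.append(out.last_hidden_state[:, 0])  # [CLS] embedding
        if start + chunk_len >= len(ids):
            break
    doc_vec = torch.cat(cls_vecs).mean(dim=0)  # pool chunk representations
    return head(doc_vec)                       # logits over the classes

logits = classify_long("very long legal document ... " * 200)
```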
This paper describes the system used by the AIMH Team to approach SemEval Task 6. We propose an approach that relies on a transformer-based architecture to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete transformer networks that operate on text and images and are mutually conditioned: one modality acts as the main one and the second intervenes to enrich it, yielding two distinct modes of operation. The outputs of the two transformers are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.
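The fusion step described above can be pictured with the schematic sketch below: two per-modality probability vectors averaged label-wise and trained with binary cross-entropy. The linear heads and random features stand in for DVTT's two full, mutually conditioned transformers, and the label count is a placeholder.

```python
import torch
import torch.nn as nn

NUM_TECHNIQUES = 20  # placeholder count of persuasion-technique labels

class TwoStreamFusion(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 768):
        super().__init__()
        self.text_head = nn.Linear(text_dim, NUM_TECHNIQUES)
        self.image_head = nn.Linear(image_dim, NUM_TECHNIQUES)

    def forward(self, text_feat, image_feat):
        # Per-modality probabilities for each label (multi-label setup)
        p_text = torch.sigmoid(self.text_head(text_feat))
        p_image = torch.sigmoid(self.image_head(image_feat))
        return (p_text + p_image) / 2  # average the inferred probabilities

model = TwoStreamFusion()
text_feat, image_feat = torch.randn(4, 768), torch.randn(4, 768)
targets = torch.randint(0, 2, (4, NUM_TECHNIQUES)).float()

probs = model(text_feat, image_feat)
loss = nn.BCELoss()(probs, targets)  # end-to-end binary cross-entropy
loss.backward()
```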
