
Classification of Code-Mixed Text Using Capsule Networks


Publication date: 2021
Language: English





A major challenge in analysing social media data belonging to languages that use non-English script is its code-mixed nature. Recent research has presented state-of-the-art contextual embedding models (both monolingual, such as BERT, and multilingual, such as XLM-R) as a promising approach. In this paper, we show that the performance of such embedding models depends on multiple factors, such as the level of code-mixing in the dataset and the size of the training dataset. We empirically show that a newly introduced Capsule+biGRU classifier can outperform a classifier built on English-BERT as well as XLM-R with a training dataset of only about 6,500 samples of Sinhala-English code-mixed data.
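The abstract names the Capsule+biGRU architecture but not its exact configuration. Below is a minimal PyTorch sketch of one plausible reading, assuming token embeddings feed a bidirectional GRU whose per-time-step states act as primary capsules that are dynamically routed to one class capsule per label; all layer sizes and the routing scheme are illustrative assumptions, not the paper's published settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: squashes vector length into (0, 1), keeps direction.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class CapsuleLayer(nn.Module):
    """Maps input capsules to class capsules with dynamic routing."""
    def __init__(self, in_caps, in_dim, out_caps, out_dim, routing_iters=3):
        super().__init__()
        self.routing_iters = routing_iters
        # One transform matrix per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):                       # u: (B, in_caps, in_dim)
        u = u[:, :, None, :, None]              # (B, in_caps, 1, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)        # (B, in_caps, out_caps, out_dim)
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
        for _ in range(self.routing_iters):
            c = F.softmax(b, dim=2)             # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))   # (B, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)       # agreement update
        return v

class CapsuleBiGRUClassifier(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=300, hidden=128,
                 seq_len=64, caps_dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Treat each biGRU time step as one primary capsule of size 2*hidden.
        self.caps = CapsuleLayer(in_caps=seq_len, in_dim=2 * hidden,
                                 out_caps=n_classes, out_dim=caps_dim)

    def forward(self, token_ids):               # (B, seq_len), padded to seq_len
        h, _ = self.gru(self.emb(token_ids))    # (B, seq_len, 2*hidden)
        v = self.caps(h)                        # (B, n_classes, caps_dim)
        return v.norm(dim=-1)                   # capsule length = class score
```

Class scores are the lengths of the class capsules; training would typically use a margin loss or cross-entropy over these lengths.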



Related research

Text generation is a highly active area of research in the computational linguistics community. Evaluating generated text is a challenging task, and multiple theories and metrics have been proposed over the years. Unfortunately, text generation and evaluation are relatively understudied in code-mixed languages, where words and phrases from multiple languages are mixed in a single utterance of text or speech, due to the scarcity of high-quality resources. To address this challenge, we present HinGE, a corpus for the widely popular code-mixed language Hinglish (code-mixing of Hindi and English). HinGE contains Hinglish sentences generated by humans as well as by two rule-based algorithms, corresponding to parallel Hindi-English sentences. In addition, we demonstrate the inefficacy of widely used evaluation metrics on code-mixed data. The HinGE dataset will facilitate the progress of natural language generation research in code-mixed languages.
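To illustrate the kind of metric inefficacy the abstract reports, here is a small sketch assuming NLTK is available: two Hinglish renderings of the same meaning can share almost no n-grams, so a surface-overlap metric such as BLEU scores them near zero. The sentences below are invented for illustration and are not drawn from HinGE.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two plausible Hinglish renderings of the same underlying sentence
# (hypothetical examples, not taken from the HinGE corpus).
reference = "mujhe yeh movie bahut pasand aayi".split()
candidate = "yeh film mujhe bohot achhi lagi".split()

smooth = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smooth)
print(f"BLEU = {score:.3f}")  # near zero despite equivalent meaning
```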
The problem of information retrieval has today become one of the most important issues and challenges occupying the world, as a natural consequence of rapid technological development, the enormous advances in human thought and in research and scientific studies across all branches of knowledge, and the accompanying growth in the amount of information to a point where it is difficult to control and handle. In this project we therefore aim to present an information retrieval system that classifies documents according to their content. Since the information retrieval process involves a degree of uncertainty at every stage, we rely on Bayesian networks to perform the classification: probabilistic networks that turn information into cause-and-effect relationships, and one of the most promising methods for handling uncertainty. We first introduce the fundamentals of Bayesian networks and explain a set of algorithms for constructing them, as well as the inference algorithms used (of which there are two kinds, exact and approximate). The system performs a set of preprocessing operations on the document texts, then applies statistical and probabilistic operations in the training phase to obtain the Bayesian network structure matching the training data; an input document is then classified using a set of exact inference algorithms on the resulting network. Since the performance of any information retrieval system usually becomes more accurate when the relationships between the terms contained in a document collection are exploited, we take two kinds of relationships into account when building the network: (1) relationships between terms, and (2) relationships between terms and classes.
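As a concrete reading of the classification step, here is a minimal sketch of the class-to-term portion of such a network: the naive-Bayes special case, where exact inference over the single class node reduces to a posterior computation. The term-to-term relationships described above would add further parent links to the term nodes and are omitted here.

```python
import numpy as np
from collections import Counter

class BayesNetTextClassifier:
    """Bayesian-network classifier with class -> term edges only
    (the naive-Bayes special case)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing for the conditional tables

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.vocab = sorted({t for d in docs for t in d.split()})
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        # Conditional probability table P(term | class).
        self.cpt = {}
        for c in self.classes:
            counts = Counter(t for d, y in zip(docs, labels) if y == c
                             for t in d.split())
            total = sum(counts.values()) + self.alpha * len(self.vocab)
            self.cpt[c] = {t: (counts[t] + self.alpha) / total
                           for t in self.vocab}

    def predict(self, doc):
        # Exact inference is trivial here: posterior over the single class node.
        scores = {}
        for c in self.classes:
            logp = np.log(self.prior[c])
            for t in doc.split():
                if t in self.cpt[c]:
                    logp += np.log(self.cpt[c][t])
            scores[c] = logp
        return max(scores, key=scores.get)

docs = ["information retrieval system", "bayesian network inference",
        "document ranking retrieval", "probabilistic network structure"]
labels = ["IR", "BN", "IR", "BN"]
clf = BayesNetTextClassifier()
clf.fit(docs, labels)
print(clf.predict("retrieval of documents"))   # -> "IR"
```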
Text classifiers are regularly applied to personal texts, leaving users of these classifiers vulnerable to privacy breaches. We propose a solution for privacy-preserving text classification that is based on Convolutional Neural Networks (CNNs) and Secure Multiparty Computation (MPC). Our method enables the inference of a class label for a personal text in such a way that (1) the owner of the personal text does not have to disclose their text to anyone in an unencrypted manner, and (2) the owner of the text classifier does not have to reveal the trained model parameters to the text owner or to anyone else. To demonstrate the feasibility of our protocol for practical private text classification, we implemented it in the PyTorch-based MPC framework CrypTen, using a well-known additive secret-sharing scheme in the honest-but-curious setting. We test the runtime of our privacy-preserving text classifier, which is fast enough to be used in practice.
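The shape of such a protocol in CrypTen looks roughly as follows. This sketch substitutes a small feed-forward stand-in for the paper's CNN so it stays within operations CrypTen's converter straightforwardly handles, and it runs both roles in a single process rather than across real parties.

```python
import torch
import torch.nn as nn
import crypten

crypten.init()  # initialize the MPC runtime (additive secret sharing)

# Tiny stand-in classifier over a fixed-size text feature vector;
# the paper's actual CNN architecture is not reproduced here.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Model owner's side: convert the PyTorch graph to CrypTen's nn
# module and secret-share the trained parameters.
dummy = torch.empty(1, 128)
enc_model = crypten.nn.from_pytorch(model, dummy)
enc_model.encrypt()
enc_model.eval()

# Text owner's side: secret-share the text features, so neither
# party ever sees the other's plaintext.
x = torch.randn(1, 128)
x_enc = crypten.cryptensor(x)

logits_enc = enc_model(x_enc)        # forward pass entirely under MPC
print(logits_enc.get_plain_text())   # only the final output is revealed
```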
We present CoTexT, a pre-trained, transformer-based encoder-decoder model that learns the representative context between natural language (NL) and programming language (PL). Using self-supervision, CoTexT is pre-trained on large programming-language corpora to learn a general understanding of language and code. CoTexT supports downstream NL-PL tasks such as code summarization/documentation, code generation, defect detection, and code debugging. We train CoTexT on different combinations of available PL corpora, including both "bimodal" and "unimodal" data: bimodal data combines text with corresponding code snippets, whereas unimodal data is code snippets alone. We first evaluate CoTexT with multi-task learning: we perform Code Summarization on six different programming languages and Code Refinement on both the small and medium-size datasets featured in CodeXGLUE. We further conduct extensive experiments to investigate CoTexT on other tasks within the CodeXGLUE benchmark, including Code Generation and Defect Detection. We consistently achieve SOTA results in these tasks, demonstrating the versatility of our models.
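Since CoTexT is described as a transformer encoder-decoder, downstream use would plausibly follow the standard Hugging Face seq2seq pattern sketched below; the checkpoint identifier and task prefix are illustrative assumptions, not names confirmed by the abstract.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint name; substitute the actually published
# CoTexT weights for real use.
ckpt = "razent/cotext-1-ccg"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Code summarization as a text-to-text task; the prefix is an assumption.
code = "def add(a, b):\n    return a + b"
inputs = tokenizer("summarize python: " + code, return_tensors="pt")
out = model.generate(**inputs, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```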
Pre-trained language-vision models have shown remarkable performance on the visual question answering (VQA) task. However, most pre-trained models are trained with only monolingual learning in mind, especially for resource-rich languages like English. Training such models for multilingual setups demands high computing resources and a multilingual language-vision dataset, which hinders their application in practice. To alleviate these challenges, we propose a knowledge distillation approach to extend an English language-vision model (teacher) into an equally effective multilingual and code-mixed model (student). Unlike existing knowledge distillation methods, which use only the output from the last layer of the teacher network for distillation, our student model learns from and imitates the teacher at multiple intermediate layers (language and vision encoders), with appropriately designed distillation objectives for incremental knowledge extraction. We also create a large-scale multilingual and code-mixed VQA dataset covering eleven different language setups across multiple Indian and European languages. Experimental results and in-depth analysis show the effectiveness of the proposed VQA model over pre-trained language-vision models on eleven diverse language setups.
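A sketch of the layer-wise objective this describes, combining the standard soft-label loss on the final outputs with feature-matching terms on paired intermediate states; the layer pairing, temperature, and weighting below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      T=2.0, alpha=0.5):
    """Distill from the output layer AND from intermediate layers.

    student_hidden / teacher_hidden: lists of matched intermediate
    states (e.g. selected language- and vision-encoder layers).
    """
    # Soft-label loss on the final outputs (standard KD).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    # Feature-matching loss on each paired intermediate layer;
    # the teacher states are detached so only the student updates.
    feat = sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_hidden, teacher_hidden))
    return alpha * kd + (1 - alpha) * feat

# Toy usage with random tensors standing in for real encoder states.
sl, tl = torch.randn(4, 10), torch.randn(4, 10)
sh = [torch.randn(4, 16, 256) for _ in range(2)]
th = [torch.randn(4, 16, 256) for _ in range(2)]
print(distillation_loss(sl, tl, sh, th))
```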


