
Correcting Texts Generated by Transformers using Discourse Features and Web Mining


Publication date: 2021
Research language: English
Created by: Shamra Editor





Recent transformer-based approaches to NLG, such as GPT-2, can generate syntactically coherent original texts. However, these generated texts have two serious flaws: global discourse incoherence and meaninglessness of sentences with respect to entity values. We address both flaws; the two corrections are independent but can be combined to generate original texts that are both coherent and truthful. This paper presents an approach to estimating the quality of discourse structure. Empirical results confirm that the discourse structure of currently generated texts is inaccurate. We propose research directions for correcting it using discourse features during the fine-tuning procedure. The suggested approach is universal and can be applied to different languages. In addition, we suggest a method for correcting wrong entity values based on Web Mining and text alignment.
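The abstract does not spell out which discourse features the paper uses, so the sketch below only illustrates the general idea of scoring a generated text's discourse structure with shallow, language-agnostic signals (adjacent-sentence entity overlap and discourse-marker density). The sentence splitter, marker list, entity heuristic, and weights are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch of scoring discourse coherence with shallow features.
import re

DISCOURSE_MARKERS = {"however", "therefore", "moreover", "because",
                     "although", "meanwhile", "thus", "instead"}

def sentences(text):
    # Naive splitter on sentence-final punctuation; a real pipeline
    # would use a proper segmenter or discourse parser.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def entity_overlap(text):
    """Average Jaccard overlap of heuristic 'entities' (non-initial
    capitalized words) between adjacent sentences; a crude stand-in
    for entity-grid local coherence."""
    sents = sentences(text)
    if len(sents) < 2:
        return 1.0
    ents = [set(re.findall(r"\b[A-Z][a-z]+\b", s[1:])) for s in sents]
    scores = [len(a & b) / max(1, len(a | b)) for a, b in zip(ents, ents[1:])]
    return sum(scores) / len(scores)

def marker_density(text):
    toks = re.findall(r"[a-z']+", text.lower())
    return sum(t in DISCOURSE_MARKERS for t in toks) / max(1, len(toks))

def discourse_quality(text):
    # Arbitrary placeholder weights; the paper's actual scoring is
    # not recoverable from the abstract alone.
    return 0.7 * entity_overlap(text) + 0.3 * marker_density(text)

print(discourse_quality(
    "John visited Paris. Later, John left Paris. However, Mary stayed."))
```

A score of this kind could flag low-coherence generations during fine-tuning, which is the role the paper assigns to its discourse features.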



Related research

We describe our approach for SemEval-2021 Task 6 on detection of persuasion techniques in multimodal content (memes). Our system combines pretrained multimodal models (CLIP) and chained classifiers. We also propose to enrich the data with a data augmentation technique. Our submission ranks 8/16 in terms of F1-micro and 9/16 in terms of F1-macro on the test set.
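As a rough illustration of the "chained classifiers" component, the sketch below trains a scikit-learn ClassifierChain over placeholder embeddings standing in for CLIP features. The base model, chain order, label count, and hyperparameters are assumptions, not the team's actual system.

```python
# Sketch: classifier chain over precomputed multimodal embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))        # stand-in for CLIP image+text embeddings
Y = rng.integers(0, 2, size=(200, 5))  # 5 persuasion-technique labels (multilabel)

# Each classifier in the chain sees the previous labels' predictions,
# letting the model exploit correlations between persuasion techniques.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X, Y)
probs = chain.predict_proba(X[:3])
print(probs.shape)  # (3, 5): one probability per label
```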
The aim of this investigation is to explore the main rhetorical features of Arabic newspaper discourse. To this end, extracts from two popular Jordanian newspapers were analyzed. The results indicate that one feature of this type of discourse is redundancy, i.e. repetition of the same lexical item. Another feature is the explicit use of evaluative statements to support the writer's point of view. Moreover, the results reveal that Arabic newspaper discourse clearly marks clause relations, especially subordinating clauses, and that discourse markers are mainly used to mark relationships of contrast between or among propositions in this type of discourse.
Sensitivity of deep neural models to input noise is known to be a challenging problem. In NLP, model performance often deteriorates with naturally occurring noise, such as spelling errors. To mitigate this issue, models may leverage artificially noised data. However, the amount and type of generated noise has so far been determined arbitrarily. We therefore propose to model the errors statistically from grammatical-error-correction corpora. We present a thorough evaluation of the robustness of several state-of-the-art NLP systems in multiple languages, with tasks including morpho-syntactic analysis, named entity recognition, neural machine translation, a subset of the GLUE benchmark, and reading comprehension. We also compare two approaches to addressing the performance drop: a) training the NLP models with noised data generated by our framework; and b) reducing the input noise with an external system for natural language correction. The code is released at https://github.com/ufal/kazitext.
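The released implementation lives in the linked kazitext repository; the sketch below is only a simplified illustration of the core idea, estimating character-level error rates from (noisy, clean) pairs and re-applying them to clean text. The rates and edit operations are deliberately reduced to three kinds.

```python
# Simplified illustration of statistical noise modeling from parallel data.
import difflib
import random

def estimate_rates(pairs):
    """Estimate per-character substitution/insertion/deletion rates
    from (noisy, clean) sentence pairs."""
    subs = ins = dels = total = 0
    for noisy, clean in pairs:
        total += len(clean)
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
                None, clean, noisy).get_opcodes():
            if tag == "replace":
                subs += max(i2 - i1, j2 - j1)
            elif tag == "insert":
                ins += j2 - j1
            elif tag == "delete":
                dels += i2 - i1
    return subs / total, ins / total, dels / total

def apply_noise(text, rates, alphabet="abcdefghijklmnopqrstuvwxyz"):
    sub_p, ins_p, del_p = rates
    out = []
    for ch in text:
        r = random.random()
        if r < del_p:
            continue                                 # drop the character
        if r < del_p + sub_p:
            out.append(random.choice(alphabet))      # substitute it
        else:
            out.append(ch)
        if random.random() < ins_p:
            out.append(random.choice(alphabet))      # insert after it
    return "".join(out)

pairs = [("Thiss is a setnence.", "This is a sentence.")]
print(apply_noise("Models should be robust to noise.", estimate_rates(pairs)))
```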
This paper describes our approach (IIITH) for SemEval-2021 Task 5: HaHackathon: Detecting and Rating Humor and Offense. Our results focus on two major objectives: (i) the effect of task-adaptive pretraining on the performance of transformer-based models, and (ii) how lexical and HurtLex features help in quantifying humour and offense. In this paper, we provide a detailed description of our approach along with the comparisons mentioned above.
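The abstract does not detail how the lexical and HurtLex features are combined with the transformer, so the sketch below shows one plausible fusion scheme: concatenating lexicon-based counts with (stubbed) sentence embeddings before a regressor. The tiny lexicon, random embeddings, and ridge regressor are illustrative stand-ins, not the team's architecture.

```python
# Sketch: fusing lexical features with transformer representations
# for humour/offense rating.
import numpy as np
from sklearn.linear_model import Ridge

HURT_LEXICON = {"stupid", "idiot", "ugly"}  # placeholder for HurtLex entries

def lexical_features(text):
    toks = text.lower().split()
    hurt = sum(t.strip(".,!?") in HURT_LEXICON for t in toks)
    return np.array([hurt / max(1, len(toks)), len(toks)])

texts = ["you are so stupid", "what a lovely pun", "idiot joke, ugly twist"]
scores = np.array([2.5, 0.3, 3.1])           # toy offense ratings

rng = np.random.default_rng(0)
emb = rng.normal(size=(len(texts), 768))     # stand-in for [CLS] embeddings
X = np.hstack([emb, np.stack([lexical_features(t) for t in texts])])
Ridge(alpha=1.0).fit(X, scores)              # regress rating on fused features
```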
Service Oriented Computing (SOC) is changing the way software systems are developed. Each web service serves a specific purpose, so a single service often cannot satisfy a user's request on its own. In this paper, we propose a web service composition method based on OWL ontology and design an automatic system model for service discovery and composition. The method uses domain ontology and WordNet to calculate the matching between input and output parameters, and uses a Category ontology to solve the problem of semantic heterogeneity in web service descriptions. We use services with a single input and a single output, and cost as the QoS criterion. This method can enhance the efficiency and accuracy of service composition, and experiments are used to validate and analyze the proposed system.
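To make the parameter-matching step concrete, here is a small sketch of WordNet-based similarity between service parameters using NLTK (requires the wordnet corpus, e.g. nltk.download("wordnet")). The threshold and chaining rule are assumptions, and the OWL/Category-ontology handling is omitted.

```python
# Sketch: matching service parameters via WordNet similarity.
from nltk.corpus import wordnet as wn

def param_similarity(a, b):
    """Best path similarity between any synsets of two parameter names."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(a) for s2 in wn.synsets(b)]
    return max(scores, default=0.0)

def can_chain(output_params, input_params, threshold=0.3):
    """One service's outputs can feed another's inputs if every required
    input matches some output above the (assumed) similarity threshold."""
    return all(any(param_similarity(o, i) >= threshold for o in output_params)
               for i in input_params)

print(param_similarity("city", "town"))      # relatively high: sibling concepts
print(param_similarity("price", "airport"))  # low: unrelated concepts
```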
