
Micro-Textonymic Transformations & Recreation of Collocation in Translation

التحوّلات النّصية الدّقيقة وإعادة صياغة نظم الكلام في التّرجمة (Arabic title: "Micro-textual transformations and the recreation of collocation in translation")

Publication date: 2013
Field: English
Research language: Arabic

This piece of research endeavours to highlight the inevitability of micro-textonymic transformations in the process of translation. The claim that translation necessitates transformation is substantiated by rendering a number of conventional and non-conventional micro-textonymic English collocational patterns into Arabic. Although some translation theorists regard transformations as a mark of inescapable weakness, others maintain their prominence in communicating successfully with the TL recipients, to the extent that there is no translation without transformation. The translator's skilfulness and expertise, as decoder of the ST and re-encoder of the TT, closely monitor and manage such micro-textonymic transformations. Faithfulness in translation is therefore defined not in relation to the greatest possible literalism and adherence to the ST; rather, it is a measure of how far such micro-textonymic transformations help translators communicate the rhetoric of the ST and guarantee acceptance and readability in the TL language and culture.
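The collocational examples analysed in the paper are not reproduced on this page. As a rough illustration of the phenomenon it discusses, the sketch below lists a few well-known English-Arabic collocational shifts (common textbook pairs, not drawn from this study), where the target collocation is recreated rather than calqued:

```python
# Illustrative English -> Arabic collocational shifts (textbook examples,
# not taken from the paper). A natural Arabic rendering typically replaces
# the English collocate instead of translating it word for word.
examples = [
    ("heavy rain",        "مطر غزير",    "lit. 'abundant rain', not 'heavy rain'"),
    ("heavy smoker",      "مدخن شره",    "lit. 'voracious smoker'"),
    ("deliver a lecture", "يلقي محاضرة", "lit. 'casts a lecture'"),
]

for english, arabic, gloss in examples:
    print(f"{english:18} -> {arabic}  ({gloss})")
```

In each pair a word-for-word rendering would be grammatical but unidiomatic; transforming the collocate is what preserves acceptability and readability in the TL.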

Related research

Paraphrase generation is an important task in natural language processing. Previous works focus on sentence-level paraphrase generation, while ignoring document-level paraphrase generation, which is a more challenging and valuable task. In this paper, we explore the task of document-level paraphrase generation for the first time and focus on inter-sentence diversity by considering sentence rewriting and reordering. We propose CoRPG (Coherence Relationship guided Paraphrase Generation), which leverages a graph GRU to encode the coherence relationship graph and obtain a coherence-aware representation for each sentence, which can then be used for re-arranging the multiple (possibly modified) input sentences. We create a pseudo document-level paraphrase dataset for training CoRPG. Automatic evaluation results show that CoRPG outperforms several strong baseline models on BERTScore and diversity scores. Human evaluation also shows our model can generate document paraphrases with greater diversity and semantic preservation.
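As a minimal, assumption-laden sketch of the "graph GRU" idea (CoRPG's actual architecture, features and training are more involved than this), the PyTorch snippet below updates each sentence vector with a GRU cell whose input is a message aggregated from its neighbours in a coherence graph:

```python
import torch
import torch.nn as nn

class GraphGRULayer(nn.Module):
    """Gated-graph-network-style layer: each sentence vector is updated by
    a GRU cell whose input is a message aggregated from the sentences it
    is linked to in the coherence-relationship graph."""
    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(dim, dim)
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_sentences, dim) sentence representations
        # adj: (num_sentences, num_sentences) coherence adjacency matrix
        msg = adj @ self.message(h)   # aggregate neighbour messages
        return self.cell(msg, h)      # gated per-sentence update

# Toy usage: 4 sentences, 16-dim encodings, chain-shaped coherence graph.
h = torch.randn(4, 16)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
print(GraphGRULayer(16)(h, adj).shape)  # torch.Size([4, 16])
```

Stacking such layers yields coherence-aware sentence representations which, per the abstract, drive the re-ordering of the (possibly rewritten) sentences.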
Paraphrase identification (PI), a fundamental task in natural language processing, is to identify whether two sentences express the same or similar meaning, which is a binary classification problem. Recently, BERT-like pre-trained language models have been a popular choice for the frameworks of various PI models, but almost all existing methods consider general-domain text. When these approaches are applied to a specific domain, existing models cannot make accurate predictions due to the lack of professional knowledge. In light of this challenge, we propose a novel framework which can leverage external unstructured Wikipedia knowledge to accurately identify paraphrases. We propose to mine outline knowledge of concepts related to the given sentences from Wikipedia via the BM25 model. After retrieving the related outline knowledge, the framework makes predictions based on both the semantic information of the two sentences and the outline knowledge. Besides, we propose a gating mechanism to aggregate the semantic-information-based prediction and the knowledge-based prediction. Extensive experiments are conducted on two public datasets: PARADE (a computer science domain dataset) and clinicalSTS2019 (a biomedical domain dataset). The results show that the proposed framework outperforms state-of-the-art methods.
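The retrieval half of such a pipeline is easy to sketch with the off-the-shelf rank_bm25 package; the tiny corpus below is a hypothetical stand-in for outline knowledge mined from Wikipedia, and gated_prediction is only a scalar caricature of the learned gating mechanism the abstract describes:

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Hypothetical stand-in for outline knowledge mined from Wikipedia.
outlines = [
    "Gradient descent is an optimization algorithm for finding a local minimum.",
    "A hash table is a data structure that maps keys to values.",
    "Stochastic gradient descent updates parameters on small mini-batches.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in outlines])

query = "how does stochastic gradient descent update parameters".lower().split()
scores = bm25.get_scores(query)
print(outlines[max(range(len(outlines)), key=scores.__getitem__)])  # SGD outline

def gated_prediction(p_semantic: float, p_knowledge: float, gate: float) -> float:
    """Scalar caricature of the gating step: a gate in [0, 1] blends the
    semantics-only prediction with the knowledge-based prediction."""
    return gate * p_semantic + (1.0 - gate) * p_knowledge
```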
A nominalization uses a deverbal noun to describe an event associated with its underlying verb. Commonly found in academic and formal texts, nominalizations can be difficult to interpret because of ambiguous semantic relations between the deverbal noun and its arguments. Our goal is to interpret nominalizations by generating clausal paraphrases. We address compound nominalizations with both nominal and adjectival modifiers, as well as prepositional phrases. In evaluations on a number of unsupervised methods, we obtained the strongest performance by using a pre-trained contextualized language model to re-rank paraphrase candidates identified by a textual entailment model.
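The re-ranking step can be approximated with any pre-trained causal language model. The sketch below scores candidate clausal paraphrases of a hypothetical nominalization ("budget approval") by mean per-token negative log-likelihood under GPT-2 and keeps the most fluent one; the entailment-based candidate generation the abstract mentions is not shown:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def nll(text: str) -> float:
    """Mean per-token negative log-likelihood (lower = judged more fluent)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# Candidate clausal paraphrases of the nominalization "budget approval"
# (hypothetical example, not from the paper's data).
candidates = [
    "the committee approves the budget",
    "the budget approves the committee",  # implausible reading
]
print(min(candidates, key=nll))  # expected: "the committee approves the budget"
```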
This paper focuses on paraphrase generation, which is a widely studied natural language generation task in NLP. With the development of neural models, paraphrase generation research has exhibited a gradual shift to neural methods in recent years. These methods provide architectures for contextualized representation of an input text and for generating fluent, diverse and human-like paraphrases. This paper surveys various approaches to paraphrase generation with a main focus on neural methods.
Transformer-based models have gained increasing popularity achieving state-of-the-art performance in many research fields including speech translation. However, Transformer's quadratic complexity with respect to the input sequence length prevents its adoption as is with audio signals, which are typically represented by long sequences. Current solutions resort to an initial sub-optimal compression based on a fixed sampling of raw audio features. Therefore, potentially useful linguistic information is not accessible to higher-level layers in the architecture. To solve this issue, we propose Speechformer, an architecture that, thanks to reduced memory usage in the attention layers, avoids the initial lossy compression and aggregates information only at a higher level according to more informed linguistic criteria. Experiments on three language pairs (en→de/es/nl) show the efficacy of our solution, with gains of up to 0.8 BLEU on the standard MuST-C corpus and of up to 4.0 BLEU in a low resource scenario.
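The memory issue can be made concrete with a generic sketch: if keys and values are downsampled before attention while queries keep full length, the attention matrix shrinks from n x n to n x (n/stride). This is only an illustration of the general compression idea, not Speechformer's actual layers or its linguistically informed aggregation:

```python
import torch
import torch.nn as nn

class CompressedKVAttention(nn.Module):
    """Generic memory-reduced self-attention for long audio sequences:
    keys and values are downsampled by a strided convolution, so the
    attention matrix is (n x n/stride) rather than (n x n)."""
    def __init__(self, dim: int, stride: int = 4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Conv1d(dim, 2 * dim, kernel_size=stride, stride=stride)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim) audio-frame features; n divisible by stride
        q = self.q(x)                               # (b, n, d)
        kv = self.kv(x.transpose(1, 2))             # (b, 2d, n/stride)
        k, v = kv.transpose(1, 2).chunk(2, dim=-1)  # (b, n/stride, d) each
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v                             # (b, n, d)

x = torch.randn(2, 1024, 64)  # 1024 audio frames, 64-dim features
print(CompressedKVAttention(64)(x).shape)  # torch.Size([2, 1024, 64])
```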