
Semantics of Complex Sentences in Japanese

Posted by: Hiroshi Nakagawa
Publication date: 1994
Research field: Informatics Engineering
Paper language: English





An important part of the semantics of a complex sentence is captured as relations between the semantic roles of the subordinate clause and those of the main clause. However, if relations may hold between every pair of semantic roles, the amount of computation needed to identify the relations that hold in a given sentence is extremely large. In this paper, for the semantics of Japanese complex sentences, we introduce new pragmatic roles, called `observer' and `motivated', to bridge the semantic roles of the subordinate clause and those of the main clause. With these new roles, the constraints on the relations among semantic/pragmatic roles become almost local to the subordinate or main clause. In other words, for the semantics of the whole complex sentence, the only role that must be handled across clauses is the motivated.
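To make the computational point concrete, the following is a minimal sketch (not the paper's formalism) of how a single bridging role can replace pairwise cross-clause checks; the role inventories and link structure below are hypothetical.

```python
# Minimal sketch, assuming toy role inventories: with no bridging roles,
# every subordinate/main role pair is a candidate relation; with the
# pragmatic roles `observer' and `motivated', constraints stay inside
# each clause and only one link crosses the clause boundary.
from itertools import product

SUB_ROLES = ["agent", "experiencer", "theme"]   # subordinate clause (assumed)
MAIN_ROLES = ["agent", "experiencer", "goal"]   # main clause (assumed)

def naive_candidate_relations():
    # Cross-clause: |SUB| x |MAIN| pairs must be considered.
    return list(product(SUB_ROLES, MAIN_ROLES))

def bridged_candidate_relations():
    # Clause-local links to the bridging roles, plus one cross-clause link.
    observer_links = [(r, "observer") for r in SUB_ROLES]
    motivated_links = [("motivated", r) for r in MAIN_ROLES]
    return observer_links + motivated_links + [("observer", "motivated")]

print(len(naive_candidate_relations()))    # 9 cross-clause candidates
print(len(bridged_candidate_relations()))  # 7 candidates, almost all clause-local
```

The only point of the sketch is the shape of the constraint graph: candidate checks grow with the product of the two role sets in the naive case, but stay clause-local, with a single cross-clause link, once a bridging role carries the information between clauses.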




Read also

Classical scope-assignment strategies for multi-quantifier sentences involve quantifier phrase (QP)-movement. More recent continuation-based approaches provide a compelling alternative, for they interpret QPs in situ, without resorting to Logical Forms or any structures beyond the overt syntax. The continuation-based strategies can be divided into two groups: those that locate the source of scope ambiguity in the rules of semantic composition and those that attribute it to the lexical entries for the quantifier words. In this paper, we focus on the former, operation-based approaches and the nature of the semantic operations involved. More specifically, we discuss three such possible operation-based strategies for multi-quantifier sentences, together with their relative merits and costs.
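As an illustration of the continuation-based idea, here is a hedged sketch, not drawn from the paper, in which each QP denotes a function over its continuation and the two orders of composition yield the two scope readings; the toy model and the `read' predicate are assumptions.

```python
# Hedged sketch of continuation-style quantifier denotations: a QP is a
# function from a continuation (the rest of the sentence) to a truth value.
# Toy model: two students, two books; everyone read b1.
students = {"s1", "s2"}
books = {"b1", "b2"}
read = {("s1", "b1"), ("s2", "b1")}

def every(domain):
    return lambda k: all(k(x) for x in domain)

def some(domain):
    return lambda k: any(k(x) for x in domain)

every_student, some_book = every(students), some(books)

# Surface scope (every > some): each student read some book or other.
surface = every_student(lambda s: some_book(lambda b: (s, b) in read))

# Inverse scope (some > every): one particular book was read by every student.
inverse = some_book(lambda b: every_student(lambda s: (s, b) in read))

print(surface, inverse)  # True True in this toy model
```

In an operation-based treatment the lexical entries above stay fixed; the ambiguity comes from which order of composition, that is, which semantic operation, the grammar makes available.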
Multimodal neural machine translation (NMT) has become an increasingly important area of research over the years because additional modalities, such as image data, can provide more context to textual data. Furthermore, the viability of training multimodal NMT models without a large parallel corpus continues to be investigated due to low availability of parallel sentences with images, particularly for English-Japanese data. However, this void can be filled with comparable sentences that contain bilingual terms and parallel phrases, which are naturally created through media such as social network posts and e-commerce product descriptions. In this paper, we propose a new multimodal English-Japanese corpus with comparable sentences that are compiled from existing image captioning datasets. In addition, we supplement our comparable sentences with a smaller parallel corpus for validation and test purposes. To test the performance of this comparable sentence translation scenario, we train several baseline NMT models with our comparable corpus and evaluate their English-Japanese translation performance. Due to low translation scores in our baseline experiments, we believe that current multimodal NMT models are not designed to effectively utilize comparable sentence data. Despite this, we hope for our corpus to be used to further research into multimodal NMT with comparable sentences.
We propose spoken sentence embeddings which capture both acoustic and linguistic content. While existing works operate at the character, phoneme, or word level, our method learns long-term dependencies by modeling speech at the sentence level. Formulated as an audio-linguistic multitask learning problem, our encoder-decoder model simultaneously reconstructs acoustic and natural language features from audio. Our results show that spoken sentence embeddings outperform phoneme and word-level baselines on speech recognition and emotion recognition tasks. Ablation studies show that our embeddings can better model high-level acoustic concepts while retaining linguistic content. Overall, our work illustrates the viability of generic, multi-modal sentence embeddings for spoken language understanding.
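A minimal, hypothetical sketch of the kind of audio-linguistic multitask encoder-decoder described above is shown below; the layer choices, feature sizes, and head structure are assumptions, not the authors' architecture.

```python
# Hypothetical sketch: one encoder produces a sentence-level embedding,
# and two heads reconstruct acoustic and linguistic feature targets from it
# (the multitask objective). Dimensions are assumed, not from the paper.
import torch
import torch.nn as nn

class SpokenSentenceEncoder(nn.Module):
    def __init__(self, n_acoustic=80, n_linguistic=300, d_embed=256):
        super().__init__()
        self.encoder = nn.GRU(n_acoustic, d_embed, batch_first=True)
        self.acoustic_head = nn.Linear(d_embed, n_acoustic)     # acoustic targets
        self.linguistic_head = nn.Linear(d_embed, n_linguistic) # linguistic targets

    def forward(self, frames):                 # frames: (batch, time, n_acoustic)
        _, h = self.encoder(frames)            # h: (1, batch, d_embed)
        sentence_embedding = h.squeeze(0)      # one vector per utterance
        return (sentence_embedding,
                self.acoustic_head(sentence_embedding),
                self.linguistic_head(sentence_embedding))

model = SpokenSentenceEncoder()
emb, acoustic_recon, linguistic_recon = model(torch.randn(4, 120, 80))
print(emb.shape)  # torch.Size([4, 256])
```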
In this paper, we develop a compositional vector-based semantics of positive transitive sentences in quantum natural language processing for a non-English language, namely Persian, to compare the parametrized quantum circuits of two synonymous sentences in two languages, English and Persian. By considering the grammar and meaning of a transitive sentence, we translate the DisCoCat diagram via ZX-calculus into quantum circuit form. We also use a bigraph method to rewrite the DisCoCat diagram and turn it into a quantum circuit on the semantic side.
Atomic clauses are fundamental text units for understanding complex sentences. Identifying the atomic sentences within complex sentences is important for applications such as summarization, argument mining, discourse analysis, discourse parsing, and question answering. Previous work mainly relies on rule-based methods dependent on parsing. We propose a new task to decompose each complex sentence into simple sentences derived from the tensed clauses in the source, and a novel problem formulation as a graph edit task. Our neural model learns to Accept, Break, Copy or Drop elements of a graph that combines word adjacency and grammatical dependencies. The full processing pipeline includes modules for graph construction, graph editing, and sentence generation from the output graph. We introduce DeSSE, a new dataset designed to train and evaluate complex sentence decomposition, and MinWiki, a subset of MinWikiSplit. ABCD achieves performance comparable to two parsing baselines on MinWiki. On DeSSE, which has a more even balance of complex sentence types, our model achieves higher accuracy on the number of atomic sentences than an encoder-decoder baseline. Results include a detailed error analysis.
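As a rough illustration of the graph-edit framing (not the ABCD model itself), the sketch below cuts and drops elements of a toy word graph and reads each remaining connected component off as an atomic sentence; the example sentence, edges, and edit decisions are hand-written assumptions.

```python
# Hedged illustration: Break removes an edge, Drop removes a token, and the
# surviving connected components are read off as simple sentences.
import networkx as nx

tokens = ["She", "left", "because", "it", "rained"]
G = nx.Graph()
G.add_nodes_from(range(len(tokens)))
# A few adjacency and dependency edges, hand-picked for the toy example.
G.add_edges_from([(0, 1), (1, 2), (2, 4), (3, 4)])

edits = {
    "drop_nodes": [2],                  # drop the connective "because"
    "break_edges": [(1, 2), (2, 4)],    # cut the links between the clauses
}

G.remove_edges_from(edits["break_edges"])
G.remove_nodes_from(edits["drop_nodes"])

for component in nx.connected_components(G):
    print(" ".join(tokens[i] for i in sorted(component)))
# -> "She left" and "it rained"
```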