In this paper, we propose a sequential neural encoder with latent structured description (SNELSD) for modeling sentences. This model introduces latent chunk-level representations into conventional sequential neural encoders, i.e., recurrent neural networks (RNNs) with long short-term memory (LSTM) units, to account for the compositionality of language in semantic modeling. An SNELSD model has a hierarchical structure consisting of a detection layer and a description layer. The detection layer predicts the boundaries of latent word chunks in an input sentence and derives a chunk-level vector for each word. The description layer utilizes modified LSTM units to process these chunk-level vectors recurrently and produces sequential encoding outputs. These output vectors are further concatenated with word vectors or with the outputs of a chain LSTM encoder to obtain the final sentence representation. All model parameters are learned end-to-end, without depending on external text chunkers or syntactic parsers. A natural language inference (NLI) task and a sentiment analysis (SA) task are adopted to evaluate the performance of the proposed model. The experimental results demonstrate the effectiveness of the proposed SNELSD model in exploring task-dependent chunking patterns during the semantic modeling of sentences. Furthermore, the proposed method achieves better performance than conventional chain LSTMs and tree-structured LSTMs on both tasks.
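To make the two-layer design concrete, the following is a minimal PyTorch-style sketch, not the paper's actual implementation: the class name `SNELSDEncoder`, the boundary scorer, the gating of word vectors by boundary probabilities, and the final max-pooling are all assumptions standing in for the paper's modified LSTM units and chunk-level composition.

```python
import torch
import torch.nn as nn

class SNELSDEncoder(nn.Module):
    """Hypothetical sketch of the SNELSD idea.

    The detection layer predicts a soft chunk-boundary probability for each
    word; the description layer runs an LSTM over chunk-gated word vectors;
    the result is concatenated with a conventional chain LSTM's outputs.
    """
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Detection layer: scores each position as a latent chunk boundary.
        self.boundary_scorer = nn.Linear(emb_dim, 1)
        # Description layer: recurrent encoder over chunk-level vectors
        # (the paper uses modified LSTM units; a plain LSTM stands in here).
        self.description_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Conventional chain LSTM whose outputs are concatenated at the end.
        self.chain_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)                            # (B, T, E)
        # Soft boundary probabilities keep the model end-to-end
        # differentiable: no external chunker or parser is required.
        p_boundary = torch.sigmoid(self.boundary_scorer(x))  # (B, T, 1)
        # Chunk-level vector per word: a simple boundary-probability gating,
        # a stand-in for the paper's chunk-level composition.
        chunk_vecs = p_boundary * x
        desc_out, _ = self.description_lstm(chunk_vecs)      # (B, T, H)
        chain_out, _ = self.chain_lstm(x)                    # (B, T, H)
        # Final sentence representation: concatenation followed by
        # max-pooling over time (the pooling choice is an assumption).
        combined = torch.cat([desc_out, chain_out], dim=-1)  # (B, T, 2H)
        return combined.max(dim=1).values                    # (B, 2H)
```

Under this sketch, a sentence batch of token IDs maps to a fixed-size vector that a downstream NLI or SA classifier could consume, e.g. `SNELSDEncoder(10000, 300, 512)(torch.randint(0, 10000, (8, 20)))` yields an `(8, 1024)` tensor.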