The ability to learn disentangled representations is a major step toward interpretable NLP systems, as it allows latent linguistic features to be controlled. Most approaches to disentanglement rely on continuous variables, both for images and for text. We argue that, despite being suitable for image datasets, continuous variables may not be ideal for modeling features of textual data, because most generative factors in text are discrete. We propose a Variational Autoencoder-based method which models language features as discrete variables and encourages independence between variables in order to learn disentangled representations. The proposed model outperforms continuous and discrete baselines on several qualitative and quantitative benchmarks for disentanglement, as well as on a text style transfer downstream application.
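The abstract does not give implementation details, but a minimal sketch of the general technique it names — a text VAE whose latent code is a set of discrete categorical variables, made differentiable with the Gumbel-Softmax relaxation — might look like the following. The module names, dimensions, GRU encoder/decoder, and uniform prior are illustrative assumptions, not the paper's actual architecture:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteLatentTextVAE(nn.Module):
    """Illustrative text VAE with n_latents independent categorical latent
    variables, relaxed via Gumbel-Softmax so sampling stays differentiable."""

    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256,
                 n_latents=8, n_categories=10):
        super().__init__()
        self.n_latents, self.n_categories = n_latents, n_categories
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # One set of category logits per latent variable.
        self.to_logits = nn.Linear(hidden_dim, n_latents * n_categories)
        self.from_latent = nn.Linear(n_latents * n_categories, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, temperature=1.0):
        emb = self.embed(tokens)                        # (B, T, E)
        _, h = self.encoder(emb)                        # h: (1, B, H)
        logits = self.to_logits(h.squeeze(0))
        logits = logits.view(-1, self.n_latents, self.n_categories)
        # Differentiable sample from each categorical distribution.
        z = F.gumbel_softmax(logits, tau=temperature, dim=-1)
        h0 = torch.tanh(self.from_latent(z.flatten(1))).unsqueeze(0)
        dec_out, _ = self.decoder(emb, h0)              # teacher forcing
        recon_logits = self.out(dec_out)                # (B, T, V)
        # KL of each categorical posterior against a uniform prior; summing
        # the per-variable terms treats the latents as independent factors,
        # one plausible reading of the independence encouragement above.
        q = F.softmax(logits, dim=-1)
        log_q = F.log_softmax(logits, dim=-1)
        kl = (q * (log_q + math.log(self.n_categories))).sum(-1).sum(-1).mean()
        return recon_logits, kl
```

A training loop would combine a token-level cross-entropy reconstruction loss over `recon_logits` with the `kl` term, typically annealing `temperature` and the KL weight over training; controlling a feature then amounts to fixing one latent variable's category before decoding.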