Recently, disentanglement based on generative adversarial networks or variational autoencoders has significantly advanced the performance of diverse applications in the CV and NLP domains. Nevertheless, these models still operate at a coarse level when disentangling closely related properties, such as syntax and semantics in human languages. This paper introduces a deep decomposable model based on the VAE that disentangles syntax and semantics by applying total correlation penalties to the KL divergences. Notably, we decompose the KL divergence term of the original VAE so that the generated latent variables can be separated in a more clear-cut and interpretable way. Experiments on benchmark datasets show that our proposed model significantly improves the disentanglement quality between syntactic and semantic representations on both semantic and syntactic similarity tasks.
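For readers unfamiliar with this kind of penalty, the expected KL term of a VAE is often decomposed in the disentanglement literature (in the style of β-TCVAE) as

E_{p(x)}[ KL(q(z|x) || p(z)) ] = I_q(x; z) + KL( q(z) || prod_j q(z_j) ) + sum_j KL( q(z_j) || p(z_j) ),

where q(z) is the aggregate posterior, the first term is the index-code mutual information, the middle term is the total correlation, and the last term is the dimension-wise KL. The abstract does not spell out the paper's exact formulation, so the sketch below only illustrates a common minibatch estimator of the total-correlation term in PyTorch; the function names are made up for illustration, and the dataset-size correction used in minibatch-weighted sampling is omitted for brevity.

import math

import torch


def gaussian_log_density(z, mu, logvar):
    # Elementwise log N(z; mu, exp(logvar)).
    c = math.log(2 * math.pi)
    inv_var = torch.exp(-logvar)
    return -0.5 * (c + logvar + (z - mu) ** 2 * inv_var)


def total_correlation(z, mu, logvar):
    # Minibatch estimate of TC = KL(q(z) || prod_j q(z_j)).
    # z, mu, logvar: (B, D) tensors holding samples from q(z|x) and the
    # posterior parameters for a minibatch of size B.
    batch_size = z.size(0)
    # log q(z_i | x_j) for every pair (i, j): shape (B, B, D).
    log_qz_pairs = gaussian_log_density(
        z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0)
    )
    # log q(z_i): joint density over all dimensions, averaged over the batch.
    log_qz = torch.logsumexp(log_qz_pairs.sum(dim=2), dim=1) - math.log(batch_size)
    # log prod_j q(z_{i,j}): product of per-dimension marginal densities.
    log_qz_product = (
        torch.logsumexp(log_qz_pairs, dim=1) - math.log(batch_size)
    ).sum(dim=1)
    return (log_qz - log_qz_product).mean()

In a training loop, such an estimate would typically be added to the standard ELBO with a weight larger than one, so that statistically dependent latent dimensions (here, the syntactic and semantic variables) are pushed apart.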