In this paper, we address unsupervised chunking as a new task of syntactic structure induction, which is helpful for understanding the linguistic structures of human languages as well as processing low-resource languages. We propose a knowledge-transfer approach that heuristically induces chunk labels from state-of-the-art unsupervised parsing models; a hierarchical recurrent neural network (HRNN) learns from such induced chunk labels to smooth out the noise of the heuristics. Experiments show that our approach largely bridges the gap between supervised and unsupervised chunking.
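To make the knowledge-transfer idea concrete, here is a minimal sketch of one plausible heuristic for inducing BIO chunk labels from an unsupervised binary parse tree: take the lowest multi-word constituents (shorter than the whole sentence) as chunks and tag everything else `O`. The function names and the "lowest multi-word constituent" rule are illustrative assumptions, not the paper's exact heuristic.

```python
def spans(tree, start=0):
    """Return (end, list_of_(start, end)_spans) for a nested-list parse tree.

    Leaves are token strings; internal nodes are lists of children.
    """
    if isinstance(tree, str):          # leaf token covers one position
        return start + 1, []
    pos, out = start, []
    for child in tree:
        pos, sub = spans(child, pos)
        out.extend(sub)
    out.append((start, pos))           # span of this constituent
    return pos, out


def induce_bio(tree, n_tokens):
    """Heuristically label tokens with B/I/O chunk tags (assumed rule):
    the lowest multi-word constituents that do not span the whole
    sentence become chunks; remaining tokens are tagged O."""
    _, all_spans = spans(tree)
    multi = [s for s in all_spans if 1 < s[1] - s[0] < n_tokens]
    # keep spans that contain no other multi-word span
    lowest = [s for s in multi
              if not any(t != s and s[0] <= t[0] and t[1] <= s[1]
                         for t in multi)]
    labels = ["O"] * n_tokens
    for b, e in sorted(lowest):
        labels[b] = "B"
        for i in range(b + 1, e):
            labels[i] = "I"
    return labels
```

Labels induced this way are noisy; in the approach described above, an HRNN trained on many such silver-labelled sentences would be expected to smooth out the heuristic's errors.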