Recent successes in deep generative modeling have led to significant advances in natural language generation (NLG). Incorporating entities into neural generation models has yielded notable improvements by helping to infer the summary topic and to generate coherent content. To strengthen the role of entities in NLG, in this paper we model entity types in the decoding phase to generate contextual words accurately. We develop a novel NLG model that produces a target sequence based on a given list of entities. Our model has a multi-step decoder that injects entity types into the process of entity mention generation. Experiments on two public news datasets demonstrate that type injection outperforms existing type-embedding concatenation baselines.
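The multi-step, type-injecting decoding step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: all parameter names, dimensions, and the hard type selection are assumptions chosen for clarity. The decoder first predicts an entity-type distribution from its hidden state, then conditions mention-token generation on the selected type's embedding, rather than simply concatenating a static type embedding to the input.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, TYPES, HID = 20, 4, 8  # toy sizes, chosen arbitrarily

# Hypothetical parameters for illustration (not from the paper).
W_type = rng.standard_normal((HID, TYPES))     # hidden state -> type logits
type_emb = rng.standard_normal((TYPES, HID))   # one embedding per entity type
W_out = rng.standard_normal((2 * HID, VOCAB))  # [hidden; type emb] -> vocab logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step_with_type_injection(h):
    """Two-step decoding: first predict the entity type from the hidden
    state, then generate the mention token conditioned on the predicted
    type's embedding."""
    p_type = softmax(h @ W_type)                       # step 1: type distribution
    t = type_emb[p_type.argmax()]                      # hard-select predicted type
    p_word = softmax(np.concatenate([h, t]) @ W_out)   # step 2: token distribution
    return p_type, p_word

h = rng.standard_normal(HID)   # a stand-in decoder hidden state
p_type, p_word = decode_step_with_type_injection(h)
print(p_type.shape, p_word.shape)
```

In contrast, the concatenation baselines the paper compares against would append a fixed type embedding to the decoder input once, without an explicit type-prediction step inside the decoding loop.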