Automatic construction of relevant Knowledge Bases (KBs) from text, and generation of semantically meaningful text from KBs are both long-standing goals in Machine Learning. In this paper, we present ReGen, a bidirectional generation of text and graph leveraging Reinforcement Learning to improve performance. Graph linearization enables us to re-frame both tasks as a sequence-to-sequence generation problem regardless of the generative direction, which in turn allows the use of Reinforcement Learning for sequence training where the model itself is employed as its own critic, leading to Self-Critical Sequence Training (SCST). We present an extensive investigation demonstrating that the use of RL via SCST benefits graph and text generation on the WebNLG+ 2020 and TekGen datasets. Our system provides state-of-the-art results on WebNLG+ 2020 by significantly improving upon published results from the WebNLG+ 2020 Challenge for both text-to-graph and graph-to-text generation tasks. More details at https://github.com/IBM/regen.
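Below is a minimal sketch, not ReGen's actual implementation, of the two ideas named in the abstract: (1) linearizing a graph of (subject, predicate, object) triples into a flat token sequence so that both generation directions become sequence-to-sequence problems, and (2) the SCST policy-gradient loss in which the model's own greedy decode serves as the baseline (critic). The marker tokens, function names, and reward values are illustrative assumptions, not the paper's exact choices.

```python
import torch

def linearize_graph(triples):
    """Flatten (subject, predicate, object) triples into one token sequence.
    Marker tokens here are placeholders; the actual vocabulary may differ."""
    parts = []
    for subj, pred, obj in triples:
        parts.extend(["__subject__", subj, "__predicate__", pred, "__object__", obj])
    return " ".join(parts)

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """
    sample_logprobs: (batch, seq_len) token log-probs of sampled sequences
    sample_reward:   (batch,) sequence-level metric reward of the samples
    greedy_reward:   (batch,) reward of the greedy decodes (the self-critic baseline)
    """
    advantage = (sample_reward - greedy_reward).unsqueeze(-1)
    # REINFORCE with the greedy score as baseline: only samples that beat
    # the model's own greedy output receive a positive learning signal.
    return -(advantage * sample_logprobs).sum(dim=-1).mean()

# Toy usage with stand-in tensors instead of real decodes and metric scores.
print(linearize_graph([("Alan Bean", "occupation", "Test pilot")]))
logp = torch.log(torch.rand(2, 6)).requires_grad_()
loss = scst_loss(logp, torch.tensor([0.42, 0.31]), torch.tensor([0.38, 0.35]))
loss.backward()
```

In this formulation, sequences sampled from the model are reinforced only when their reward (e.g. an automatic generation metric) exceeds that of the greedy decode, so no separate learned critic is needed.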