We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph - not only direct neighbors - facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
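To make the core idea concrete, below is a minimal sketch of self-attention biased by shortest-path lengths, in the spirit of the abstract. The class name, the additive-bias formulation, and the clipping of path lengths at a maximum distance are assumptions for illustration, not the paper's exact equations; the key point it demonstrates is that each attention head learns its own scalar weight per path length, so different heads effectively see differently connected views of the graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathLengthSelfAttention(nn.Module):
    """Multi-head self-attention over graph nodes, where the attention
    logits are biased by a learned embedding of the shortest-path length
    between each pair of nodes. Hypothetical sketch, not the authors' code."""

    def __init__(self, d_model: int, n_heads: int, max_path_len: int = 8):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.max_path_len = max_path_len
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learned scalar bias per (path length, head): this is what lets
        # each head weight node-node relations differently.
        self.path_bias = nn.Embedding(max_path_len + 1, n_heads)

    def forward(self, x: torch.Tensor, path_len: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_nodes, d_model)
        # path_len: (batch, n_nodes, n_nodes), integer shortest-path lengths
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (b, h, n, n)
        # Every node attends to every other node, near or far; the bias
        # depends only on the (clipped) graph distance between them.
        clipped = path_len.clamp(max=self.max_path_len)
        bias = self.path_bias(clipped).permute(0, 3, 1, 2)     # (b, h, n, n)
        attn = F.softmax(scores + bias, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)
```

Because the bias table is indexed by distance rather than by adjacency alone, the encoder can pick up global patterns: a head may learn to favor immediate neighbors, while another attends mostly to distant nodes.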