Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications. However, existing studies focus on improving the final model performance without acknowledging the computational cost of the proposed approaches, in terms of execution time and environmental impact. This paper proposes a simple yet effective KGE framework that can reduce the training time and carbon footprint by orders of magnitude compared with state-of-the-art approaches, while producing competitive performance. We highlight three technical innovations: full-batch learning via relational matrices, closed-form Orthogonal Procrustes Analysis for KGEs, and negative-sampling-free training. In addition, as the first KGE method whose entity embeddings also store full relation information, our trained models encode rich semantics and are highly interpretable. Comprehensive experiments and ablation studies involving 13 strong baselines and two standard datasets verify the effectiveness and efficiency of our algorithm.
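As a rough illustration of the closed-form Orthogonal Procrustes step the abstract refers to, the sketch below solves the classical alignment problem min_W ||AW - B||_F subject to W being orthogonal, whose solution is W = UVᵀ from the SVD of AᵀB. This is a minimal NumPy sketch of the generic technique; the function and variable names are hypothetical, and the paper's actual training objective over relational matrices may differ.

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Closed-form solution to min_W ||A W - B||_F s.t. W^T W = I.

    Given source embeddings A and target embeddings B (both n x d),
    returns the d x d orthogonal matrix W that best aligns A onto B.
    """
    M = A.T @ B                      # cross-covariance of the two spaces
    U, _, Vt = np.linalg.svd(M)      # SVD of the cross-covariance
    return U @ Vt                    # optimal orthogonal alignment

# Toy usage: recover a random orthogonal map Q from a rotated copy of A.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 16))
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
B = A @ Q
W = orthogonal_procrustes(A, B)
assert np.allclose(A @ W, B)         # alignment recovered exactly
```

Because the optimum is available in closed form, this step needs only a single SVD rather than iterative gradient updates, which is consistent with the abstract's claim of large reductions in training time.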