Cross-lingual entity alignment (EA) aims to find the equivalent entities between cross-lingual KGs (Knowledge Graphs), which is a crucial step for integrating KGs. Recently, many GNN-based EA methods have been proposed and show decent performance improvements on several public datasets. However, existing GNN-based EA methods inevitably inherit poor interpretability and low efficiency from neural networks. Motivated by the isomorphic assumption of GNN-based methods, we successfully transform the cross-lingual EA problem into an assignment problem. Based on this re-definition, we propose a frustratingly Simple but Effective Unsupervised entity alignment method (SEU) without neural networks. Extensive experiments have been conducted to show that our proposed unsupervised approach even beats advanced supervised methods across all public datasets while having high efficiency, interpretability, and stability.
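The re-definition above can be illustrated with a toy sketch: given similarity scores between the entities of two KGs, alignment becomes a classic assignment problem (find the one-to-one matching maximizing total similarity), solvable without any training. The embeddings below are random stand-ins, not the paper's actual features, and brute force over permutations stands in for the scalable solver the paper would use at KG size.

```python
import itertools
import math
import random

random.seed(0)

# Toy entity embeddings for two small KGs (hypothetical stand-ins;
# real EA features would come from the KGs themselves).
src = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
perm = [2, 0, 3, 1]  # ground truth: target row k corresponds to source row perm[k]
tgt = [[v + 0.01 * random.gauss(0, 1) for v in src[p]] for p in perm]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Similarity matrix between source and target entities.
sim = [[cosine(s, t) for t in tgt] for s in src]

# The assignment problem: choose a one-to-one matching that maximizes
# total similarity. Brute force over permutations suffices at toy size.
best = max(itertools.permutations(range(len(src))),
           key=lambda p: sum(sim[i][p[i]] for i in range(len(src))))
print(list(best))  # best[i] is the target entity aligned to source entity i
```

Because the target KG is just a lightly perturbed permutation of the source, the recovered matching inverts `perm`, with no neural network involved. At real KG scale, exact enumeration is infeasible, so an approximate solver for the assignment problem would replace the `max` over permutations.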