Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions of the two KGs, such that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to mitigate gradient vanishing during the adversarial attack, further improving attack effectiveness.
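The entity density maximization idea can be illustrated with a minimal sketch: score each entity by the local density of its neighborhood in embedding space, then select the densest entities as attack targets, where perturbations are hardest to notice. The k-nearest-neighbor density estimate and the function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def knn_density(embeddings, k=5):
    """Inverse of the mean distance to the k nearest neighbors (higher = denser)."""
    # Pairwise Euclidean distances between all entity embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Sort each row; column 0 is the self-distance (zero), so skip it.
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    return 1.0 / (knn.mean(axis=1) + 1e-12)

def pick_dense_targets(embeddings, n_targets, k=5):
    """Pick the entities lying in the densest regions as attack targets."""
    density = knn_density(embeddings, k)
    return np.argsort(-density)[:n_targets]

# Toy example: 50 random 16-dimensional entity embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 16))
targets = pick_dense_targets(emb, n_targets=5)
```

In a full attack, the same scoring would be applied to both KGs so that structural perturbations (added or removed triples) concentrate around these high-density entities.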