Nickel and Kiela (2017) present a new method for embedding tree nodes in the Poincaré ball, and suggest that these hyperbolic embeddings are far more effective than Euclidean embeddings for representing nodes in large, hierarchically structured graphs such as the WordNet nouns hypernymy tree. This is especially true in low dimensions (Nickel and Kiela, 2017, Table 1). In this work, we seek to reproduce their experiments on embedding and reconstructing the WordNet nouns hypernymy graph. Counter to what they report, we find that Euclidean embeddings are able to represent this tree at least as well as Poincaré embeddings when allowed at least 50 dimensions. We note that this does not diminish the significance of their work, given the impressive performance of hyperbolic embeddings in very low-dimensional settings. However, given the wide influence of their work, our aim here is to present an updated and more accurate comparison between Euclidean and hyperbolic embeddings.
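For context, the Poincaré-ball distance that Nickel and Kiela (2017) optimize is d(u, v) = arcosh(1 + 2||u − v||² / ((1 − ||u||²)(1 − ||v||²))), compared against the ordinary Euclidean distance ||u − v||. Below is a minimal NumPy sketch of the two distance functions (our own illustration, not the authors' released code). It shows how hyperbolic distances grow rapidly near the boundary of the unit ball, which is what lets low-dimensional hyperbolic embeddings mimic the exponential growth of trees.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Poincaré-ball distance from Nickel and Kiela (2017):
    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2))).
    Points u and v must lie strictly inside the unit ball."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

def euclidean_distance(u, v):
    return np.linalg.norm(u - v)

# Two points that are close in Euclidean terms but far apart
# hyperbolically, because both sit near the ball's boundary.
v = np.array([0.90, 0.0])
w = np.array([0.99, 0.0])
print(euclidean_distance(v, w))  # ~0.09
print(poincare_distance(v, w))   # ~2.35, much larger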