This paper introduces a novel approach to learning visually grounded meaning representations of words as low-dimensional node embeddings on an underlying graph hierarchy. The lower level of the hierarchy models modality-specific word representations, each conditioned on the other modality, through dedicated but communicating graphs, while the higher level combines these representations on a single graph to learn a representation jointly from both modalities. The topology of each graph models similarity relations among words, and is estimated jointly with the graph embedding. The assumption underlying this model is that words sharing similar meaning correspond to communities of an underlying graph in a low-dimensional space. We name this model Hierarchical Multi-Modal Similarity Graph Embedding (HM-SGE). Experimental results validate the ability of HM-SGE to simulate human similarity judgments and concept categorization, outperforming the state of the art.
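To make the two-level idea concrete, the following is a minimal sketch, not the authors' HM-SGE model: it builds a kNN similarity graph per modality (the lower level), fuses them into a single joint graph (the higher level), and embeds the joint graph's nodes via the normalized graph Laplacian. The feature matrices, the averaging fusion, and the Laplacian embedding are all illustrative assumptions; HM-SGE instead estimates graph topology jointly with the embedding.

```python
import numpy as np

def knn_graph(features, k=2):
    # Symmetrized cosine-similarity kNN adjacency (illustrative lower-level graph).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in np.argsort(-sim[i]) if j != i][:k]
        A[i, nbrs] = sim[i, nbrs]
    return np.maximum(A, A.T)

def laplacian_embedding(A, dim=2):
    # Node embedding from the bottom eigenvectors of the normalized Laplacian
    # (a stand-in for the paper's jointly estimated graph embedding).
    d = A.sum(axis=1)
    d_is = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]  # skip the trivial constant eigenvector

# Toy data: 6 "words", each with a textual and a visual feature vector.
rng = np.random.default_rng(0)
text_feats = rng.normal(size=(6, 10))
vis_feats = rng.normal(size=(6, 10))

# Lower level: one similarity graph per modality.
A_text = knn_graph(text_feats)
A_vis = knn_graph(vis_feats)

# Higher level: fuse the modality graphs into one joint graph and embed it.
A_joint = 0.5 * (A_text + A_vis)
emb = laplacian_embedding(A_joint, dim=2)
print(emb.shape)  # (6, 2)
```

Words that fall into the same community of the joint graph end up close in the low-dimensional embedding space, which is the core assumption the abstract states.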