
From Spatial Relations to Spatial Configurations

Posted by: Soham Dan
Publication date: 2020
Research field: Informatics Engineering
Language: English





Spatial reasoning from language is essential for natural language understanding. Supporting it requires a representation scheme that can capture spatial phenomena encountered in language as well as in images and videos. Existing spatial representations are not sufficient for describing spatial configurations used in complex tasks. This paper extends the capabilities of existing spatial representation languages and increases coverage of the semantic aspects that are needed to ground the spatial meaning of natural language text in the world. Our spatial relation language is able to represent a large, comprehensive set of spatial concepts crucial for reasoning and is designed to support the composition of static and dynamic spatial configurations. We integrate this language with the Abstract Meaning Representation (AMR) annotation schema and present a corpus annotated by this extended AMR. To exhibit the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with the fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and discourse representations as a whole.
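To make the idea of composing static and dynamic spatial configurations more concrete, here is a minimal Python sketch of how such configurations might be modeled as data. All class and field names are hypothetical illustrations and do not reproduce the paper's actual annotation schema or its AMR extension.

```python
# Hypothetical sketch of spatial-configuration objects; names are illustrative
# only and are not taken from the paper's schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Entity:
    """A trajector or landmark mentioned in the text."""
    name: str


@dataclass
class SpatialRelation:
    """A single relation, e.g. 'left-of' or 'behind', between two entities."""
    relation_type: str                     # e.g. "left-of", "behind", "in"
    trajector: Entity                      # the located object
    landmark: Entity                       # the reference object
    frame_of_reference: str = "relative"   # "relative", "intrinsic", or "absolute"


@dataclass
class SpatialConfiguration:
    """A static scene or a dynamic event composed of several relations."""
    relations: List[SpatialRelation] = field(default_factory=list)
    motion: Optional[str] = None           # e.g. "roll", "walk"; None for static scenes


# Example: "The ball rolled behind the chair, to the left of the table."
ball, chair, table = Entity("ball"), Entity("chair"), Entity("table")
config = SpatialConfiguration(
    relations=[
        SpatialRelation("behind", ball, chair),
        SpatialRelation("left-of", ball, table),
    ],
    motion="roll",
)
print(config)
```

A full scheme would also need to cover quantitative relations, paths, and richer frame-of-reference distinctions than the single field shown here.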




Read also

Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world. In particular, spatial relations are encoded in a way that is inconsistent with human spatial reasoning and lacking invariance to viewpoint changes. We present a system capable of capturing the semantics of spatial relations such as behind, left of, etc. from natural language. Our key contributions are a novel multi-modal objective based on generating images of scenes from their textual descriptions, and a new dataset on which to train it. We demonstrate that internal representations are robust to meaning-preserving transformations of descriptions (paraphrase invariance), while viewpoint invariance is an emergent property of the system.
We deal with the navigation problem where the agent follows natural language instructions while observing the environment. Focusing on language understanding, we show the importance of spatial semantics in grounding navigation instructions into visual perceptions. We propose a neural agent that uses the elements of spatial configurations and investigate their influence on the navigation agent's reasoning ability. Moreover, we model the sequential execution order and align visual objects with spatial configurations in the instruction. Our neural agent improves over strong baselines on the seen environments and shows competitive performance on the unseen environments. Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.
133 - Soham Dan, Hangfeng He, Dan Roth (2020)
Recognizing spatial relations and reasoning about them is essential in multiple applications, including navigation, direction giving, and human-computer interaction in general. Spatial relations between objects can be either explicit -- expressed as spatial prepositions -- or implicit -- expressed by spatial verbs such as moving, walking, shifting, etc. Both of these, but implicit relations in particular, require significant common-sense understanding. In this paper, we introduce the task of inferring implicit and explicit spatial relations between two entities in an image. We design a model that uses both textual and visual information to predict the spatial relations, making use of both positional and size information of objects and image embeddings. We contrast our spatial model with powerful language models and show how our modeling complements their strengths, improving prediction accuracy and coverage and facilitating the handling of unseen subjects, objects, and relations.
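As a generic illustration of the kind of model this abstract describes, the sketch below concatenates normalized position and size features of two objects with precomputed text and image embeddings and scores a set of relation classes. It assumes PyTorch; the dimensions, class count, and class name are invented for illustration and are not taken from the paper.

```python
# Generic sketch (not the authors' architecture) of a classifier that combines
# positional/size features of two objects with text and image embeddings.
import torch
import torch.nn as nn

NUM_RELATIONS = 10            # hypothetical number of relation classes
TEXT_DIM, IMG_DIM = 300, 512  # assumed embedding sizes


class SpatialRelationClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # 2 boxes x (x, y, w, h), normalized to the image = 8 geometric features
        in_dim = 8 + TEXT_DIM + IMG_DIM
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_RELATIONS),
        )

    def forward(self, subj_box, obj_box, text_emb, img_emb):
        # subj_box, obj_box: (batch, 4); text_emb: (batch, TEXT_DIM); img_emb: (batch, IMG_DIM)
        features = torch.cat([subj_box, obj_box, text_emb, img_emb], dim=-1)
        return self.mlp(features)  # unnormalized scores over relation classes


# Toy usage with random tensors standing in for real features.
model = SpatialRelationClassifier()
scores = model(torch.rand(2, 4), torch.rand(2, 4),
               torch.rand(2, TEXT_DIM), torch.rand(2, IMG_DIM))
print(scores.shape)  # torch.Size([2, 10])
```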
This paper concerns the intersection of natural language and the physical space around us in which we live, and that we observe and/or imagine things within. Many important features of language have spatial connotations; for example, many prepositions (like in, next to, after, on, etc.) are fundamentally spatial. Space is also a key factor in the meanings of many words/phrases/sentences/text, and space is a key, if not the key, context for referencing (e.g. pointing) and embodiment. We propose a mechanism for how space and linguistic structure can be made to interact in a matching compositional fashion. Examples include Cartesian space, subway stations, chess pieces on a chessboard, and Penrose's staircase. The starting point for our construction is the DisCoCat model of compositional natural language meaning, which we relax to accommodate physical space. We address the issue of having multiple agents/objects in a space, including the case that each agent has different capabilities with respect to that space, e.g., the specific moves each chess piece can make, or the different velocities one may be able to reach. Once our model is in place, we show how inferences drawing on the structure of physical space can be made. We also show how the linguistic model of space can interact with other such models related to our senses and/or embodiment, such as the conceptual spaces of colour, taste and smell, resulting in a rich compositional model of meaning that is close to human experience and embodiment in the world.
371 - Su-Heng Zhang, Lu Gao, Jun Xiong (2008)
It is well known that direct observation of an interference or diffraction pattern in the intensity distribution requires a spatially coherent source. Optical waves emitted from portions beyond the coherence area possess statistically independent phases and will degrade the interference pattern. In this paper we present an optical interference experiment, seemingly contrary to this common knowledge, in which the formation of the interference pattern relies on a spatially incoherent light source. Our experimental scheme is very similar to Gabor's original proposal of holography [1], just with an incoherent source replacing the coherent one. In the statistical ensemble of the incoherent source, each sample field produces a sample interference pattern between the object wave and the reference wave. These patterns differ completely from each other due to the fluctuation of the source field distribution. Surprisingly, the sum of a great number of sample patterns explicitly exhibits an interference pattern that contains all the information of the object and is equivalent to a hologram in the coherent-light case. In this sense our approach would be valuable for holography and other interference techniques in cases where a coherent source is unavailable, such as x-ray and electron sources.