
Conceptual Spaces for Cognitive Architectures: A Lingua Franca for Different Levels of Representation

Posted by Antonio Lieto
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [Laird (2012)]) adopt a classical symbolic approach, some (e.g. LEABRA [O'Reilly and Munakata (2000)]) are based on a purely connectionist model, while others (e.g. CLARION [Sun (2006)]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning are also available [Kurup and Chandrasekaran (2007)]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [Gärdenfors (2000)] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [Gärdenfors (1997)] for defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic ones.
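To make the role of this intermediate level concrete, the sketch below (not taken from the paper; all dimension names, prototypes and values are invented for illustration) shows a conceptual space in miniature: concepts are prototype points over quality dimensions, a sub-symbolic feature vector is categorized by metric distance to the nearest prototype, and the result is a symbolic label.

```python
import math

# Toy conceptual space: each quality dimension spans a normalized [0, 1] scale.
# Dimension names and prototype values are invented for illustration only.
DIMENSIONS = ("hue", "size", "sweetness")

# Concepts as prototype points; categorization partitions the space into
# Voronoi-like cells around each prototype (in the spirit of Gärdenfors).
PROTOTYPES = {
    "apple": (0.80, 0.30, 0.70),
    "lemon": (0.15, 0.25, 0.10),
    "melon": (0.35, 0.90, 0.60),
}

def distance(x, y):
    """Euclidean distance over the quality dimensions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def categorize(observation):
    """Map a sub-symbolic feature vector to the symbolic label of the
    nearest prototype -- the conceptual level mediating the two."""
    return min(PROTOTYPES, key=lambda c: distance(observation, PROTOTYPES[c]))

if __name__ == "__main__":
    # A perceptual input (e.g. produced by a connectionist layer).
    percept = (0.75, 0.35, 0.65)
    print(categorize(percept))  # -> "apple"
```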




Read also

We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of concept combination of prototypical concepts. The proposed logic relies on the logic of typicality ALC TR, whose semantics is based on the notion of rational closure, as well as on the distributed semantics of probabilistic Description Logics, and is equipped with a cognitive heuristic used by humans for concept composition. We first extend the logic of typicality ALC TR with typicality inclusions whose intuitive meaning is that there is a probability p that typical Cs are Ds. As in the distributed semantics, we define different scenarios containing only some of the typicality inclusions, each one having a suitable probability. We then focus on those scenarios whose probabilities belong to a given and fixed range, and we exploit such scenarios in order to ascribe typical properties to a concept C obtained as the combination of two prototypical concepts. We also show that reasoning in the proposed Description Logic is EXPTIME-complete, as for the underlying ALC.
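A minimal sketch of the scenario machinery just described, with invented typicality inclusions and probabilities: each scenario keeps some of the probabilistic inclusions and discards the others, its probability is the product of p for the kept ones and (1 - p) for the discarded ones, and only scenarios whose probability falls in a given fixed range are retained. This illustrates the combinatorics only; it is not the authors' implementation.

```python
from itertools import combinations

# Invented probabilistic typicality inclusions: "typical Cs are Ds with probability p".
INCLUSIONS = [
    ("Typical(Penguin) is-a Bird", 0.95),
    ("Typical(Penguin) is-a NotFlies", 0.80),
    ("Typical(Bird) is-a Flies", 0.90),
]

def scenarios(inclusions):
    """Enumerate every scenario (subset of inclusions) with its probability,
    computed as in the distributed semantics: product of p for kept
    inclusions and (1 - p) for discarded ones."""
    n = len(inclusions)
    for r in range(n + 1):
        for kept in combinations(range(n), r):
            prob = 1.0
            for i, (_, p) in enumerate(inclusions):
                prob *= p if i in kept else (1.0 - p)
            yield [inclusions[i][0] for i in kept], prob

def scenarios_in_range(inclusions, lo, hi):
    """Keep only the scenarios whose probability lies in the fixed range [lo, hi]."""
    return [(s, p) for s, p in scenarios(inclusions) if lo <= p <= hi]

if __name__ == "__main__":
    for kept, p in scenarios_in_range(INCLUSIONS, 0.05, 1.0):
        print(f"p = {p:.3f}  keeps: {kept}")
```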
Antonio Lieto (2015)
This article addresses an open problem in the area of cognitive systems and architectures: namely the problem of handling (in terms of processing and reasoning capabilities) complex knowledge structures that can be at least plausibly comparable, both in terms of size and of typology of the encoded information, to the knowledge that humans process daily for executing everyday activities. Handling a huge amount of knowledge, and selectively retrieving it according to the needs emerging in different situational scenarios, is an important aspect of human intelligence. For this task, in fact, humans adopt a wide range of heuristics (Gigerenzer and Todd) due to their bounded rationality (Simon, 1957). In this perspective, one of the requirements that should be considered for the design, the realization and the evaluation of intelligent cognitively inspired systems should be their ability to heuristically identify and retrieve, from the general knowledge stored in their artificial Long Term Memory (LTM), the knowledge that is synthetically and contextually relevant. This requirement, however, is often neglected. Currently, artificial cognitive systems and architectures are not able, de facto, to deal with complex knowledge structures that are even slightly comparable to the knowledge heuristically managed by humans. In this paper I will argue that this is not only a technological problem but also an epistemological one, and I will briefly sketch a proposal for a possible solution.
The accumulation of adaptations in an open-ended manner during lifetime learning is a holy grail in reinforcement learning, intrinsic motivation, artificial curiosity, and developmental robotics. We present a specification for a cognitive architecture that is capable of specifying an unlimited range of behaviors. We then give examples of how it can stochastically explore an interesting space of adjacent possible behaviors. There are two main novelties; the first is a proper definition of the fitness of self-generated games such that interesting games are expected to evolve. The second is a modular and evolvable behavior language that has systematicity, productivity, and compositionality, i.e. it is a physical symbol system. A part of the architecture has already been implemented on a humanoid robot.
Conceptual spaces are geometric representations of conceptual knowledge, in which entities correspond to points, natural properties correspond to convex regions, and the dimensions of the space correspond to salient features. While conceptual spaces enable elegant models of various cognitive phenomena, the lack of automated methods for constructing such representations has so far limited their application in artificial intelligence. To address this issue, we propose a method which learns a vector-space embedding of entities from Wikipedia and constrains this embedding such that entities of the same semantic type are located in some lower-dimensional subspace. We experimentally demonstrate the usefulness of these subspaces as (approximate) conceptual space representations by showing, among other things, that important features can be modelled as directions and that natural properties tend to correspond to convex regions.
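As a rough illustration of the "features as directions" idea (using toy vectors rather than the paper's Wikipedia-based embeddings or its actual learning procedure), one simple stand-in is to take the difference between the mean embeddings of entities that have a property and of those that lack it, and then rank all entities by their projection onto that direction.

```python
import numpy as np

# Toy 3-d "embeddings" for a handful of entities; all values are invented.
EMBEDDINGS = {
    "strawberry": np.array([0.9, 0.2, 0.1]),
    "lemon":      np.array([0.1, 0.8, 0.2]),
    "cherry":     np.array([0.8, 0.1, 0.3]),
    "lime":       np.array([0.2, 0.9, 0.1]),
}

def property_direction(positives, negatives):
    """Crude 'feature as a direction': difference of class means, normalized.
    (The paper learns directions from Wikipedia-based embeddings; this is
    only a stand-in to show the idea.)"""
    pos = np.mean([EMBEDDINGS[e] for e in positives], axis=0)
    neg = np.mean([EMBEDDINGS[e] for e in negatives], axis=0)
    d = pos - neg
    return d / np.linalg.norm(d)

def rank_along(direction):
    """Order entities by their projection onto the direction,
    i.e. by how strongly they exhibit the feature."""
    return sorted(EMBEDDINGS, key=lambda e: float(EMBEDDINGS[e] @ direction), reverse=True)

if __name__ == "__main__":
    sweet = property_direction(positives=["strawberry", "cherry"],
                               negatives=["lemon", "lime"])
    print(rank_along(sweet))  # sweetest-first ordering of the toy entities
```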
Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance relative to the state of the art.
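A generic way to let a knowledge base guide a neural learner (a bare-bones sketch, not the systems discussed in the abstract above) is to add a penalty whenever the network's outputs violate a symbolic rule. The fragment below does this for a single invented rule over toy multi-label sigmoid outputs, using plain numpy; the rule, labels and weighting coefficient are all assumptions made for illustration.

```python
import numpy as np

# Toy sigmoid outputs of a hypothetical multi-label network for the
# labels: 0 = "penguin", 1 = "bird", 2 = "flies". Values are invented.
outputs = np.array([0.70, 0.20, 0.60])

def bce(p, label_index):
    """Binary cross-entropy for a label that is true in the ground truth."""
    return -np.log(p[label_index] + 1e-9)

def rule_penalty(p):
    """Soft penalty for violating the invented rule 'penguin -> not flies':
    the more mass placed on both 'penguin' and 'flies', the larger it is."""
    return p[0] * p[2]

# Combined neuro-symbolic objective: data-fit term plus knowledge-based
# regularizer, weighted by an assumed coefficient lam.
lam = 0.5
loss = bce(outputs, label_index=0) + lam * rule_penalty(outputs)
print(f"total loss = {loss:.3f}")
```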
