
Some Epistemological Problems with the Knowledge Level in Cognitive Architectures

Published by Antonio Lieto
Publication date: 2015
Research field: Informatics Engineering
Paper language: English
Author: Antonio Lieto





This article addresses an open problem in the area of cognitive systems and architectures: namely, the problem of handling (in terms of processing and reasoning capabilities) complex knowledge structures that are at least plausibly comparable, both in size and in the typology of the encoded information, to the knowledge that humans process daily in executing everyday activities. Handling a huge amount of knowledge, and selectively retrieving it according to the needs emerging in different situational scenarios, is an important aspect of human intelligence. For this task, in fact, humans adopt a wide range of heuristics (Gigerenzer and Todd, 1999) due to their bounded rationality (Simon, 1957). In this perspective, one of the requirements that should be considered for the design, realization, and evaluation of intelligent, cognitively inspired systems is their ability to heuristically identify and retrieve, from the general knowledge stored in their artificial Long Term Memory (LTM), the portion that is synthetically and contextually relevant. This requirement, however, is often neglected. Currently, artificial cognitive systems and architectures are not able, de facto, to deal with complex knowledge structures that are even slightly comparable to the knowledge heuristically managed by humans. In this paper I will argue that this is not only a technological problem but also an epistemological one, and I will briefly sketch a proposal for a possible solution.
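To make the retrieval requirement concrete, here is a minimal, purely illustrative Python sketch (not the paper's proposal; all names and the tag-based scoring are hypothetical) of a long-term memory whose items are selected by a cheap contextual-overlap heuristic rather than by exhaustive search:

```python
# Illustrative sketch only: a toy long-term memory whose items are retrieved
# by a cheap contextual-overlap heuristic, mimicking bounded-rational
# selection of contextually relevant knowledge.

from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    tags: set = field(default_factory=set)  # contextual cues attached at encoding

class ToyLTM:
    def __init__(self):
        self.items = []

    def store(self, content, tags):
        self.items.append(MemoryItem(content, set(tags)))

    def retrieve(self, context, k=3):
        # Heuristic: rank items by the overlap between the current context
        # cues and each item's tags; discard items with no overlap at all.
        scored = [(len(item.tags & context), item) for item in self.items]
        scored = [(s, it) for s, it in scored if s > 0]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [it.content for _, it in scored[:k]]

ltm = ToyLTM()
ltm.store("umbrellas keep you dry", {"rain", "outdoors"})
ltm.store("coffee is served hot", {"kitchen", "morning"})
ltm.store("wet floors are slippery", {"rain", "indoors"})

print(ltm.retrieve({"rain", "outdoors"}))  # contextually relevant items first
```

The point of the sketch is only that retrieval cost depends on a shallow cue-matching pass, not on reasoning over the whole knowledge base, which is the kind of heuristic selectivity the abstract asks of artificial LTMs.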


Read also

During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [Laird (2012)]) adopt a classical symbolic approach, some (e.g. LEABRA [O'Reilly and Munakata (2000)]) are based on a purely connectionist model, while others (e.g. CLARION [Sun (2006)]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning are also available [Kurup and Chandrasekaran (2007)]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [Gärdenfors (2000)] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [Gärdenfors (1997)] in defense of the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one.
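For readers unfamiliar with the framework, here is a minimal sketch of the core idea behind Conceptual Spaces: concepts live as prototype points in a space of quality dimensions, and categorization is nearest-prototype classification, which induces the Voronoi tessellation Gärdenfors uses to model convex concept regions. The toy color domain, its dimensions, and the prototype values below are assumptions for illustration only:

```python
# Minimal sketch of Conceptual Spaces: concepts as prototypes in a space of
# quality dimensions; categorization assigns a point to the concept whose
# prototype is nearest (i.e., to a cell of the induced Voronoi tessellation).

import math

# Toy "color" domain: (hue, saturation, brightness), normalized to [0, 1].
# Dimension names and prototype coordinates are illustrative assumptions.
PROTOTYPES = {
    "red":    (0.00, 0.9, 0.6),
    "yellow": (0.17, 0.9, 0.8),
    "blue":   (0.60, 0.8, 0.5),
}

def distance(p, q):
    return math.dist(p, q)  # Euclidean metric over the quality dimensions

def categorize(point):
    # Nearest prototype wins: the point lies in that concept's Voronoi cell.
    return min(PROTOTYPES, key=lambda c: distance(point, PROTOTYPES[c]))

print(categorize((0.05, 0.8, 0.55)))  # -> 'red'
print(categorize((0.55, 0.7, 0.45)))  # -> 'blue'
```

The geometric nature of the representation is what lets the framework sit between levels: the coordinates can come from sub-symbolic processing, while the prototype regions ground symbolic labels.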
This work summarizes part of the current knowledge on high-level cognitive processes and their relation to biological hardware. On this basis, it is possible to identify some paradoxes that could impact the development of future technologies and artificial intelligence: we may build a high-level cognitive machine only by sacrificing the principal attribute of a machine, its accuracy.
The graph of a Bayesian Network (BN) can be machine-learned, determined by causal knowledge, or a combination of both. In disciplines like bioinformatics, applying BN structure learning algorithms can reveal new insights that would otherwise remain unknown. However, these algorithms are less effective when the input data are limited in sample size, which is often the case when working with real data. This paper focuses on purely machine-learned and purely knowledge-based BNs and investigates their differences in terms of graphical structure and how well the implied statistical models explain the data. The tests are based on four previous case studies whose BN structure was determined by domain knowledge. Using various metrics, we compare the knowledge-based graphs to the machine-learned graphs generated by various algorithms implemented in TETRAD, spanning all three classes of learning. The results show that, while the algorithms produce graphs with much higher model selection scores, the knowledge-based graphs are more accurate predictors of the variables of interest. Maximising the fitting score is ineffective in the presence of limited sample size, because the fitting becomes increasingly distorted with limited data, guiding algorithms towards graphical patterns that share higher fitting scores and yet deviate considerably from the true graph. This highlights the value of causal knowledge in these cases, as well as the need for fitting scores more suitable for limited data. Lastly, the experiments also provide new evidence supporting the notion that results from simulated data tell us little about actual real-world performance.
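As an aside, one common way to quantify how far a machine-learned graph is from a knowledge-based one is the structural Hamming distance (SHD) over directed edges. The sketch below uses hypothetical variable names and a simplified SHD variant that counts a reversed edge as two errors; published variants score reversals differently:

```python
# Illustrative sketch: structural Hamming distance between two directed
# graphs, represented as sets of (parent, child) edges. Counts edges present
# in one graph but not the other; a reversal contributes two errors here.

def shd(edges_a, edges_b):
    a, b = set(edges_a), set(edges_b)
    return len(a ^ b)  # symmetric difference: missing edges + extra edges

knowledge_based = {("smoking", "cancer"), ("cancer", "xray")}
machine_learned = {("smoking", "cancer"), ("xray", "cancer")}  # reversed edge

print(shd(knowledge_based, machine_learned))  # -> 2 (one reversal, counted twice)
```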
Multitasking optimization is a recently introduced paradigm focused on simultaneously solving multiple optimization problem instances (tasks). The goal of multitasking environments is to dynamically exploit the existing complementarities and synergies among tasks, which help each other through the transfer of genetic material. More concretely, Evolutionary Multitasking (EM) refers to the resolution of multitasking scenarios using concepts inherited from Evolutionary Computation. EM approaches such as the well-known Multifactorial Evolutionary Algorithm (MFEA) have lately been gaining notable research momentum for facing multiple optimization problems. This work focuses on the application of the recently proposed Multifactorial Cellular Genetic Algorithm (MFCGA) to the well-known Capacitated Vehicle Routing Problem (CVRP). Overall, 11 different multitasking setups have been built using 12 datasets. The contribution of this research is twofold. On the one hand, this is the first application of the MFCGA to the Vehicle Routing Problem family. On the other hand, equally interesting is the second contribution, which focuses on the quantitative analysis of the positive genetic transferability among the problem instances. To that end, we provide an empirical demonstration of the synergies that arise between the different optimization tasks.
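The multifactorial mechanism the abstract refers to can be sketched in a few lines. The toy loop below is an illustrative simplification, not the MFCGA of the paper, and the tasks and parameters are made up; it shows the two key ingredients: each individual carries a skill factor naming the one task it is evaluated on, and cross-task mating with probability RMP occasionally transfers genetic material between tasks:

```python
# Toy multifactorial evolutionary loop (illustrative simplification of the
# MFEA idea): one population, two tasks, skill factors, cross-task mating.

import random

TASKS = [
    lambda x: sum(v * v for v in x),         # task 0: sphere function
    lambda x: sum(abs(v - 1.0) for v in x),  # task 1: shifted L1 function
]
DIM, POP, GENS, RMP = 5, 40, 100, 0.3  # RMP = random mating probability

def new_individual():
    return [random.uniform(-2.0, 2.0) for _ in range(DIM)]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(x):
    return [v + random.gauss(0.0, 0.1) for v in x]

# Each individual is (genome, skill_factor): the task it is evaluated on.
pop = [(new_individual(), t % len(TASKS)) for t in range(POP)]

for _ in range(GENS):
    children = []
    for _ in range(POP):
        (ga, sa), (gb, sb) = random.sample(pop, 2)
        if sa == sb or random.random() < RMP:   # same task, or cross-task transfer
            child = mutate(crossover(ga, gb))
            skill = random.choice([sa, sb])     # child imitates one parent's task
        else:
            child = mutate(ga)
            skill = sa
        children.append((child, skill))
    # Elitist selection per task: a crude stand-in for MFEA's factorial ranks.
    merged = pop + children
    pop = []
    for t in range(len(TASKS)):
        group = sorted((g for g, s in merged if s == t), key=TASKS[t])
        pop.extend((g, t) for g in group[: POP // len(TASKS)])

for t, f in enumerate(TASKS):
    best = min((f(g) for g, s in pop if s == t), default=None)
    print(f"task {t}: best fitness {best}")
```

When the tasks share structure, genomes bred for one task can seed good solutions for the other, which is the "positive genetic transferability" the paper quantifies.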
Modularity is a central principle throughout the design process for cyber-physical systems. Modularity reduces complexity and increases the reuse of behavior. In this paper we pose and answer the following question: how can we identify independent 'modules' within the structure of reactive control architectures? To this end, we propose a graph-structured control architecture we call a decision structure, and show how it generalises some reactive control architectures that are popular in Artificial Intelligence (AI) and robotics, specifically Teleo-Reactive programs (TRs), Decision Trees (DTs), Behavior Trees (BTs) and Generalised Behavior Trees ($k$-BTs). Inspired by the definition of a module in graph theory, we define modules in decision structures and show how each decision structure possesses a canonical decomposition into its modules. We can naturally characterise each of the BTs, $k$-BTs, DTs and TRs by properties of their module decomposition. This allows us to recognise which decision structures are equivalent to each of these architectures in quadratic time. Our proposed concept of modules extends to formal verification under any verification scheme capable of verifying a decision structure. Specifically, we prove that a modification to a module within a decision structure has no greater flow-on effects than a modification to an individual action within that structure. This enables verification of modules to be done locally and hierarchically, where structures can be verified and then repeatedly locally modified, with modules replaced by modules, while preserving correctness. To illustrate the findings, we present an example of a solar-powered drone controlled by a decision structure. We use a Linear Temporal Logic-based verification scheme to verify the correctness of this structure, and then show how one can modify modules while preserving its correctness.
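For readers unfamiliar with the graph-theoretic notion this paper builds on, the brute-force Python sketch below (illustrative only, exponential in the graph size, so for tiny graphs) checks the classical definition of a module in a directed graph: a vertex set M such that every vertex outside M relates uniformly, in both directions, to all members of M. The paper's modules in decision structures generalise this idea:

```python
# Brute-force check of the classical graph-theoretic module definition:
# M is a module iff every vertex outside M has edges to either all of M or
# none of M, and likewise receives edges from all of M or none of M.

from itertools import combinations

def is_module(nodes, edges, M):
    M = set(M)
    for v in nodes - M:
        out_to_M = {m for m in M if (v, m) in edges}
        in_from_M = {m for m in M if (m, v) in edges}
        # v must connect uniformly to M: all of it or none of it, both directions.
        if out_to_M not in (set(), M) or in_from_M not in (set(), M):
            return False
    return True

def nontrivial_modules(nodes, edges):
    for r in range(2, len(nodes)):
        for M in combinations(nodes, r):
            if is_module(nodes, edges, set(M)):
                yield set(M)

nodes = {"a", "b", "c", "d"}
edges = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(list(nontrivial_modules(nodes, edges)))  # {'b', 'c'} is the only one
```

Because the rest of the graph cannot distinguish the members of a module, replacing one verified module with another verified module leaves the surrounding structure's view unchanged, which is the intuition behind the paper's local, hierarchical verification result.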
