In many respects, the human mind remains a process that neuroscience does not yet understand. Nevertheless, for decades the scientific community has proposed computational models that attempt to simulate its components, specific functions, or its behavior in different situations. The most complete model in this line is undoubtedly the LIDA model, proposed by Stan Franklin to serve as a generic computational architecture for several applications. The present project draws on the LIDA model to address movie recommendation: the resulting model, MIRA (Movie Intelligent Recommender Agent), achieved precision comparable to a traditional model when evaluated under the same experimental conditions. Moreover, MIRA maintained its precision in tests with volunteers, confirming its performance as a cognitive model when run on small data volumes. Given that the proposed model behaved similarly to traditional models under conditions comparable to those expected for natural systems, MIRA reinforces the applicability of LIDA as a path for studying and building computational agents inspired by neural behavior.
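A minimal sketch of how such a precision comparison might be set up, assuming a hypothetical User record and placeholder recommenders (mira_recommend and baseline_recommend are illustrative names, not the paper's code):

```python
from collections import namedtuple

# Hypothetical evaluation harness: precision@k for two recommenders on the
# same held-out users, mirroring the "same experimental conditions"
# comparison described in the abstract. All names are illustrative.
User = namedtuple("User", ["history", "liked"])

def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended movies the user actually liked."""
    top_k = recommended[:k]
    return sum(1 for movie in top_k if movie in relevant) / k

def compare(recommenders, test_users, k=10):
    """Mean precision@k per recommender over the same test users."""
    return {
        name: sum(precision_at_k(rec(u.history), u.liked, k)
                  for u in test_users) / len(test_users)
        for name, rec in recommenders.items()
    }

# Usage (placeholder recommenders map a watch history to a ranked list):
# compare({"MIRA": mira_recommend, "baseline": baseline_recommend}, users)
```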
The paper proposes a novel cognitive architecture (CA) for computational creativity based on the Psi model and on mechanisms inspired by dual process theories of reasoning and rationality. In recent years, many cognitive models have drawn on dual process theories to better describe and implement complex cognitive skills in artificial agents, but creativity has been approached only at a descriptive level. In previous works we have described various modules of the cognitive architecture that allow a robot to produce creative paintings. Drawing on dual process theories, we refine some of the mechanisms involved in producing artworks; in particular, we detail the resolution level of the CA, which governs the strategies of access to the Long Term Memory (LTM) and manages the interaction between the S1 and S2 processes of the dual process theory. The creative process involves both divergent and convergent processes, each in either an implicit or an explicit manner. This leads to four activities (exploratory, reflective, tacit, and analytic) that, triggered by urges and motivations, generate creative acts. These creative acts exploit both the LTM and the Working Memory (WM) to make novel substitutions in a perceived image by properly mixing parts of pictures from different domains. The paper highlights the role of the interaction between S1 and S2 processes, modulated by the resolution level, which focuses the attention of the creative agent by broadening or narrowing the exploration of novel solutions, or even by drawing the solution from a set of ready-made associations. An example of an artificial painter is described through experiments on a robotic platform.
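A minimal sketch, under stated assumptions, of how the four activities and the resolution level could be expressed. The pairing of axes to activities is one plausible reading of the abstract, and all identifiers are illustrative rather than the authors' implementation:

```python
# One plausible mapping of the two axes (divergent/convergent x
# implicit/explicit) to the four activities named in the abstract.
ACTIVITIES = {
    ("divergent", "implicit"): "exploratory",
    ("divergent", "explicit"): "reflective",
    ("convergent", "implicit"): "tacit",
    ("convergent", "explicit"): "analytic",
}

def select_activity(mode, awareness):
    return ACTIVITIES[(mode, awareness)]

def similarity(a, b):
    """Toy Jaccard similarity over tag sets standing in for image features."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_candidates(ltm, cue, resolution):
    """Sketch of resolution-modulated LTM access: a high resolution keeps
    only the closest matches (focused, S2-like), while a low resolution
    admits remote associations (broad, S1-like)."""
    ranked = sorted(ltm, key=lambda item: similarity(item, cue), reverse=True)
    breadth = max(1, int(len(ranked) * (1.0 - resolution)))
    return ranked[:breadth]
```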
Evidence-based reasoning is at the core of many problem-solving and decision-making tasks in a wide variety of domains. Generalizing from the research and development of cognitive agents in several such domains, this paper presents progress toward a computational theory for the development of instructable cognitive agents for evidence-based reasoning tasks. The paper also illustrates the application of this theory to the development of four prototype cognitive agents in domains that are critical to the government and the public sector. Two agents function as cognitive assistants, one in intelligence analysis and the other in science education. The other two agents operate autonomously, one in cybersecurity and the other in intelligence, surveillance, and reconnaissance. The paper concludes with directions for future research on the proposed computational theory.
The accumulation of adaptations in an open-ended manner during lifetime learning is a holy grail in reinforcement learning, intrinsic motivation, artificial curiosity, and developmental robotics. We present a specification for a cognitive architecture capable of expressing an unlimited range of behaviors, and we give examples of how it can stochastically explore an interesting space of adjacent possible behaviors. There are two main novelties: the first is a proper definition of the fitness of self-generated games, such that interesting games are expected to evolve; the second is a modular and evolvable behavior language that has systematicity, productivity, and compositionality, i.e., it is a physical symbol system. A part of the architecture has already been implemented on a humanoid robot.
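One common way to formalize the fitness of self-generated games, used here only as a hedged illustration, is learning progress from the intrinsic-motivation literature; the paper's own definition may differ, and all names below are illustrative:

```python
# Hedged sketch: fitness of a self-generated game as learning progress,
# i.e., the recent drop in competence error. Mastered games (flat, low
# error) and unlearnable games (flat, high error) both score near zero;
# games the agent is currently learning score highest.
def learning_progress(errors, window=5):
    if len(errors) < 2 * window:
        return 0.0
    older = sum(errors[-2 * window:-window]) / window
    recent = sum(errors[-window:]) / window
    return older - recent  # positive while the agent is still improving

def select_game(candidate_games, error_history):
    """Pick the adjacent-possible game with the highest learning progress."""
    return max(candidate_games,
               key=lambda g: learning_progress(error_history[g]))
```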
Recent years have witnessed the fast development of the emerging topic of Graph Learning based Recommender Systems (GLRS). GLRS employ advanced graph learning approaches to model users' preferences and intentions, as well as items' characteristics, for recommendation. Unlike other RS approaches, including content-based filtering and collaborative filtering, GLRS are built on graphs in which the important objects, e.g., users, items, and attributes, are explicitly or implicitly connected. With the rapid development of graph learning techniques, exploring and exploiting homogeneous or heterogeneous relations in graphs is a promising direction for building more effective RS. In this paper, we provide a systematic review of GLRS, discussing how they extract important knowledge from graph-based representations to improve the accuracy, reliability, and explainability of recommendations. First, we characterize and formalize GLRS, then summarize and categorize the key challenges and main progress in this novel research area. Finally, we share some new research directions in this vibrant area.
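To make the graph view concrete, the following minimal sketch scores unseen items by counting two-hop paths through co-users in the user-item bipartite graph; real GLRS replace this counting with learned graph representations such as GNN embeddings, so this shows only the structure, not any surveyed method:

```python
from collections import defaultdict

def two_hop_scores(interactions, user):
    """interactions: dict mapping each user to their set of items.
    Score items unseen by `user` by how many user -> item -> co-user ->
    new-item paths reach them in the bipartite graph."""
    seen = interactions[user]
    scores = defaultdict(int)
    for other, items in interactions.items():
        if other == user:
            continue
        overlap = len(seen & items)      # shared neighbors = path count
        for item in items - seen:
            scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)

graph = {"u1": {"a", "b"}, "u2": {"a", "c"}, "u3": {"b", "c", "d"}}
print(two_hop_scores(graph, "u1"))  # -> ['c', 'd']
```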
The explorative mind-map is a dynamic framework that emerges automatically from the input it receives, unlike a verificative modeling system in which existing (human) thoughts are placed and connected by hand. Explorative mind-maps change their size continuously, adapting through the connectionist cells inside them; they process data input incrementally and offer many ways to interact with the user through an appropriate communication interface. In a cognitively motivated situation such as a conversation between partners, mind-maps become interesting because they can process stimulating signals whenever they occur. If these signals are close to the agent's own understanding of the world, the conversational partner automatically becomes more trusted than if the signals match the agent's own knowledge scheme poorly or not at all. In this position paper, we therefore motivate explorative mind-maps as a cognitive engine and propose them as a decision support engine to foster trust.
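A minimal sketch, assuming toy tag-set signals and an illustrative update rule, of how trust could grow with the match between incoming signals and the agent's own knowledge scheme:

```python
# Position-paper sketch; all names and the update rule are assumptions.
def match(signal, knowledge):
    """Toy overlap between an incoming signal and the mind-map's concepts,
    both represented as sets of tags."""
    return len(signal & knowledge) / len(signal) if signal else 0.0

def update_trust(trust, signal, knowledge, rate=0.1, threshold=0.5):
    """Move trust toward 1 for familiar signals, toward 0 for alien ones."""
    target = 1.0 if match(signal, knowledge) >= threshold else 0.0
    return trust + rate * (target - trust)

knowledge = {"music", "film", "travel"}
trust = 0.5
for signal in [{"film", "travel"}, {"finance"}, {"music"}]:
    trust = update_trust(trust, signal, knowledge)
    print(round(trust, 3))  # rises on matching signals, decays otherwise
```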