
Understanding understanding: a renormalization group inspired model of (artificial) intelligence

 Added by Antal Jakovac
 Publication date 2020
Research language: English





This paper is about the meaning of understanding in scientific and artificially intelligent systems. We give a mathematical definition of understanding in which, contrary to common wisdom, the probability space is defined on the input set, and the transformation performed by an intelligent actor is treated not as a loss of information but as a reorganization of the information in the framework of a new coordinate system. Following the ideas of the physical renormalization group, we introduce the notions of relevant and irrelevant parameters, discuss how different AI tasks can be interpreted through these concepts, and describe the process of learning in this language. We show how scientific understanding fits into this framework and demonstrate the difference between a scientific task and pattern recognition. We also introduce a measure of relevance, which is useful for performing lossy compression.
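The relevance-based compression idea can be illustrated with a small, self-contained sketch. The code below is not the paper's formalism; it merely uses the explained variance of a singular value decomposition as a stand-in relevance score, ranks the new coordinates by that score, and keeps only the most relevant directions as a lossy compression. The function name relevance_split and the 0.95 retention threshold are assumptions made purely for illustration.

    # Illustrative sketch only: NOT the paper's formalism. Explained variance
    # of an SVD serves as a stand-in "relevance" score; the most relevant
    # directions are kept (lossy compression), the rest are discarded.
    import numpy as np

    def relevance_split(X, keep_ratio=0.95):
        """Rank new coordinates by a toy relevance score and split them into
        relevant (kept) and irrelevant (discarded) parameters."""
        Xc = X - X.mean(axis=0)                      # center the inputs
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        relevance = s**2 / np.sum(s**2)              # proxy relevance per direction
        order = np.argsort(relevance)[::-1]
        cum = np.cumsum(relevance[order])
        k = int(np.searchsorted(cum, keep_ratio)) + 1
        relevant = Vt[order[:k]]                     # kept directions
        irrelevant = Vt[order[k:]]                   # discarded directions
        Z = Xc @ relevant.T                          # compressed representation
        return Z, relevant, irrelevant, relevance[order]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        X[:, 0] *= 5.0                               # make one direction dominant
        Z, R, I, rel = relevance_split(X)
        print("kept", R.shape[0], "of", X.shape[1], "directions;",
              "top relevance scores:", np.round(rel[:3], 3))

In this toy setting the kept directions play the role of relevant parameters and the discarded ones the role of irrelevant parameters; an actual realization of the paper's relevance measure would replace the variance proxy.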



Related research


In this document, we explore in more detail our published work (Komorowski, Celi, Badawi, Gordon, & Faisal, 2018) for the benefit of the AI in Healthcare research community. In that paper, we developed the AI Clinician system, which demonstrated how reinforcement learning could be used to make useful recommendations towards optimal treatment decisions from intensive care data. Since publication, a number of authors have reviewed our work (e.g. Abbasi, 2018; Bos, Azoulay, & Martin-Loeches, 2019; Saria, 2018). Given the differences between our framework and previous work, the fact that we are bridging two very different academic communities (intensive care and machine learning), and the impact of our work on a number of other areas with more traditional computer-based approaches (biosignal processing and control, biomedical engineering), we provide here additional details on our recent publication.
The ability to use symbols is the pinnacle of human intelligence, but has yet to be fully replicated in machines. Here we argue that the path towards symbolically fluent artificial intelligence (AI) begins with a reinterpretation of what symbols are, how they come to exist, and how a system behaves when it uses them. We begin by offering an interpretation of symbols as entities whose meaning is established by convention. But crucially, something is a symbol only for those who demonstrably and actively participate in this convention. We then outline how this interpretation thematically unifies the behavioural traits humans exhibit when they use symbols. This motivates our proposal that the field place a greater emphasis on symbolic behaviour rather than particular computational mechanisms inspired by more restrictive interpretations of symbols. Finally, we suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge. This approach will allow for AI to interpret something as symbolic on its own rather than simply manipulate things that are only symbols to human onlookers, and thus will ultimately lead to AI with more human-like symbolic fluency.
This article reviews the Once learning mechanism proposed 23 years ago and the subsequent successes of one-shot learning in image classification and You Only Look Once (YOLO) in object detection. Analyzing the current development of artificial intelligence (AI), the proposal is that AI should be clearly divided into the following categories: Artificial Human Intelligence (AHI), Artificial Machine Intelligence (AMI), and Artificial Biological Intelligence (ABI), which will also be the main directions of theory and application development for AI. As a watershed for the branches of AI, some classification standards and methods are discussed: 1) human-oriented, machine-oriented, and biologically oriented AI R&D; 2) information input processed by dimensionality increase or dimensionality reduction; 3) the use of one/few or large samples for knowledge learning.
The Light-Up puzzle, also known as the AKARI puzzle, has never been solved using modern artificial intelligence (AI) methods. Currently, the most widely used computational technique to autonomously develop solutions involves evolutionary algorithms. This project is an effort to apply new AI techniques to solve the Light-Up puzzle faster and more efficiently. The algorithms explored for producing optimal solutions include hill climbing, simulated annealing, a feed-forward neural network (FNN), and a convolutional neural network (CNN). Two variants were developed for both hill climbing and simulated annealing, using 2 actions (add or remove a light bulb) versus 3 actions (add, remove, or move a light bulb to a different cell). Both hill climbing and simulated annealing showed higher accuracy in the 3-action case. Simulated annealing significantly outperformed hill climbing, the FNN, the CNN, and an evolutionary algorithm, achieving 100% accuracy on 30 unique board configurations. Lastly, while the FNN and CNN showed low accuracies, their computational times were significantly shorter than those of the remaining algorithms. The GitHub repository for this project can be found at https://github.com/rperera12/AKARI-LightUp-GameSolver-with-DeepNeuralNetworks-and-HillClimb-or-SimulatedAnnealing.
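As a rough illustration of the search strategy described above, the following sketch implements a generic simulated-annealing loop with the three actions (add, remove, or move a bulb) on a simplified board. It is not the code from the linked repository: the cost function (unlit cells plus a clash penalty, with numbered-wall constraints omitted) and all parameter values are assumptions chosen for the example.

    # Illustrative sketch only; not the repository's implementation. A tiny
    # simulated-annealing loop for a simplified Light-Up board (walls block
    # light, numbered-wall constraints omitted) using the 3 actions described
    # above: add, remove, or move a light bulb.
    import math, random

    WALL, EMPTY = "#", "."

    def lit_cells(board, bulbs):
        lit, clashes = set(bulbs), 0
        for (r, c) in bulbs:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                while 0 <= rr < len(board) and 0 <= cc < len(board[0]) and board[rr][cc] != WALL:
                    if (rr, cc) in bulbs:
                        clashes += 1            # two bulbs illuminate each other
                    lit.add((rr, cc))
                    rr, cc = rr + dr, cc + dc
        return lit, clashes

    def cost(board, bulbs):
        lit, clashes = lit_cells(board, bulbs)
        unlit = sum(1 for r, row in enumerate(board) for c, ch in enumerate(row)
                    if ch == EMPTY and (r, c) not in lit)
        return unlit + 10 * clashes             # unlit cells plus clash penalty

    def anneal(board, steps=20000, t0=2.0, cooling=0.9995):
        empties = [(r, c) for r, row in enumerate(board)
                   for c, ch in enumerate(row) if ch == EMPTY]
        bulbs, t = set(), t0
        best, best_cost = set(bulbs), cost(board, bulbs)
        for _ in range(steps):
            cand = set(bulbs)
            action = random.choice(("add", "remove", "move"))
            free = [p for p in empties if p not in cand]
            if action == "add" and free:
                cand.add(random.choice(free))
            elif action == "remove" and cand:
                cand.remove(random.choice(sorted(cand)))
            elif action == "move" and cand and free:
                cand.remove(random.choice(sorted(cand)))
                cand.add(random.choice(free))
            delta = cost(board, cand) - cost(board, bulbs)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                bulbs = cand
                if cost(board, bulbs) < best_cost:
                    best, best_cost = set(bulbs), cost(board, bulbs)
            t *= cooling                        # geometric cooling schedule
        return best, best_cost

    if __name__ == "__main__":
        board = ["....#",
                 ".#...",
                 "....."]
        bulbs, c = anneal(board)
        print("bulbs:", sorted(bulbs), "final cost:", c)

Running the script prints a bulb placement and its cost; under this simplified rule set, a cost of zero means every white cell is lit and no two bulbs see each other.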
Steve DiPaola, Liane Gabora, 2018
The common view that our creativity is what makes us uniquely human suggests that incorporating research on human creativity into generative deep learning techniques might be a fruitful avenue for making their outputs more compelling and human-like. Using an original synthesis of Deep Dream-based convolutional neural networks and cognition-based computational art rendering systems, we show how honing theory, intrinsic motivation, and the notion of a seed incident can be implemented computationally, and demonstrate their impact on the resulting generative art. Conversely, we discuss how explorations in deep learning convolutional neural net generative systems can inform our understanding of human creativity. We conclude with ideas for further cross-fertilization between AI-based computational creativity and the psychology of creativity.


