
Physics of the mind: Concepts, emotions, language, cognition, consciousness, beauty, music, and symbolic culture

Added by Leonid Perlovsky
Publication date: 2010
Fields: Biology
Research language: English





Mathematical approaches to modeling the mind since the 1950s are reviewed. Difficulties faced by these approaches are related to the fundamental incompleteness of logic discovered by K. Gödel. A recent mathematical advancement, dynamic logic (DL), overcame these past difficulties. DL is described conceptually and related to neuroscience, psychology, cognitive science, and philosophy. DL models higher cognitive functions: concepts, emotions, instincts, understanding, imagination, intuition, and consciousness. DL is related to the knowledge instinct that drives our understanding of the world and serves as a foundation for higher cognitive functions. Aesthetic emotions and the perception of beauty are related to the everyday functioning of the mind. The article reviews mechanisms of human symbolic ability, language and cognition, and the joint evolution of the mind, consciousness, and cultures. It touches on the manifold of aesthetic emotions in music, their cognitive function, origin, and evolution. The article concentrates on elucidating the first principles and reviews aspects of the theory proven in laboratory research.
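The core idea of dynamic logic is a "vague-to-crisp" matching process: concept models start out highly fuzzy and gradually sharpen as their fit to the data improves, which avoids the combinatorial search that rigid logic-based matching requires. A minimal numerical sketch of that idea, using one-dimensional Gaussian "concepts" fit by soft assignment with an annealed fuzziness parameter (a deliberate simplification, not the paper's actual equations):

```python
import numpy as np

# Hedged sketch of the dynamic-logic "vague-to-crisp" process.
# Two 1-D "concept" models are matched to data drawn from two sources;
# the fuzziness (sigma) starts large and shrinks each iteration.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.3, 100), rng.normal(2, 0.3, 100)])

means = np.array([-0.5, 0.5])   # initially vague concept models
sigma = 4.0                     # high initial fuzziness

for _ in range(30):
    # Soft association of each datum with each concept (similarity measure).
    d2 = (data[:, None] - means[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)
    # Update each concept toward the data it best explains.
    means = (w * data[:, None]).sum(axis=0) / w.sum(axis=0)
    # The dynamic-logic step: fuzziness decreases as the fit sharpens.
    sigma = max(0.3, sigma * 0.85)

print(np.round(np.sort(means), 1))
```

Starting fuzzy and annealing toward crisp models is what lets the process converge without enumerating all possible data-to-concept pairings.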




Leonid Perlovsky 2010
The paper discusses relationships between aesthetics theory and mathematical models of mind. Mathematical theory describes abilities for concepts, emotions, instincts, imagination, adaptation, learning, cognition, language, an approximate hierarchy of the mind, and the evolution of these abilities. The knowledge instinct is the foundation of higher mental abilities and aesthetic emotions. Aesthetic emotions are present in every act of perception and cognition, and at the top of the mind hierarchy they become emotions of the beautiful. The learning ability is essential to everyday perception and cognition, as well as to the historical development of our understanding of the meaning of life. I discuss a controversy surrounding this issue. Conclusions based on cognitive and mathematical models confirm that judgments of taste are at once subjective and objective, and I discuss what this means. The paper relates cognitive and mathematical concepts to those of philosophy and aesthetics, from Plato to our days, clarifies cognitive mechanisms and functions of the beautiful, and resolves many difficulties of contemporary aesthetics.
Embodied cognition states that semantics is encoded in the brain as firing patterns of neural circuits, which are learned according to the statistical structure of human multimodal experience. However, each human brain is idiosyncratically biased, according to its subjective experience history, making this biological semantic machinery noisy with respect to the overall semantics inherent to media artifacts, such as music and language excerpts. We propose to represent shared semantics using low-dimensional vector embeddings by jointly modeling several brains from human subjects. We show these unsupervised efficient representations outperform the original high-dimensional fMRI voxel spaces in proxy music genre and language topic classification tasks. We further show that joint modeling of several subjects increases the semantic richness of the learned latent vector spaces.
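The modeling step this abstract describes can be pictured concretely: stack the high-dimensional voxel responses of several subjects for the same stimuli, then learn a single low-dimensional embedding of the shared structure. A minimal sketch with simulated data and SVD as the unsupervised reduction (the subject counts, dimensions, and SVD choice are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

# Hypothetical illustration: project high-dimensional "voxel" responses
# from several simulated subjects into one shared low-dimensional space.
rng = np.random.default_rng(0)
n_subjects, n_stimuli, n_voxels, k = 3, 40, 500, 8

# Shared latent semantics plus per-subject idiosyncratic encoding and noise.
latent = rng.normal(size=(n_stimuli, k))
subjects = []
for _ in range(n_subjects):
    mixing = rng.normal(size=(k, n_voxels))           # subject-specific encoding
    noise = 0.5 * rng.normal(size=(n_stimuli, n_voxels))
    subjects.append(latent @ mixing + noise)

# Joint modeling: concatenate subjects along the voxel axis, then reduce.
stacked = np.concatenate(subjects, axis=1)            # (n_stimuli, 3 * n_voxels)
stacked = stacked - stacked.mean(axis=0)
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
embedding = U[:, :k] * S[:k]                          # (n_stimuli, k) shared space

print(embedding.shape)  # (40, 8)
```

Because the subject-specific mixing matrices differ while the latent stimuli are shared, the leading components of the concatenated data recover the shared semantics rather than any one subject's idiosyncrasies, which is the intuition behind joint modeling improving downstream classification.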
We construct a complexity-based morphospace to study systems-level properties of conscious and intelligent systems. The axes of this space label three complexity types: autonomous, cognitive, and social. Given recent proposals to synthesize consciousness, a generic complexity-based conceptualization provides a useful framework for identifying the defining features of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, we take a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms measure on these scales. It turns out that awareness and wakefulness can be associated with computational and autonomous complexity, respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, and argue for the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity for describing consciousness: social complexity. Identifying these complexity types allows for a representation of both biological and synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. We identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence, and biomimetics.
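A morphospace of this kind amounts to placing each system at a point along the three complexity axes and comparing positions. A toy sketch with made-up coordinates (the system names and values are illustrative placeholders, not measurements from the paper):

```python
# Hypothetical sketch: example systems as points in the three-axis
# complexity morphospace (autonomous, cognitive, social).
# All coordinates below are invented for illustration.
morphospace = {
    "human":          (0.9, 0.9, 0.9),
    "chess engine":   (0.1, 0.7, 0.0),
    "synthetic cell": (0.6, 0.1, 0.1),
    "social robot":   (0.5, 0.6, 0.7),
}

AXES = ("autonomous", "cognitive", "social")

def dominant_axis(point):
    """Return the complexity type with the largest value for a system."""
    return AXES[max(range(3), key=lambda i: point[i])]

for name, point in morphospace.items():
    print(f"{name}: dominant complexity = {dominant_axis(point)}")
```

The point of such a representation is that biological and synthetic systems become directly comparable: a clinical awareness/wakefulness scale and an engineered system's specification both reduce to coordinates in the same space.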
Scientific studies of consciousness rely on objects whose existence is assumed to be independent of any consciousness. By contrast, we assume consciousness to be fundamental, and characterize one of its main features as being other-dependent. We set up a framework that naturally subsumes this feature by defining a compact closed category whose morphisms represent conscious processes. These morphisms are compositions of a set of generators, each specified by its relations with the other generators, and therefore co-dependent. The framework is general enough to fit well into a compositional model of consciousness. Interestingly, we also show how our proposal may become a step toward avoiding the hard problem of consciousness, and thereby address the combination problem of conscious experiences.
Richard Granger 2020
The machinery of the human brain -- analog, probabilistic, embodied -- can be characterized computationally, but what machinery confers what computational powers? Any such system can be abstractly cast in terms of two computational components: a finite state machine carrying out computational steps, whether via currents, chemistry, or mechanics; plus a set of allowable memory operations, typically formulated in terms of an information store that can be read from and written to, whether via synaptic change, state transition, or recurrent activity. Probing these mechanisms for their information content, we can capture the difference in computational power that various systems are capable of. Most human cognitive abilities, from perception to action to memory, are shared with other species; we seek to characterize those (few) capabilities that are ubiquitously present among humans and absent from other species. Three realms of formidable constraints -- a) measurable human cognitive abilities, b) measurable allometric anatomic brain characteristics, and c) measurable features of specific automata and formal grammars -- illustrate remarkably sharp restrictions on human abilities, unexpectedly confining human cognition to a specific class of automata (nested stack), which are markedly below Turing machines.
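The automata hierarchy the abstract invokes can be made concrete with a small example one rung up from finite state machines: a stack suffices to recognize a^n b^n, which no finite state machine can accept; nested-stack automata, where the abstract places human cognition, sit further up but still below Turing machines. A minimal pushdown-style recognizer (an illustrative sketch, not code from the paper):

```python
# Illustrative sketch: a stack-based recognizer for the language a^n b^n
# (n >= 1), which exceeds finite-state power but needs far less than a
# Turing machine -- one rung of the hierarchy discussed above.
def accepts_anbn(s: str) -> bool:
    stack = []
    phase = "a"              # must see all a's before any b's
    for ch in s:
        if ch == "a":
            if phase != "a": # an 'a' after a 'b' breaks the pattern
                return False
            stack.append("A")
        elif ch == "b":
            phase = "b"
            if not stack:    # more b's than a's
                return False
            stack.pop()
        else:
            return False     # alphabet is {a, b}
    # Accept only if at least one b was seen and the counts matched.
    return phase == "b" and not stack

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```

The stack here is the single memory operation that separates this machine from a finite state machine, mirroring the abstract's framing of computational power as "state machine plus allowable memory operations."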
