
Information Closure Theory of Consciousness

Added by Acer Chang Yu-Chan
Publication date: 2019
Field: Biology
Language: English





Information processing in neural systems can be described and analysed at multiple spatiotemporal scales. Generally, information at lower levels is more fine-grained and can be coarse-grained at higher levels. However, only information processed at specific levels appears to be available to conscious awareness. We have no direct experience of information at the level of individual neurons, which is noisy and highly stochastic, nor of more macro-level interactions such as interpersonal communication. Neurophysiological evidence suggests that conscious experiences co-vary with information encoded in coarse-grained neural states, such as the firing pattern of a population of neurons. In this article, we introduce a new informational theory of consciousness: the Information Closure Theory of Consciousness (ICT). We hypothesise that conscious processes are those that form non-trivial informational closure (NTIC) with respect to the environment at certain coarse-grained levels. This hypothesis implies that conscious experience is confined to the level at which informational closure is achieved, cut off from other coarse-grained levels. ICT proposes new quantitative definitions of both conscious content and conscious level. With these parsimonious definitions and a single hypothesis, ICT explains and predicts a range of phenomena associated with consciousness. Its implications naturally reconcile issues in many existing theories of consciousness and account for many of our intuitions about it. Most importantly, ICT demonstrates that information can be the common language between consciousness and physical reality.
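For intuition, ICT draws on a formal notion of informational closure from the systems-theory literature (Bertschinger and colleagues): a coarse-grained process Y is informationally closed with respect to its environment E when $I(E_t; Y_{t+1} | Y_t) = 0$, and the closure is non-trivial when Y nonetheless carries information about E, i.e. $I(E_t; Y_{t+1}) > 0$. The Python sketch below is our illustration, not code from the paper; the toy dynamics (a process that has synchronized to a deterministic cyclic environment) are invented so both terms are easy to read off.

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information I(X; Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * np.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def conditional_mi(xs, ys, zs):
    """I(X; Y | Z) via the chain rule: I(X; (Y, Z)) - I(X; Z)."""
    return mutual_info(xs, list(zip(ys, zs))) - mutual_info(xs, zs)

rng = np.random.default_rng(0)
T = 100_000
phase = rng.integers(0, 4)
e = (phase + np.arange(T)) % 4  # environment: deterministic 4-state cycle
y = e.copy()                    # process Y has synchronized to E and now
                                # runs the same dynamics internally

i_env = mutual_info(y[1:], e[:-1])            # I(Y_{t+1}; E_t): ~2 bits
leak = conditional_mi(y[1:], e[:-1], y[:-1])  # I(Y_{t+1}; E_t | Y_t): ~0
print(f"I(Y_t+1; E_t) = {i_env:.3f} bits; closure term = {leak:.3f} bits")
```

Because Y has internalized the environment's dynamics, the closure term vanishes while the process still carries two bits about the environment, which is the NTIC signature ICT associates with conscious processes.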



Related research

Scientific studies of consciousness rely on objects whose existence is assumed to be independent of any consciousness. By contrast, we assume consciousness to be fundamental, and that one of its main features is being other-dependent. We set up a framework which naturally subsumes this feature by defining a compact closed category whose morphisms represent conscious processes. These morphisms are compositions of a set of generators, each specified by its relations with the other generators, and therefore co-dependent. The framework is general enough to fit well into a compositional model of consciousness. Interestingly, we also show how our proposal may become a step towards avoiding the hard problem of consciousness, and thereby address the combination problem of conscious experiences.
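As background on the categorical vocabulary (standard definitions, not specific to this paper): a compact closed category is a symmetric monoidal category in which every object $A$ has a dual $A^*$ equipped with a unit $\eta_A : I \to A^* \otimes A$ and a counit $\varepsilon_A : A \otimes A^* \to I$ satisfying, up to the usual coherence isomorphisms, the snake identities

$(\varepsilon_A \otimes 1_A) \circ (1_A \otimes \eta_A) = 1_A, \qquad (1_{A^*} \otimes \varepsilon_A) \circ (\eta_A \otimes 1_{A^*}) = 1_{A^*}.$

It is this dual structure that allows morphisms built from co-dependent generators to be composed and rewired freely.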
We construct a complexity-based morphospace to study systems-level properties of conscious and intelligent systems. The axes of this space label three complexity types: autonomous, cognitive, and social. Given recent proposals to synthesize consciousness, a generic complexity-based conceptualization provides a useful framework for identifying the defining features of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, we take a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms measure on these scales. It turns out that awareness and wakefulness can be associated with computational and autonomous complexity, respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves and argue for the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third complexity type for describing consciousness: social complexity. Identifying these complexity types allows both biological and synthetic systems to be represented in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. We identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence, and biomimetics.
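To make the morphospace concrete, here is a minimal Python sketch. The three axes follow the paper, but the example systems and their coordinates are invented placeholders, not values reported by the authors.

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class MorphospacePoint:
    """A system located by the three complexity axes of the morphospace."""
    name: str
    autonomous: float  # wakefulness-like axis
    cognitive: float   # awareness / computational axis
    social: float      # game-theoretic, interactive axis

# Illustrative placements only: coordinates are invented placeholders.
systems = [
    MorphospacePoint("adult human",     0.9, 0.9, 0.9),
    MorphospacePoint("cognitive robot", 0.6, 0.5, 0.2),
    MorphospacePoint("virtual agent",   0.2, 0.4, 0.3),
    MorphospacePoint("engineered cell", 0.5, 0.1, 0.1),
]

def morphospace_distance(a: MorphospacePoint, b: MorphospacePoint) -> float:
    """Euclidean distance between two systems in the common morphospace."""
    return dist((a.autonomous, a.cognitive, a.social),
                (b.autonomous, b.cognitive, b.social))

human = systems[0]
for s in systems[1:]:
    print(f"{s.name:15s} distance from human: {morphospace_distance(human, s):.2f}")
```

Whatever the actual coordinates, placing biological and synthetic systems in one coordinate system is what enables the cross-domain comparisons behind the four-way taxonomy.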
Evidence suggests that disruptions of the posteromedial cortex (PMC) and of posteromedial corticothalamic connectivity contribute to disorders of consciousness (DOCs). While most previous studies treated the PMC as a whole, this structure is functionally heterogeneous. The present study investigated whether particular subdivisions of the PMC are specifically associated with DOCs. Participants were 21 patients in a vegetative state/unresponsive wakefulness syndrome (VS/UWS), 12 in a minimally conscious state (MCS), and 29 healthy controls. Each participant's PMC and thalamus were divided into distinct subdivisions based on their fiber tractography to each other and to default-mode regions, and white-matter integrity and brain activity between/within subdivisions were assessed. The thalamus was represented mainly in the dorsal and posterior portions of the PMC, and the white-matter tracts connecting these subdivisions to the thalamus had less integrity in VS/UWS patients than in MCS patients and healthy controls, as well as in patients who did not recover after 12 months compared with those who did. These structural substrates were validated by the finding of impaired functional fluctuations within the same PMC subdivisions. This study is the first to show that tracts from the dorsal and posterior subdivisions of the PMC to the thalamus contribute to DOCs.
The ability to integrate information in the brain is considered an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\Phi$) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that if a measure of $\Phi$ satisfies a mathematical property called submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of $\Phi$ is submodular, the later versions are not.
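To see why the MIP search is a bottleneck, consider a brute-force sketch in Python (restricted to bipartitions for brevity; the toy $\phi$ below is a placeholder, not a real integrated-information measure). An $n$-node system has $2^{n-1} - 1$ non-trivial bipartitions, so exhaustive search is exponential in $n$.

```python
from itertools import combinations

def bipartitions(nodes):
    """Yield every non-trivial bipartition (S, complement of S) of the nodes."""
    nodes = tuple(nodes)
    n = len(nodes)
    for r in range(1, n // 2 + 1):
        for subset in combinations(nodes, r):
            rest = tuple(x for x in nodes if x not in subset)
            if 2 * r == n and subset > rest:
                continue  # skip the mirror image of an equal-sized split
            yield subset, rest

def find_mip(nodes, phi):
    """Exhaustive MIP search: cost grows as 2**(n-1), hence the bottleneck."""
    return min(bipartitions(nodes), key=lambda pair: phi(*pair))

# Toy phi: a placeholder standing in for an integrated-information measure.
toy_phi = lambda a, b: abs(len(a) - len(b))
print(find_mip(range(8), toy_phi))  # -> ((0, 1, 2, 3), (4, 5, 6, 7))
```

In the submodular case, the optimization method alluded to above (Queyranne's algorithm for symmetric submodular minimization) instead finds the minimizer with on the order of $n^3$ evaluations of $\Phi$.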
