
Quantum-like model for unconscious-conscious interaction and emotional coloring of perceptions and other conscious experiences

Posted by: Andrei Khrennikov Yu
Publication date: 2021
Research field: Biology, Physics
Paper language: English
Author: Andrei Khrennikov





Quantum measurement theory is applied to quantum-like modeling of the coherent generation of perceptions and emotions and, more generally, of the emotional coloring of conscious experiences. In quantum theory, a system should be separated from an observer. The brain performs self-measurements. To model them, we split the brain into two subsystems, unconsciousness and consciousness, which correspond to a system and an observer, respectively. The states of perceptions and emotions are described through the tensor product decomposition of the unconscious state space; similarly, there are two classes of observables, for the conscious experiencing of perceptions and of emotions, respectively. Emotional coloring is coupled to quantum contextuality: emotional observables determine contexts. Such contextualization reduces the degeneracy of unconscious states. The quantum-like approach should be distinguished from the consideration of genuine quantum physical processes in the brain (cf. Penrose and Hameroff). In our approach, the brain is a macroscopic system whose information processing can be described by the formalism of quantum theory.
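To make the structure described in the abstract concrete, here is a schematic sketch in standard Dirac/tensor-product notation; the symbols below (H_perc, H_emot, A, B, P_b) are illustrative assumptions, not necessarily the paper's own notation.

```latex
% Illustrative notation only (H_perc, H_emot, A, B, P_b are assumed names).
% Unconscious state space factored into perception and emotion components:
\[
  \mathcal{H}_{\mathrm{unc}} = \mathcal{H}_{\mathrm{perc}} \otimes \mathcal{H}_{\mathrm{emot}},
  \qquad |\psi\rangle \in \mathcal{H}_{\mathrm{unc}}.
\]
% Two classes of observables, acting on the two tensor factors:
\[
  \underbrace{A \otimes I}_{\text{perception}}, \qquad
  \underbrace{I \otimes B}_{\text{emotion}}.
\]
% Emotional contextualization: conditioning on an emotional outcome b (projector P_b)
% updates the unconscious state and thereby reduces the degeneracy seen by A:
\[
  |\psi_b\rangle = \frac{(I \otimes P_b)\,|\psi\rangle}{\lVert (I \otimes P_b)\,|\psi\rangle \rVert}.
\]
```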




Read also

317 - Jeffrey L. Krichmar 2021
In 2006, during a meeting of a working group of scientists in La Jolla, California, at The Neurosciences Institute (NSI), Gerald Edelman described a roadmap towards the creation of a Conscious Artifact. As far as I know, this roadmap was not published. However, it did shape my thinking and that of many others in the years since that meeting. This short paper, which is based on my notes taken during the meeting, describes the key steps in this roadmap. I believe it is as groundbreaking today as it was more than 15 years ago.
We cast aspects of consciousness in axiomatic mathematical terms, using the graphical calculus of general process theories (a.k.a. symmetric monoidal categories and Frobenius algebras therein). This calculus exploits the ontological neutrality of process theories. A toy example using the axiomatic calculus is given to show the power of this approach, recovering other aspects of conscious experience, such as external and internal subjective distinction, privacy or unreadability of personal subjective experience, and phenomenal unity, one of the main issues for scientific studies of consciousness. In fact, these features naturally arise from the compositional nature of the axiomatic calculus.
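As background for the process-theoretic language used above, the two composition operations that such a graphical calculus manipulates are, in standard (not paper-specific) notation:

```latex
% Standard compositional structure of a symmetric monoidal category / process theory.
\[
  f : A \to B,\;\; g : B \to C
  \;\;\Longrightarrow\;\;
  g \circ f : A \to C
  \quad \text{(sequential composition: plug outputs into inputs)}
\]
\[
  f : A \to B,\;\; h : C \to D
  \;\;\Longrightarrow\;\;
  f \otimes h : A \otimes C \to B \otimes D
  \quad \text{(parallel composition: processes side by side)}
\]
```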
493 - Russell K. Standish 2013
Anthropic reasoning is a form of statistical reasoning based upon finding oneself to be a member of a particular reference class of conscious beings. By considering empirical distribution functions defined over animal life on Earth, we can deduce that the vast bulk of animal life is unlikely to be conscious.
Recent advances in deep learning have made available large, powerful convolutional neural networks (CNN) with state-of-the-art performance in several real-world applications. Unfortunately, these large-sized models have millions of parameters, thus they are not deployable on resource-limited platforms (e.g. where RAM is limited). Compression of CNNs thereby becomes a critical problem to achieve memory-efficient and possibly computationally faster model representations. In this paper, we investigate the impact of lossy compression of CNNs by weight pruning and quantization, and lossless weight matrix representations based on source coding. We tested several combinations of these techniques on four benchmark datasets for classification and regression problems, achieving compression rates up to $165$ times, while preserving or improving the model performance.
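As a generic illustration of the two lossy techniques mentioned (weight pruning and quantization), here is a minimal NumPy sketch; the function names and the 8-bit / 90%-sparsity settings are illustrative choices of mine, not the pipeline evaluated in the paper.

```python
# Minimal sketch of magnitude pruning plus uniform symmetric quantization of a
# weight matrix. Illustrative only; not the specific pipeline from the paper.
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity` of them become 0."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_symmetric(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Quantize to a symmetric uniform grid (zero maps exactly to zero) and dequantize."""
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(64, 64))        # stand-in for one CNN layer's weights
    compressed = quantize_symmetric(prune_by_magnitude(weights, sparsity=0.9))
    print("non-zero fraction:", np.count_nonzero(compressed) / compressed.size)
```

In the setting described by the abstract, a lossless source-coding step would then be applied on top of the sparse, quantized matrices; that part is omitted here.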
The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective, and whether something is really self-aware or merely a clever program that pretends to be so cannot be determined without access to accurate knowledge of the mechanism's inner workings. We review the current state-of-the-art regarding these developments and investigate common machine learning approaches with respect to their potential ability to become self-aware. We realise that many important algorithmic steps towards machines with a core consciousness have already been devised. For human-level intelligence, however, many additional techniques have to be discovered.