Immersion in a creative task can be an intimate experience. It can feel like a mystery: intangible, inexplicable, and beyond the reach of science. However, science is making exciting headway into understanding creativity. While the mind of a highly uncreative individual consists of a collection of items accumulated through direct experience and enculturation, the mind of a creative individual is self-organizing and self-mending; thus, experiences and items of cultural knowledge are thought through from different perspectives such that they cohere into a loosely integrated whole. The reweaving of items in memory is elicited by perturbations: experiences that increase psychological entropy because they are inconsistent with one's web of understandings. The process of responding to one perturbation often leads to other perturbations, i.e., other inconsistencies in one's web of understandings. Creative thinking often requires the capacity to shift between divergent and convergent modes of thought in response to the ever-changing demands of the creative task. Since uncreative individuals can reap the benefits of creativity by imitating creators, using their inventions, or purchasing their artworks, it is not necessary that everyone be creative. Agent-based computer models of cultural evolution suggest that society functions best with a mixture of creative and uncreative individuals. The ideal ratio of creativity to imitation increases in times of change, such as we are experiencing now. Therefore, it is important to educate the next generation in ways that foster creativity. The chapter concludes with suggestions for how educational systems can cultivate creativity.
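As a rough illustration of the kind of agent-based model alluded to above, the following sketch is a toy creator/imitator society; the update rules, peer-sampling scheme, and all parameter values are simplifying assumptions of this illustration, not the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(n_agents=200, creator_frac=0.3, steps=300, invent_rate=0.2):
    """Toy creator/imitator society (illustrative assumptions only).

    Creators occasionally produce a noisy variation on their current idea and
    keep it only if it is better; uncreative agents copy the best idea among a
    few randomly sampled peers.  Returns the final mean idea quality."""
    is_creator = rng.random(n_agents) < creator_frac
    quality = np.zeros(n_agents)            # quality of each agent's current idea
    for _ in range(steps):
        # creators: attempt an invention with probability invent_rate
        inventors = is_creator & (rng.random(n_agents) < invent_rate)
        quality[inventors] = np.maximum(
            quality[inventors],
            quality[inventors] + rng.normal(0.0, 1.0, inventors.sum()))
        # uncreative agents: imitate the best of five randomly sampled peers
        idx = np.flatnonzero(~is_creator)
        if idx.size:
            peers = rng.integers(0, n_agents, size=(idx.size, 5))
            quality[idx] = np.maximum(quality[idx], quality[peers].max(axis=1))
    return quality.mean()

for frac in (0.0, 0.3, 1.0):
    print(f"creator fraction {frac:.1f}: mean idea quality {run(creator_frac=frac):.1f}")
```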
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent, in the first place, to survival. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of operating in the vicinity of a second-order phase transition, such as signal enhancement, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term self-regulated in this context, instead of self-organized.
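A minimal sketch of the self-regulation argument (the branching-network setup, regulation rule, and all parameter values are illustrative assumptions, not the article's model): a weakly driven stochastic network whose coupling is slowly adjusted to keep the time-averaged activity stationary ends up, as a side effect, with a branching parameter close to the critical value of 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# A weakly driven stochastic network whose coupling m is slowly adjusted so
# that the time-averaged activity stays at a fixed target.  Stationarity of
# the activity, not signal enhancement, is what pushes m towards criticality.
N, steps = 1000, 20_000
target_rate = 0.02      # desired stationary fraction of active units
h = 1e-4                # weak external drive (spontaneous activation probability)
m = 0.5                 # coupling / branching parameter, initially subcritical
eta = 0.01              # slow homeostatic regulation rate

active = rng.random(N) < target_rate
rates, ms = [], []
for _ in range(steps):
    p = 1.0 - (1.0 - m / N) ** active.sum()     # activation by currently active peers
    active = (rng.random(N) < p) | (rng.random(N) < h)
    rate = active.mean()
    m += eta * (target_rate - rate)             # regulate the coupling, not the activity directly
    rates.append(rate)
    ms.append(m)

print(f"mean activity over last 5000 steps : {np.mean(rates[-5000:]):.4f} (target {target_rate})")
print(f"branching parameter m              : {np.mean(ms[-5000:]):.3f} (close to the critical value 1)")
```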
It is widely believed that complex cognitive phenomena require the perfectly orchestrated collaboration of many neurons. However, this is not what converging experimental evidence suggests. Single neurons, the so-called concept cells, may be responsible for complex tasks performed by an individual. Here, starting from a few first principles, we lay out physical foundations showing that concept cells are not only possible but highly likely, given that neurons work in a high-dimensional space.
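A toy numerical illustration of the high-dimensionality argument (the Gaussian "concept vectors" and the linear-threshold readout are assumptions of this sketch, not the paper's derivation): random representations in a high-dimensional space are nearly orthogonal, so a single threshold unit tuned to one concept responds to it alone among thousands.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random representations in a high-dimensional space are nearly orthogonal, so
# a single linear-threshold unit tuned to one concept can remain silent for
# thousands of other concepts.
D = 2000                                       # dimensionality of the representation space
K = 2000                                       # number of distinct concepts
concepts = rng.standard_normal((K, D)) / np.sqrt(D)

w = concepts[0]                                # the unit is tuned to concept 0
overlaps = concepts @ w                        # its response to every concept
theta = 0.5 * overlaps[0]                      # threshold at half the preferred response

print(f"response to the preferred concept : {overlaps[0]:.3f}")
print(f"largest response to any other     : {overlaps[1:].max():.3f}")
print(f"concepts crossing the threshold   : {(overlaps > theta).sum()} of {K}")
```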
During slow-wave sleep, the brain is in a self-organized regime in which slow oscillations (SOs) between up- and down-states propagate across the cortex. We address the mechanism of how SOs emerge and can recruit large parts of the brain using a whole-brain model based on empirical connectivity data. Individual brain areas generate SOs that are induced by a local adaptation mechanism. Optimal fits to human resting-state fMRI data and EEG during deep sleep are found at critical values of the adaptation strength where the model produces a balance between local and global SOs with realistic spatiotemporal statistics. Local oscillations occur more frequently, are shorter in duration, and have a lower amplitude. Global oscillations spread as waves of silence across the brain, traveling from anterior to posterior regions due to the heterogeneous network structure of the human brain. Our results demonstrate the utility of whole-brain models for explaining the origin of large-scale cortical oscillations and how they are shaped by the connectome.
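A single-node sketch of the adaptation mechanism (the rate equations and all parameter values are illustrative assumptions, not the fitted whole-brain model): a bistable excitatory population with slow spike-frequency adaptation alternates between up- and down-states at roughly slow-oscillation frequencies.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):                              # sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-x))

dt, T = 1.0, 20_000.0                  # time step and duration in ms
tau_r, tau_a = 10.0, 500.0             # fast population rate vs. slow adaptation
w, b, I0, noise = 10.0, 10.0, 0.0, 0.3 # recurrence, adaptation strength, drive, noise

r, a = 0.0, 0.0
trace = []
for _ in range(int(T / dt)):
    inp = w * r - b * a + I0 + noise * rng.standard_normal()
    r += dt / tau_r * (-r + f(inp))    # fast rate dynamics (bistable via strong recurrence)
    a += dt / tau_a * (-a + r)         # slow adaptation destabilises each state in turn
    trace.append(r)

trace = np.array(trace)
up = trace > 0.5
transitions = np.count_nonzero(np.diff(up.astype(int)) == 1)
print(f"fraction of time in the up-state         : {up.mean():.2f}")
print(f"down-to-up transitions in {T / 1000:.0f} seconds   : {transitions}")
```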
Neurons modeled by the Rulkov map display a variety of dynamic regimes that include tonic spikes and chaotic bursting. Here we study an ensemble of bursting neurons coupled with the Watts-Strogatz small-world topology. We characterize the sequences of bursts using the symbolic method of time-series analysis known as ordinal analysis, which detects nonlinear temporal correlations. We show that the probabilities of the different symbols distinguish different dynamical regimes, which depend on the coupling strength and the network topology. These regimes have different spatio-temporal properties that can be visualized with raster plots.
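A minimal sketch of the described setup (the network size, parameter values, and the choice of analysing the mean-field time series are assumptions of this illustration): Rulkov maps in the chaotic-bursting regime, diffusively coupled on a Watts-Strogatz graph, with Bandt-Pompe ordinal-pattern probabilities summarising the collective dynamics.

```python
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

# Rulkov maps in the chaotic-bursting regime, diffusively coupled through the
# fast variable on a Watts-Strogatz small-world graph.
N, steps, transient = 100, 20_000, 5_000
alpha, sigma, beta = 4.4, 0.001, 0.001       # chaotic-bursting parameter regime
eps = 0.02                                   # coupling strength

G = nx.watts_strogatz_graph(N, k=4, p=0.1, seed=1)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

x = rng.uniform(-1.0, 1.0, N)
y = np.full(N, -3.0)
mean_field = []
for n in range(steps):
    coupling = eps * (A @ x / deg - x)       # diffusive coupling via neighbours' fast variable
    x, y = alpha / (1.0 + x**2) + y + coupling, y - sigma * x - beta
    if n >= transient:
        mean_field.append(x.mean())

def ordinal_probs(series, d=3):
    """Probabilities of length-d ordinal (Bandt-Pompe) patterns of a time series."""
    counts = dict.fromkeys(itertools.permutations(range(d)), 0)
    for i in range(len(series) - d + 1):
        counts[tuple(np.argsort(series[i:i + d]))] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

for pattern, prob in ordinal_probs(np.array(mean_field)).items():
    print(pattern, f"{prob:.3f}")
```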
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derive from an energy function and therefore always converge to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a distance matrix in a high-dimensional space, with the neurons labelled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the non-convolutional part of the connectivity, which is minimized in the process. Third, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps, which we motivate using an energy-efficiency argument based on wire-length minimization. Finally, we show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex. In addition, we establish that the non-convolutional (or long-range) connectivity is patchy and co-aligned in the case of orientation learning.
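A toy version of the learning-plus-embedding idea (the ring of tuning curves, the Hebbian averaging rule, and the use of classical MDS are simplifying assumptions of this sketch, not the paper's slow/fast Hopfield analysis): co-activity accumulated over many localized stimuli is embedded with multidimensional scaling, recovering the hidden ring geometry of the input space.

```python
import numpy as np

rng = np.random.default_rng(5)

# Neurons tuned to hidden positions on a ring; recurrent weights accumulate by
# slow Hebbian learning over many localized stimuli; classical MDS applied to
# the learned connectivity then recovers the hidden ring geometry.
N = 120
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # hidden neuron positions

def response(s, kappa=5.0):
    """Bump of activity centred on stimulus position s."""
    return np.exp(kappa * (np.cos(theta - s) - 1.0))

# slow Hebbian learning, written as an average of co-activity over stimuli
W = np.zeros((N, N))
n_stimuli = 2000
for _ in range(n_stimuli):
    r = response(rng.uniform(0.0, 2.0 * np.pi))
    W += np.outer(r, r) / n_stimuli

# classical MDS: turn the learned similarities into distances and embed in 2-D
d2 = np.diag(W)[:, None] + np.diag(W)[None, :] - 2.0 * W    # squared distances
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ d2 @ J
evals, evecs = np.linalg.eigh(B)
coords = evecs[:, -2:] * np.sqrt(evals[-2:])                # top-2 MDS coordinates

recovered = np.unwrap(np.arctan2(coords[:, 1], coords[:, 0]))
gaps = np.abs(np.diff(recovered))
print("top MDS eigenvalues:", np.round(evals[-4:][::-1], 3))
print(f"largest gap between neighbouring neurons in the embedding: {gaps.max():.3f}"
      f" (ideal ring spacing 2*pi/N = {2.0 * np.pi / N:.3f})")
```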