
Crowding Reveals Fundamental Differences in Local vs. Global Processing in Humans and Machines

Added by Alban Bornet
Publication date: 2020
Field: Biology
Language: English





Feedforward Convolutional Neural Networks (ffCNNs) have become state-of-the-art models both in computer vision and neuroscience. However, human-like performance of ffCNNs does not necessarily imply human-like computations. Previous studies have suggested that current ffCNNs do not make use of global shape information. However, it is currently unclear whether this reflects fundamental differences between ffCNN and human processing or is merely an artefact of how ffCNNs are trained. Here, we use visual crowding as a well-controlled, specific probe to test global shape computations. Our results provide evidence that ffCNNs cannot produce human-like global shape computations for principled architectural reasons. We lay out approaches that may address shortcomings of ffCNNs to provide better models of the human visual system.
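As a rough illustration of how crowding can be used as a probe, the sketch below renders a vernier-like target alone and with square flankers, then compares the feature maps produced by a small feedforward convolutional stack. The stimulus layout, the toy network, and the distance readout are illustrative assumptions, not the stimuli or models used in the study.

```python
# Minimal sketch of a crowding probe for a feedforward network
# (assumes PyTorch; stimuli and network are illustrative, not the paper's).
import torch
import torch.nn.functional as F


def vernier(offset_px=2, size=64):
    """Render a crude vernier target: two vertical bars with a horizontal offset."""
    img = torch.zeros(size, size)
    cx, cy = size // 2, size // 2
    img[cy - 12:cy, cx - offset_px] = 1.0   # upper bar, shifted left
    img[cy:cy + 12, cx + offset_px] = 1.0   # lower bar, shifted right
    return img


def add_flankers(img, spacing=14):
    """Surround the target with simple square flankers."""
    out, c = img.clone(), img.shape[0] // 2
    for dx in (-spacing, spacing):
        x0, x1 = c + dx - 5, c + dx + 5
        out[c - 5, x0:x1 + 1] = 1.0         # top edge
        out[c + 5, x0:x1 + 1] = 1.0         # bottom edge
        out[c - 5:c + 6, x0] = 1.0          # left edge
        out[c - 5:c + 6, x1] = 1.0          # right edge
    return out


# A toy feedforward stack standing in for an ffCNN's early layers.
ffcnn = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 5, padding=2), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
    torch.nn.Conv2d(8, 16, 5, padding=2), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
)

alone = vernier().unsqueeze(0).unsqueeze(0)                  # shape (1, 1, 64, 64)
crowded = add_flankers(vernier()).unsqueeze(0).unsqueeze(0)

with torch.no_grad():
    change = F.mse_loss(ffcnn(alone), ffcnn(crowded)).item()

# A large change means the flankers strongly perturb the target's representation;
# human "uncrowding" by global flanker configurations is what such probes test for.
print("feature change with flankers:", change)
```

The informative comparison in the human data is between flanker configurations that do or do not form global shapes; varying the flanker arrangement in a sketch like this is how one would check whether a feedforward model shows the same uncrowding.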



Related research

Matej Hoffmann (2020)
Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth or failures, and using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent - yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. In the biological realm, evidence has been accumulated by diverse disciplines giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, over body image and body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman - the man who lost his body - in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I will then detail how robots can inform the biological sciences dealing with body representations and, finally, I will study which of the features of the body in the brain should be transferred to robots, giving rise to more adaptive, resilient, self-calibrating machines.
A. Rodriguez, R. Granger (2020)
Visual clutter affects our ability to see: objects that would be identifiable on their own, may become unrecognizable when presented close together (crowding) -- but the psychophysical characteristics of crowding have resisted simplification. Image properties initially thought to produce crowding have paradoxically yielded unexpected results, e.g., adding flanking objects can ameliorate crowding (Manassi, Sayim et al., 2012; Herzog, Sayim et al., 2015; Pachai, Doerig et al., 2016). The resulting theory revisions have been sufficiently complex and specialized as to make it difficult to discern what principles may underlie the observed phenomena. A generalized formulation of simple visual contrast energy is presented, arising from straightforward analyses of center and surround neurons in the early visual stream. Extant contrast measures, such as RMS contrast, are easily shown to fall out as reduced special cases. The new generalized contrast energy metric surprisingly predicts the principal findings of a broad range of crowding studies. These early crowding phenomena may thus be said to arise predominantly from contrast, or are, at least, severely confounded by contrast effects. (These findings may be distinct from accounts of other, likely downstream, configural or semantic instances of crowding, suggesting at least two separate forms of crowding that may resist unification.) The new fundamental contrast energy formulation provides a candidate explanatory framework that addresses multiple psychophysical phenomena beyond crowding.
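To make the contrast idea concrete, the snippet below computes the classic RMS contrast of an image patch alongside a toy center-surround "contrast energy" built from a difference of Gaussians. The DoG energy is only a stand-in for the generalized formulation described in the abstract, which is not reproduced here, and the filter scales are arbitrary.

```python
# Illustrative contrast measures (not the paper's generalized metric).
import numpy as np
from scipy.ndimage import gaussian_filter


def rms_contrast(img):
    """Classic RMS contrast: standard deviation of luminance over its mean."""
    img = img.astype(float)
    return img.std() / (img.mean() + 1e-12)


def center_surround_energy(img, sigma_center=1.0, sigma_surround=3.0):
    """Sum of squared difference-of-Gaussians responses, a toy contrast energy."""
    img = img.astype(float)
    dog = gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
    return float(np.sum(dog ** 2))


patch = np.random.default_rng(0).random((64, 64))
print(rms_contrast(patch), center_surround_energy(patch))
```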
Most humans have the good fortune to live their lives embedded in richly structured social groups. Yet, it remains unclear how humans acquire knowledge about these social structures to successfully navigate social relationships. Here we address this knowledge gap with an interdisciplinary neuroimaging study drawing on recent advances in network science and statistical learning. Specifically, we collected BOLD MRI data while participants learned the community structure of both social and non-social networks, in order to examine whether the learning of these two types of networks was differentially associated with functional brain network topology. From the behavioral data in both tasks, we found that learners were sensitive to the community structure of the networks, as evidenced by a slower reaction time on trials transitioning between clusters than on trials transitioning within a cluster. From the neuroimaging data collected during the social network learning task, we observed that the functional connectivity of the hippocampus and temporoparietal junction was significantly greater when transitioning between clusters than when transitioning within a cluster. Furthermore, temporoparietal regions of the default mode network were more strongly connected to the hippocampus and to somatomotor and visual regions during the social task than during the non-social task. Collectively, our results identify neurophysiological underpinnings of social versus non-social network learning, extending our knowledge about the impact of social context on learning processes. More broadly, this work offers an empirical approach to study the learning of social network structures, which could be fruitfully extended to other participant populations, various graph architectures, and a diversity of social contexts in future studies.
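The between- versus within-cluster contrast in the learning task can be pictured with a toy random walk on a modular graph, as in the hypothetical sketch below; the graph size, cluster layout, and walk length are illustrative and do not reflect the study's actual stimuli.

```python
# Toy random walk on a two-cluster graph, labeling between-cluster transitions
# (illustrative only; not the study's graphs or task parameters).
import numpy as np

rng = np.random.default_rng(1)
n = 10                                      # nodes 0-4 = cluster A, 5-9 = cluster B
adj = np.zeros((n, n), dtype=int)
for cluster in (range(0, 5), range(5, 10)):
    for i in cluster:
        for j in cluster:
            if i != j:
                adj[i, j] = 1               # dense within-cluster edges
adj[4, 5] = adj[5, 4] = 1                   # single bridge between clusters

node, walk = 0, [0]
for _ in range(30):
    node = int(rng.choice(np.flatnonzero(adj[node])))
    walk.append(node)

between = [(a, b) for a, b in zip(walk, walk[1:]) if (a < 5) != (b < 5)]
print("between-cluster transitions:", between)
```

In the behavioral data, these between-cluster transitions were the trials on which participants responded more slowly.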
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps reveal the brain's underlying computations. We investigate how the nematode C. elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. In the prevailing picture of the touch circuit, the animal's behavior is determined by which neurons are stimulated and by the stimulus amplitude. In contrast, we find that the behavioral response is tuned to temporal properties of mechanosensory signals, such as their integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to the stimulus.
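The linear-nonlinear model mentioned at the end can be sketched as a temporal filter followed by a static nonlinearity. In the snippet below, the exponential filter, logistic output, and parameter values are generic placeholders, not the model fitted in the study.

```python
# Generic linear-nonlinear (LN) model sketch: temporal filter + static nonlinearity.
import numpy as np


def ln_response(stimulus, tau=2.0, gain=4.0, threshold=0.5, dt=0.1):
    """Convolve the stimulus with an exponential filter, then apply a sigmoid."""
    t = np.arange(0.0, 10.0 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                                     # unit-area filter
    drive = np.convolve(stimulus, kernel)[: len(stimulus)]     # causal linear stage
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))   # static nonlinearity


# Example: a step of "mechanosensory" input delivered for 10 s at dt = 0.1 s.
stim = np.zeros(300)
stim[100:200] = 1.0
print(ln_response(stim).max())              # peak predicted response probability
```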
Hideaki Shimazaki (2019)
How do organisms recognize their environment by acquiring knowledge about the world, and what actions do they take based on this knowledge? This article examines hypotheses about organisms' adaptation to the environment from machine learning, information-theoretic, and thermodynamic perspectives. We start by constructing a hierarchical model of the world as an internal model in the brain, and review standard machine learning methods that infer causes by approximately learning the model under the maximum likelihood principle. This in turn provides an overview of the free energy principle for an organism, a hypothesis that explains perception and action from the principle of least surprise. Treating this statistical learning as communication between the world and the brain, learning is interpreted as a process that maximizes information about the world. We investigate how classical theories of perception, such as the infomax principle, relate to learning the hierarchical model. We then present an approach to recognition and learning based on thermodynamics, showing that adaptation by causal learning results in the second law of thermodynamics, whereas inference dynamics that fuses observation with prior knowledge forms a thermodynamic process. These perspectives provide a unified view of the adaptation of organisms to the environment.
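The free energy in question is usually the variational free energy; a standard textbook form (with generic notation, not necessarily the article's own) is:

```latex
\[
  F(q) \;=\; \mathbb{E}_{q(z)}\!\bigl[\ln q(z) - \ln p(x, z)\bigr]
        \;=\; \mathrm{D}_{\mathrm{KL}}\!\bigl(q(z)\,\|\,p(z \mid x)\bigr) - \ln p(x)
        \;\ge\; -\ln p(x).
\]
```

Because the KL term is non-negative, minimizing F over the approximate posterior q(z) both improves the inference of hidden causes z and tightens a bound on the surprise -ln p(x), which is the sense in which perception and action follow a principle of least surprise.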