Rotating Snakes is a visual illusion in which a stationary design is perceived to move dramatically. In the current study, the mechanism that generates this motion perception was analyzed using a combination of psychophysics experiments and deep neural network models that mimic human vision. We prepared three- and four-color illusion-like designs spanning a wide range of luminance and measured the strength of the rotational motion they induced. As a result, we discovered a fundamental law: the effect of the four-color snake rotation illusion is enhanced by the combination of the two perceptual motion vectors produced by the two constituent three-color designs. In the years to come, deep neural network technology will be one of the most effective tools not only for engineering applications but also for human perception research.
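The combination law can be made concrete with a minimal numerical sketch; the vectors below are hypothetical illustrative values, not measured data from the study.

```python
import numpy as np

# Illustration of the reported law with hypothetical numbers: the
# perceived motion of the four-color design is predicted by combining
# (summing) the motion vectors induced by its two three-color parts.
v_a = np.array([0.8, 0.3])        # vector from three-color design A (a.u.)
v_b = np.array([0.5, 0.7])        # vector from three-color design B (a.u.)

v_four = v_a + v_b                # predicted vector for four-color design
speed = np.linalg.norm(v_four)    # predicted illusion strength
angle = np.degrees(np.arctan2(v_four[1], v_four[0]))

print(f"predicted: {v_four}, strength {speed:.2f}, direction {angle:.1f} deg")
```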
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models, and hard-to-predict environment dynamics. We quantify the importance of continuous, real-time perception and its tight integration with reactive motion generation methods in dynamic manipulation scenarios. We compare three different systems that are instantiations of the most common architectures in the field: (i) a traditional sense-plan-act approach that is still widely used, (ii) a myopic controller that only reacts to local environment dynamics, and (iii) a reactive planner that integrates feedback control and motion optimization. All architectures rely on the same components for real-time perception and reactive motion generation to allow a quantitative evaluation. We extensively evaluate the systems on a real robotic platform in four scenarios that exhibit either a challenging workspace geometry or a dynamic environment. In 333 experiments, we quantify the robustness and accuracy that are due to integrating real-time feedback at different time scales into a reactive motion generation system. We also report on the lessons learned for system building.
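To make the three architectures concrete, here is a minimal toy sketch (not the paper's system; all names, dynamics, and values are illustrative) contrasting the control loops on a 1-D tracking task with a drifting goal.

```python
import random

class World:
    """Toy 1-D dynamic scene: the goal drifts a little at every time step."""
    def __init__(self, goal=10.0, drift=0.3):
        self.goal, self.drift = goal, drift
    def step(self):
        self.goal += random.uniform(-self.drift, self.drift)
    def sense(self):
        return self.goal

def sense_plan_act(pos, world, steps=20):
    """(i) Perceive once, plan the full trajectory, execute open-loop."""
    delta = (world.sense() - pos) / steps      # one-shot perception + plan
    for _ in range(steps):
        world.step()
        pos += delta                           # no feedback during execution
    return pos

def myopic(pos, world, steps=20):
    """(ii) React greedily to the latest observation; no lookahead."""
    for _ in range(steps):
        world.step()
        pos += 0.5 * (world.sense() - pos)     # continuous local reaction
    return pos

def reactive_planner(pos, world, steps=20):
    """(iii) Re-optimize the remaining motion at every control cycle."""
    for k in range(steps):
        world.step()
        pos += (world.sense() - pos) / (steps - k)  # replanned step
    return pos

for ctrl in (sense_plan_act, myopic, reactive_planner):
    w = World()
    print(f"{ctrl.__name__:16s} final error: {abs(ctrl(0.0, w) - w.goal):.3f}")
```

Running the sketch, the open-loop pipeline accumulates whatever goal drift occurs during execution, while both reactive loops track the moving goal, with the replanning loop tracking it most closely.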
Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movement following the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content, i.e., Motion Clouds. These stimuli are defined using a generative model that is based on controlled experimental parameterization. We show that Motion Clouds correspond to a dense mixture of localized moving gratings with random positions. Their global envelope resembles natural stimulation, with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition.
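A minimal sketch of such a generative model follows, assuming Gaussian envelopes around a preferred spatial frequency and around the "speed plane" of a full-field translation; parameter names and values here are illustrative assumptions, and the authors' open-source Python implementation should be consulted for the reference version.

```python
import numpy as np

# Motion-Cloud-style generator: filter random phases in the 3-D Fourier
# domain with an envelope concentrated on the "speed plane"
# f_t = -(Vx*fx + Vy*fy) and on a band of spatial frequencies.
def motion_cloud(N=128, T=64, V=(1.0, 0.0), sf_0=0.15, B_sf=0.05,
                 B_V=0.05, seed=0):
    fx, fy, ft = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N),
                             np.fft.fftfreq(T), indexing='ij')
    f_r = np.sqrt(fx**2 + fy**2)                 # radial spatial frequency
    f_r[f_r == 0] = np.inf                       # discard the DC component
    # Gaussian band around the preferred spatial frequency sf_0:
    env_sf = np.exp(-0.5 * ((f_r - sf_0) / B_sf)**2)
    # Gaussian "speed plane": energy near ft = -(Vx*fx + Vy*fy), i.e. a
    # full-field translation at velocity V (an approximate retinal slip):
    env_v = np.exp(-0.5 * ((ft + V[0]*fx + V[1]*fy) / (B_V * f_r))**2)
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random((N, N, T)))  # random phases
    movie = np.fft.ifftn(env_sf * env_v * phase).real   # back to space-time
    return movie / movie.std()                          # normalize contrast

frames = motion_cloud()     # frames[..., t] is one N x N texture image
print(frames.shape)         # (128, 128, 64)
```

Because only the Fourier envelope is controlled and the phases are random, each seed yields a statistically equivalent but visually distinct movie, which is what makes the information content of the stimulus set controllable.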
In this article, we present experimental results obtained with the Lenay device for a localization task (distal perception) and an orientation-estimation task (proximal perception of the orientation of a cylinder in a plane). In the latter experiment, a virtual version of the Lenay device was used. These results are used to illustrate methodological and theoretical proposals for the study of the cognitive and sensorimotor processes involved in perception.
Paul Bach-y-Rita [1] is the pioneer of sensory substitution. He began using visuo-tactile prostheses thirty years ago with the aim of assisting blind people. These prostheses, called Tactile Vision Substitution Systems (TVSS), transform sensory input from a given modality (vision) into another modality (touch). These new systems seemed to induce quasi-visual perceptions. One of the author's interests was the understanding of the coupling between actions and sensations in perceptual mechanisms [4]. Throughout his research, he noticed that subjects had to move the camera themselves in order to recognise a 3D target object or a figure placed in front of them. Our work consists in understanding how the sensory information provided by a visuo-tactile prosthesis can be used for motor behaviour. To this end, we used the simplest possible substitution device (one photoreceptor coupled with one tactile stimulator) in order to control and enrich our knowledge of the ties between perception and action.
With the rising societal demand for more information-processing capacity with lower power consumption, alternative architectures inspired by the parallelism and robustness of the human brain have recently emerged as possible solutions. In particular, spiking neural networks (SNNs) offer a bio-realistic approach, relying on pulses analogous to action potentials as units of information. While software-encoded networks provide flexibility and precision, they are often computationally expensive. As a result, hardware SNNs based on the spiking dynamics of a device or circuit represent an increasingly appealing direction. Here, we propose to use superconducting nanowires as a platform for the development of an artificial neuron. Building on an architecture first proposed for Josephson junctions, we rely on the intrinsic nonlinearity of two coupled nanowires to generate spiking behavior, and use electrothermal circuit simulations to demonstrate that the nanowire neuron reproduces multiple characteristics of biological neurons. Furthermore, by harnessing the nonlinearity of the superconducting nanowires' inductance, we develop a design for a variable inductive synapse capable of both excitatory and inhibitory control. We demonstrate that this synapse design supports direct fanout, a feature that has been difficult to achieve in other superconducting architectures, and that the nanowire neuron's nominal energy performance is competitive with that of current technologies.
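The paper's simulations are electrothermal and device-level; as a schematic stand-in, the following generic leaky integrate-and-fire sketch illustrates the kind of spiking characteristic referred to (tonic spiking whose rate grows with input drive). It is an analogy under stated assumptions, not the nanowire circuit model, and all values are in arbitrary units.

```python
import numpy as np

# Generic leaky integrate-and-fire neuron: a schematic illustration of
# spiking behavior, NOT the electrothermal nanowire circuit model.
def lif_spikes(i_in, dt=0.01, tau=1.0, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i in enumerate(i_in):
        v += dt * (-v + i) / tau      # leaky integration of input current
        if v >= v_th:                 # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset               # reset after the spike
    return spikes

# Constant drive above threshold produces tonic spiking; stronger drive
# spikes faster, one of the rate-coding characteristics of neurons.
for drive in (1.2, 2.0, 4.0):
    n = len(lif_spikes(np.full(5000, drive)))
    print(f"drive {drive:.1f} -> {n} spikes in 50 time units")
```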