Simulating the Behavior of an Integrate-and-Fire Neuron


Abstract in English

Video image data can be analyzed and processed in many ways. This research explores the extent to which spiking neurons designed along the lines of the Hodgkin-Huxley model are suitable for this task. The simulations reported here drive integrate-and-fire neurons with constant and alternating input currents, as well as with pixel-intensity-driven inputs. The current simulation software employs 64 independently operating spiking neurons that process image data sampled every 25 ms. To characterize the response of these neurons, experiments were run on 100 digital images covering different illumination, contrast, and saturation conditions. The results show that the integrate-and-fire neuron is highly sensitive to changes in pixel intensity when its parameters are properly tuned. A neural network built from such neurons is therefore well suited to applications such as saliency maps, which depend heavily on the intensity values of a set of pixels.
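A minimal sketch of how such a pixel-driven simulation might look is given below, using a standard leaky integrate-and-fire update. Only the 25 ms frame interval and the 64-neuron count come from the abstract; the parameter values (membrane time constant, threshold, reset potential, input scaling) and the assumption of one neuron per pixel of an 8x8 patch are illustrative and not taken from the paper.

import numpy as np

# Illustrative parameters (assumptions, not values reported in the paper).
TAU_M = 10.0      # membrane time constant (ms)
R_M = 1.0         # membrane resistance (arbitrary units)
V_TH = 1.0        # firing threshold
V_RESET = 0.0     # potential after a spike
DT = 1.0          # Euler integration step (ms)
FRAME_MS = 25     # a new image frame arrives every 25 ms (from the abstract)
N_NEURONS = 64    # 64 independent neurons (from the abstract), here one per pixel of an 8x8 patch

def simulate(frames):
    """Drive N_NEURONS leaky integrate-and-fire neurons with pixel-intensity currents.

    frames: array of shape (n_frames, N_NEURONS), intensities scaled to [0, 1].
    Returns a boolean spike raster of shape (n_steps, N_NEURONS).
    """
    v = np.full(N_NEURONS, V_RESET)
    steps_per_frame = int(FRAME_MS / DT)
    raster = []
    for frame in frames:
        current = frame  # pixel intensity used directly as the input current
        for _ in range(steps_per_frame):
            # Euler step of dV/dt = (-V + R_m * I) / tau_m
            v += DT * (-v + R_M * current) / TAU_M
            spiked = v >= V_TH
            v[spiked] = V_RESET
            raster.append(spiked.copy())
    return np.array(raster)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_frames = rng.random((4, N_NEURONS))  # four random 8x8 frames
    spikes = simulate(demo_frames)
    print("spike counts per neuron:", spikes.sum(axis=0))

Because the firing rate grows with the injected current, brighter pixels produce more spikes per frame, which is the sensitivity to pixel intensity that the abstract reports and that a saliency-map application would exploit.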

