
Remarks on Feedforward Circuits, Adaptation, and Pulse Memory

Published by: Eduardo D. Sontag
Publication date: 2009
Research field: Biology
Paper language: English
Author: Eduardo D. Sontag





This note studies feedforward circuits as models for perfect adaptation to step signals in biological systems. A global convergence theorem is proved in a general framework, which includes examples from the literature as particular cases. A notable aspect of these circuits is that they do not adapt to pulse signals, because they display a memory phenomenon. Estimates are given of the magnitude of this effect.
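
The step-adaptation and pulse-memory behaviour described in the abstract can be reproduced with a toy simulation. The sketch below uses the classic "sniffer" incoherent feedforward loop as a stand-in model; the equations, rate constants, and two-pulse protocol are illustrative assumptions rather than the note's own construction.

```python
# Minimal sketch (assumed model): the classic "sniffer" incoherent
# feedforward loop, not necessarily the exact systems covered by the note.
#   dx/dt = k1*u - k2*x      slow internal variable tracking the input u
#   dy/dt = k3*u - k4*x*y    output; steady state y* = k2*k3/(k1*k4) for
#                            every constant u, i.e. perfect step adaptation
import numpy as np

k1 = k2 = k3 = k4 = 1.0
dt = 0.01
t = np.arange(0.0, 60.0, dt)

def simulate(u_of_t):
    x, y = 1.0, 1.0                     # steady state pre-adapted to u = 1
    ys = np.empty_like(t)
    for i, ti in enumerate(t):
        u = u_of_t(ti)
        x += dt * (k1 * u - k2 * x)
        y += dt * (k3 * u - k4 * x * y)
        ys[i] = y
    return ys

step = lambda ti: 5.0 if ti >= 10.0 else 1.0
# Two identical pulses; leftover memory in x shrinks the second response.
pulse = lambda ti: 5.0 if 10.0 <= ti < 12.0 or 14.0 <= ti < 16.0 else 1.0

y_step, y_pulse = simulate(step), simulate(pulse)
print("step : final y =", round(y_step[-1], 3), "(returns to baseline 1.0)")
print("pulse: peak 1 =", round(y_pulse[(t >= 10) & (t < 14)].max(), 3),
      "peak 2 =", round(y_pulse[t >= 14].max(), 3))
```

For a step, the output returns exactly to its pre-stimulus value. For two identical pulses, the slow variable x has not relaxed by the time the second pulse arrives, so the second response is smaller: the circuit "remembers" the first pulse.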


Read also

This note works out an advection-diffusion approximation to the density of a population of E. coli bacteria undergoing chemotaxis in one space dimension. Simulations show that the approximation's predictions are highly accurate in a shallow-gradient regime.
Eduardo D. Sontag, 2016
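
An advection-diffusion density model of the kind the abstract describes can be integrated with a basic explicit finite-difference scheme. The sketch below is a generic Keller-Segel-style discretization in one dimension; the diffusivity, chemotactic sensitivity, and attractant profile are made-up values for illustration, not the paper's fitted model.

```python
# Sketch of an explicit finite-difference update for a 1-D bacterial
# density rho(x, t) obeying an advection-diffusion (Keller-Segel-style)
# equation:  drho/dt = D d2rho/dx2 - d/dx( v(x) rho ),  v = chi * dc/dx.
# D, chi, and the attractant profile c(x) are illustrative assumptions.
import numpy as np

L, N = 10.0, 200
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
D, chi = 0.05, 0.5
c = np.exp(-((x - 7.0) ** 2) / 8.0)     # assumed attractant, centered at 7
v = chi * np.gradient(c, dx)            # chemotactic drift velocity

rho = np.exp(-((x - 3.0) ** 2))         # population initially near x = 3
rho /= rho.sum() * dx                   # normalize to unit mass

dt = 0.2 * dx ** 2 / D                  # conservative explicit time step
for _ in range(20000):
    flux = v * rho - D * np.gradient(rho, dx)   # advective plus diffusive flux
    flux[0] = flux[-1] = 0.0                    # no-flux boundaries
    rho -= dt * np.gradient(flux, dx)

print("density now peaks near x =", round(float(x[rho.argmax()]), 2),
      "(attractant peak at 7.0)")
```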
This note analyzes incoherent feedforward loops (IFFLs) in signal processing and control. It studies the response properties of IFFLs to exponentially growing inputs, both for a standard version of the IFFL and for a variation in which the output variable has a positive self-feedback term. It also considers a negative feedback configuration that uses such a device as a controller, and uncovers a somewhat surprising phenomenon in which stabilization is only possible in disconnected regions of parameter space as the controlled system's growth rate is varied.
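
The exponential-input behaviour can again be probed with a toy model. In the sniffer-type IFFL below (an assumed stand-in, not the paper's exact equations), an input u(t) = exp(λt) drives the internal variable to track u up to the factor 1/(1+λ), so the output converges to the λ-dependent constant 1 + λ: the loop reports the input's growth rate rather than its diverging magnitude. A stiff solver is used because the output's effective relaxation rate grows with the input.

```python
# Sketch: a sniffer-type IFFL driven by u(t) = exp(lam * t).
#   dx/dt = u - x,   dy/dt = u - x * y
# Asymptotically x ~ u / (1 + lam), hence y -> 1 + lam: the output encodes
# the input's growth rate, not its diverging size. Model and parameters
# are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, lam):
    x, y = z
    u = np.exp(lam * t)
    return [u - x, u - x * y]

for lam in (0.1, 0.3, 0.5):
    # Radau handles the stiffness caused by the ever-growing rate x(t)
    sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 1.0], args=(lam,), method="Radau")
    print(f"lam = {lam}: y(40) = {sol.y[1, -1]:.3f}  (predicted {1 + lam:.3f})")
```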
We study the problem of sparse nonlinear model recovery of high dimensional compositional functions. Our study is motivated by emerging opportunities in neuroscience to recover fine-grained models of biological neural circuits using collected measurement data. Guided by available domain knowledge in neuroscience, we explore conditions under which one can recover the underlying biological circuit that generated the training data. Our results suggest insights of both theoretical and practical interest. Most notably, we find that a sign constraint on the weights is a necessary condition for system recovery, which we establish both theoretically with an identifiability guarantee and empirically on simulated biological circuits. We conclude with a case study on retinal ganglion cell circuits using data collected from mouse retina, showcasing the practical potential of this approach.
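
The role of the sign constraint can be illustrated on a synthetic recovery problem: fit a sparse nonnegative weight vector from underdetermined linear measurements, with and without the constraint. The random design, synthetic weights, and choice of scipy's bounded least-squares solver are all assumptions made for illustration; this is not the authors' pipeline.

```python
# Toy illustration of sign-constrained recovery: the true weights are
# sparse and nonnegative; constraining signs at fit time improves recovery.
# Synthetic data and solver choice are assumptions for illustration only.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_samples, n_features = 40, 100
A = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
w_true[rng.choice(n_features, size=5, replace=False)] = rng.uniform(0.5, 2.0, 5)
b = A @ w_true + 0.01 * rng.standard_normal(n_samples)

unconstrained = lsq_linear(A, b).x                         # any sign allowed
nonnegative = lsq_linear(A, b, bounds=(0.0, np.inf)).x     # sign-constrained

for name, w in [("unconstrained", unconstrained), ("nonnegative", nonnegative)]:
    print(f"{name:>13}: recovery error = {np.linalg.norm(w - w_true):.3f}")
```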
Animals form groups for many reasons, but there are costs and benefits associated with group formation. One of the benefits is collective memory. In groups on the move, social interactions play a crucial role in cohesion and in the ability to make consensus decisions. When migrating from spawning to feeding areas, fish schools need to retain a collective memory of the destination site over thousands of kilometers, and changes in group formation or individual preference can produce sudden changes in migration pathways. We propose a modelling framework, based on stochastic adaptive networks, that can reproduce this collective behaviour. We assume that three factors control group formation and school migration behaviour: the intensity of social interaction, the relative number of informed individuals, and the preference that each individual has for the particular migration area. We treat these factors independently and relate the individuals' preferences to their experience of and memory for certain migration sites. We demonstrate that removal of knowledgeable individuals or alteration of individual preferences can produce rapid changes in group formation and collective behaviour. For example, intensive fishing targeting the migratory species and also their preferred prey can reduce both terms to a point at which migration to the destination sites suddenly stops. The conceptual approaches represented by our modelling framework may therefore be able to explain large-scale changes in fish migration and spatial distribution.
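
A heavily stripped-down stand-in for such a model can already show the knowledge-removal effect: agents align a scalar migration tendency with the group mean (social interaction), informed agents are additionally pulled toward the destination (individual preference), and deleting the informed minority collapses the migration. Every rule and parameter below is invented for illustration and is not the authors' stochastic adaptive-network model.

```python
# Crude stand-in: agents align a scalar "migration tendency" with the group
# mean (social term); informed agents are also pulled toward the destination
# at +1. All rules and parameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def run(n_agents=200, n_informed=20, social=0.5, pref=0.3, steps=500):
    o = rng.normal(0.0, 0.1, n_agents)       # initial tendencies near 0
    informed = np.zeros(n_agents, bool)
    informed[:n_informed] = True
    for _ in range(steps):
        social_pull = social * (o.mean() - o)
        info_pull = np.where(informed, pref * (1.0 - o), 0.0)
        o += 0.1 * (social_pull + info_pull) + 0.02 * rng.normal(size=n_agents)
        o = np.clip(o, -1.0, 1.0)
    return o.mean()                           # > 0 means migration succeeds

print("with informed agents   :", round(run(n_informed=20), 2))
print("informed agents removed:", round(run(n_informed=0), 2))
```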
We introduce a new structure for memory neural networks, called feedforward sequential memory networks (FSMN), which can learn long-term dependencies without using recurrent feedback. The proposed FSMN is a standard feedforward neural network equipped with learnable sequential memory blocks in the hidden layers. In this work, we apply FSMN to several language modeling (LM) tasks. Experimental results show that the memory blocks in FSMN can learn effective representations of long history, and that FSMN-based language models significantly outperform not only feedforward neural network (FNN) based LMs but also the popular recurrent neural network (RNN) LMs.
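
The distinctive component of an FSMN is the memory block: at each time step the hidden activation is augmented with a learned weighted sum of the current and past hidden activations, a tapped-delay-line (convolution-like) operation with no recurrent feedback. Below is a minimal numpy sketch in the style of a scalar FSMN; the shapes and random initialization are illustrative assumptions.

```python
# Minimal sketch of a scalar-FSMN-style memory block: the memory output at
# step t is a learned weighted sum of the current and past N hidden vectors,
#     m_t = sum_{i=0}^{N} a[i] * h_{t-i},
# computed by convolution along time, with no recurrent feedback. Shapes
# and the random initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, d, N = 12, 8, 4                      # sequence length, hidden dim, history
H = rng.standard_normal((T, d))         # hidden activations from an FFN layer
a = rng.standard_normal(N + 1)          # learnable tap coefficients

M = np.zeros_like(H)
for t in range(T):
    for i in range(min(N, t) + 1):      # only taps that fit in the sequence
        M[t] += a[i] * H[t - i]

# The memory output M is typically fed, along with H, into the next layer.
print("memory block output shape:", M.shape)
```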