
Spiking Neural Networks modelled as Timed Automata with parameter learning

Posted by Cinzia Di Giusto
Publication date: 2018
Paper language: English





In this paper we present a novel approach to automatically infer the parameters of spiking neural networks. Neurons are modelled as timed automata waiting for inputs on a number of different channels (synapses) for a given amount of time (the accumulation period). When this period is over, the current potential value is computed from current and past inputs. If this potential exceeds a given threshold, the automaton emits a broadcast signal over its output channel; otherwise it starts another accumulation period. After each emission, the automaton remains inactive for a fixed refractory period. Spiking neural networks are formalised as sets of automata, one per neuron, running in parallel and sharing channels according to the network structure. The model is formally validated against crucial properties defined via temporal logic formulae. It is then exploited to find an assignment of the synaptic weights of a neural network such that it reproduces a given behaviour. The core of the approach is to identify correcting actions that adjust the synaptic weights and to back-propagate them until the expected behaviour is displayed. A concrete case study is discussed.
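The behaviour the abstract describes (accumulate inputs for a fixed period, fire on threshold, stay silent during the refractory period, then correct weights) can be sketched in a few lines of ordinary discrete-time code. The following is a minimal illustration only, under assumed names and parameters (`simulate`, `correct`, the leak factor `lam`, tick-based timing); the paper itself works with timed automata and temporal-logic validation, not with this simulation.

```python
# Minimal discrete-time sketch of the neuron described in the abstract.
# All names and parameters are illustrative assumptions, not the paper's
# formal timed-automaton model.

def simulate(weights, spike_trains, T=4, threshold=1.0, refractory=2, lam=0.5):
    """weights[i] is the synaptic weight of input i; spike_trains[i][t] is 1
    if synapse i carries a spike at tick t. Returns one output bit per
    accumulation period of T ticks."""
    potential, out, wait = 0.0, [], 0
    n_ticks = len(spike_trains[0])
    for start in range(0, n_ticks, T):
        if wait > 0:                          # refractory: ignore all inputs
            wait -= 1
            out.append(0)
            continue
        window = sum(w * sum(train[start:start + T])
                     for w, train in zip(weights, spike_trains))
        potential = lam * potential + window  # past inputs decay by lam
        if potential >= threshold:
            out.append(1)                     # broadcast over output channel
            potential, wait = 0.0, refractory
        else:
            out.append(0)                     # restart accumulation
    return out

def correct(weights, window_inputs, fired, should_fire, eta=0.1):
    """One correcting action in the spirit of the abstract: nudge the
    weights of recently active synapses towards the expected behaviour."""
    if fired == should_fire:
        return weights
    sign = 1 if should_fire else -1
    return [w + sign * eta * x for w, x in zip(weights, window_inputs)]
```

Iterating `correct` over the neurons of a network, from the outputs back towards the inputs, is the simplest reading of the back-propagated correcting actions the abstract mentions.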




Read also

Active learning of timed languages is concerned with the inference of timed automata from observed timed words. The agent can query for the membership of words in the target language, or propose a candidate model and verify its equivalence to the target. The major difficulty of this framework is the inference of clock resets, central to the dynamics of timed automata but not directly observable. Interesting first steps have already been made by restricting to the subclass of event-recording automata, where clock resets are tied to observations. In order to advance towards learning of general timed automata, we generalize this method to a new class, called reset-free event-recording automata, where some transitions may reset no clocks. This offers the same challenges as generic timed automata while keeping the simpler framework of event-recording automata for the sake of readability. Central to our contribution is the notion of invalidity, and the algorithm and data structures to deal with it, allowing on-the-fly detection and pruning of reset hypotheses that contradict observations, a key to any efficient active-learning procedure for generic timed automata.
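As a toy illustration of what pruning reset hypotheses against observations can look like in the single-clock case, the sketch below enumerates reset choices along one observed timed word and discards those whose induced clock values violate constraints the learner has already confirmed. This is not the paper's algorithm or data structures; `guard_check` is a hypothetical stand-in for the learner's current knowledge.

```python
# Toy pruning of single-clock reset hypotheses (illustrative only).
from itertools import product

def clock_values(timed_word, resets):
    """Clock value read at each event, given absolute timestamps and one
    reset decision per transition."""
    values, last_reset = [], 0.0
    for (letter, t), reset in zip(timed_word, resets):
        values.append(t - last_reset)
        if reset:
            last_reset = t
    return values

def prune(timed_word, guard_check):
    """Keep only the reset hypotheses whose induced clock values satisfy
    the constraints encoded by guard_check (a hypothetical oracle)."""
    survivors = []
    for resets in product([False, True], repeat=len(timed_word)):
        vals = clock_values(timed_word, resets)
        if all(guard_check(letter, v)
               for (letter, _), v in zip(timed_word, vals)):
            survivors.append(resets)
    return survivors

# Example: every 'a' is known to fire with clock value <= 2.
word = [("a", 1.1), ("b", 2.0), ("a", 3.5)]
ok = lambda letter, v: v <= 2.0 if letter == "a" else True
print(prune(word, ok))  # hypotheses contradicting the observation are gone
```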
We present TarTar, an automatic repair analysis tool that, given a timed diagnostic trace (TDT) obtained during the model checking of a timed automaton model, suggests possible syntactic repairs of the analyzed model. The suggested repairs include modified values for clock bounds in location invariants and transition guards, adding or removing clock resets, etc. The proposed repairs are guaranteed to eliminate executability of the given TDT, while preserving the overall functional behavior of the system. We give insights into the design and architecture of TarTar, and show that it can successfully repair 69% of the seeded errors in system models taken from a diverse suite of case studies.
Mean-payoff games on timed automata are played on the infinite weighted graph of configurations of priced timed automata between two players, Player Min and Player Max, by moving a token along the states of the graph to form an infinite run. The goal of Player Min is to minimize the limit average weight of the run, while the goal of Player Max is the opposite. Brenguier, Cassez, and Raskin recently studied a variation of these games and showed that mean-payoff games are undecidable for timed automata with five or more clocks. We refine this result by proving the undecidability of mean-payoff games with three clocks. On the positive side, we show the decidability of mean-payoff games on one-clock timed automata with binary price-rates. A key contribution of this paper is the application of dynamic-programming-based proof techniques in the context of average-reward optimization on an uncountable state and action space.
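For reference, the limit average weight of an infinite run $\rho = e_1 e_2 \ldots$ with edge weights $w(e_i)$ is standardly defined as below; note that in the priced-timed setting some formulations normalise the accumulated price by elapsed time rather than by the number of steps, a choice the abstract does not spell out.

```latex
% Mean payoff of an infinite run \rho = e_1 e_2 \ldots
\mathrm{MP}(\rho) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w(e_i)
```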
As the limits of traditional von Neumann computing come into view, the brain's ability to communicate vast quantities of information using low-power spikes has become an increasing source of inspiration for alternative architectures. Key to the success of these large-scale neural networks is a power-efficient spiking element that is scalable and easily interfaced with traditional control electronics. In this work, we present a spiking element fabricated from superconducting nanowires that has pulse energies on the order of ~10 aJ. We demonstrate that the device reproduces essential characteristics of biological neurons, such as a refractory period and a firing threshold. Through simulations using experimentally measured device parameters, we show how nanowire-based networks may be used for inference in image recognition, and that the probabilistic nature of nanowire switching may be exploited for modeling biological processes and for applications that rely on stochasticity.
In this paper, we present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks, which encompass a set of optimization techniques for high-order tensors used in quantum physics and numerical analysis. We first present an intrinsic relation between WFA and the tensor train decomposition, a particular form of tensor network. This relation allows us to exhibit a novel low-rank structure of the Hankel matrix of a function computed by a WFA and to design an efficient spectral learning algorithm leveraging this structure to scale the algorithm up to very large Hankel matrices. We then unravel a fundamental connection between WFA and second-order recurrent neural networks (2-RNN): in the case of sequences of discrete symbols, WFA and 2-RNN with linear activation functions are expressively equivalent. Furthermore, we introduce the first provable learning algorithm for linear 2-RNN defined over sequences of continuous input vectors. This algorithm relies on estimating low-rank sub-blocks of the Hankel tensor, from which the parameters of a linear 2-RNN can be provably recovered. The performance of the proposed learning algorithm is assessed in a simulation study on both synthetic and real-world data.
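The spectral-learning step this abstract scales up is, at its core, the textbook method: truncate the SVD of the Hankel matrix and read off the WFA parameters. The sketch below shows only that core step, under the assumption that the empty string indexes the first row and first column of `H`; the paper's actual contribution (exploiting tensor-train structure for very large Hankel matrices) is not reproduced here.

```python
# Textbook spectral learning of a WFA from Hankel blocks (sketch).
import numpy as np

def spectral_wfa(H, H_sigmas, rank):
    """H[p, s] = f(ps) over chosen prefixes p and suffixes s; H_sigmas maps
    each symbol sigma to the shifted block H_sigma[p, s] = f(p sigma s).
    Assumes the empty string is prefix 0 and suffix 0."""
    U, lam, Vt = np.linalg.svd(H, full_matrices=False)
    U, lam, V = U[:, :rank], lam[:rank], Vt[:rank].T
    pinv_P = (U / lam).T                 # (U diag(lam))^+ = diag(1/lam) U^T
    As = {s: pinv_P @ Hs @ V for s, Hs in H_sigmas.items()}
    alpha0 = H[0] @ V                    # row of the empty prefix
    alpha_inf = pinv_P @ H[:, 0]         # column of the empty suffix
    return alpha0, As, alpha_inf

def evaluate(alpha0, As, alpha_inf, word):
    """f(word) ~= alpha0 . A_{x1} ... A_{xk} . alpha_inf."""
    v = alpha0
    for s in word:
        v = v @ As[s]
    return float(v @ alpha_inf)
```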