
Transfer entropy in continuous time, with applications to jump and neural spiking processes

 Added by Richard Spinney
 Publication date 2016
Research language: English





Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. While transfer entropy was originally formulated in discrete time, in this paper we provide a framework for considering transfer entropy in continuous time systems, based on Radon-Nikodym derivatives between measures of complete path realizations. To describe the information dynamics of individual path realizations, we introduce the pathwise transfer entropy, the expectation of which is the transfer entropy accumulated over a finite time interval. We demonstrate that this formalism permits an instantaneous transfer entropy rate. These properties are analogous to the behavior of physical quantities defined along paths such as work and heat. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely, that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes). Numerical schemes based on our formalism promise significant benefits over existing strategies based on discrete time formalisms.
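For contrast with the continuous-time formalism the abstract describes, a minimal discrete-time transfer entropy estimator can be sketched for binned binary spike trains (history length 1). This is the baseline plug-in scheme the paper's continuous-time approach aims to improve on, not the paper's own method; the function name and estimation details are illustrative:

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in estimate of discrete-time transfer entropy TE(source -> target),
    with history length 1, for binned (e.g. 0/1 spike) time series.
    TE = sum p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t,y_t) / p(x_{t+1}|x_t) ].
    """
    n = len(target) - 1
    joint = Counter()    # counts of (x_next, x_prev, y_prev)
    cond_xy = Counter()  # counts of (x_prev, y_prev)
    pair = Counter()     # counts of (x_next, x_prev)
    hist = Counter()     # counts of (x_prev,)
    for t in range(n):
        joint[(target[t + 1], target[t], source[t])] += 1
        cond_xy[(target[t], source[t])] += 1
        pair[(target[t + 1], target[t])] += 1
        hist[target[t]] += 1
    te = 0.0
    for (xn, xp, yp), c in joint.items():
        p_joint = c / n                          # p(x_next, x_prev, y_prev)
        p_full = c / cond_xy[(xp, yp)]           # p(x_next | x_prev, y_prev)
        p_marg = pair[(xn, xp)] / hist[xp]       # p(x_next | x_prev)
        te += p_joint * log2(p_full / p_marg)
    return te
```

For a target that simply copies the source with a one-step delay, this estimator returns close to 1 bit per time step, since the source history fully determines the target's next state while the target's own history carries no information about it.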



Related research


The characterisation of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modelling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated how to formulate transfer entropy in continuous time, we give a total account of information processing in this setting, incorporating information storage. We find that a convergent rate of predictive capacity, composed of the transfer entropy and active information storage, does not exist, owing to divergent rates of active information storage. We identify that active information storage can be decomposed into two separate quantities that characterise predictive capacity stored in a process: active memory utilisation and instantaneous predictive capacity. The latter involves prediction related to path regularity and so solely inherits the divergent properties of the active information storage, whilst the former permits definitions of pathwise and rate quantities. We formulate measures of memory utilisation for jump and neural spiking processes and illustrate measures of information processing in synthetic neural spiking models and coupled Ornstein-Uhlenbeck models. The application to synthetic neural spiking models demonstrates that active memory utilisation for point processes consists of discontinuous jump contributions (at spikes) interrupting a continuously varying contribution (relating to waiting times between spikes), complementing the behaviour previously demonstrated for transfer entropy in these processes.
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes, and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.
Robert Shour, 2012
Two principles explain emergence. First, in the Receipt's reference frame, Deg(S) = 4/3 Deg(R), where Supply S is an isotropic radiative energy source, Receipt R receives S's energy, and Deg is a system's degrees of freedom based on its mean path length. S's 1/3 more degrees of freedom relative to R enables R's growth and increasing complexity. Second, rho(R) = Deg(R) times rho(r), where rho(R) represents the collective rate of R and rho(r) represents the rate of an individual in R: as Deg(R) increases due to the first principle, the multiplier effect of networking in R increases. A universe like ours with isotropic energy distribution, in which both principles are operative, is therefore predisposed to exhibit emergence, and, for reasons shown, a ubiquitous role for the natural logarithm.
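The two stated principles reduce to simple arithmetic, which a tiny sketch can make concrete; the numerical values for Deg(R) and rho(r) below are illustrative assumptions, not figures from the paper:

```python
# Worked numbers for the two principles (illustrative values, not from the paper).
deg_R = 3.0                  # degrees of freedom of Receipt R (assumed value)
deg_S = (4.0 / 3.0) * deg_R  # first principle: Deg(S) = 4/3 * Deg(R)

rho_r = 2.0                  # rate of a single individual in R (assumed value)
rho_R = deg_R * rho_r        # second principle: collective rate rho(R) = Deg(R) * rho(r)

print(deg_S, rho_R)          # Supply's extra 1/3 degrees of freedom, and the network multiplier
```

With these numbers, Deg(S) = 4 and rho(R) = 6: the Supply carries one extra degree of freedom over the Receipt, and the collective rate scales linearly with the Receipt's degrees of freedom.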
Considering the widespread use of effective capacity in cross-layer design and the prevalence of renewal service processes in communication networks, this paper thoroughly investigates the effective capacity of renewal processes. Based on the Z-transform, we derive exact analytical expressions for the effective capacity at a given quality of service (QoS) exponent, for renewal processes both with constant reward and with variable rewards. Unlike prior literature, in which the effective capacity is only approximated and offers little insight, our expression is simple and reveals further meaningful results, such as the monotonicity and bounds of the effective capacity. The analytical results are then applied to evaluate the cross-layer throughput of diverse hybrid automatic repeat request (HARQ) systems, including fixed-rate HARQ (FR-HARQ, e.g., Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR)), variable-rate HARQ (VR-HARQ) and cross-packet HARQ (XP-HARQ). Numerical results corroborate the analytical ones and demonstrate the superiority of the proposed approach. Furthermore, when maximizing the effective capacity via optimal rate selection, it is revealed that VR-HARQ and XP-HARQ attain almost the same performance, and both perform better than FR-HARQ.
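The paper's closed-form Z-transform expressions are not reproduced in the abstract, but the defining quantity can be sketched with a generic Monte Carlo estimate of the effective capacity of a renewal service process with constant reward. Function names and parameters here are illustrative assumptions, not the paper's analytical result:

```python
import math
import random

def effective_capacity(theta, service_fn, T, trials=2000):
    """Monte Carlo estimate of the effective capacity
        EC(theta) = -(1 / (theta * T)) * log E[exp(-theta * S(T))],
    where S(T) is the cumulative service delivered over a horizon T and
    theta is the QoS exponent. A generic sketch, not the paper's closed form."""
    acc = 0.0
    for _ in range(trials):
        acc += math.exp(-theta * service_fn(T))
    return -math.log(acc / trials) / (theta * T)

def renewal_service(T, rate=1.0, reward=1.0):
    """Cumulative service of a renewal process with exponential inter-renewal
    times and a constant reward earned at each renewal."""
    t, s = 0.0, 0.0
    while True:
        t += random.expovariate(rate)
        if t > T:
            return s
        s += reward
```

As theta approaches 0, the estimate approaches the mean service rate (here rate * reward = 1), and it decreases monotonically as the QoS exponent grows, consistent with the monotonicity the abstract highlights.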
Igal Sason, 2018
This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with a finite support is derived as a function of the size of the support, and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which is focused on the Shannon entropy, and it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and lossless data compression of discrete memoryless sources.
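As a minimal sketch of the quantities involved, the following assumes only the standard definition of the Rényi entropy of order alpha; the helper names are illustrative, and the paper's actual tight bounds (in terms of support size and the max/min probability-mass ratio) are not reproduced here:

```python
from math import log2

def renyi_entropy(probs, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in bits:
    H_alpha(X) = (1 / (1 - alpha)) * log2( sum_i p_i^alpha )."""
    if alpha == 1.0:
        raise ValueError("order 1 is the Shannon limit; handle it separately")
    return log2(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)

def pushforward(probs, f):
    """Distribution of f(X) for X with mass function probs over indices 0..n-1:
    masses mapped to the same value by f are merged (f need not be one-to-one)."""
    out = {}
    for i, p in enumerate(probs):
        out[f(i)] = out.get(f(i), 0.0) + p
    return list(out.values())
```

Merging masses under a non-injective f can only reduce the Rényi entropy, which is why the paper's lower bounds on the entropy of f(X) are the nontrivial direction.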
