Event occurrence is not only subject to environmental changes, but is also facilitated by the events that have occurred in a system. Here, we develop a method for estimating such extrinsic and intrinsic factors from a single series of event-occurrence times. The analysis is performed using a model that combines the inhomogeneous Poisson process and the Hawkes process, which represent exogenous fluctuations and endogenous chain-reaction mechanisms, respectively. The model is fit to a given dataset by minimizing the free energy, for which statistical physics and a path-integral method are utilized. Because the process of event occurrence is stochastic, parameter estimation is inevitably accompanied by errors, and it may ultimately be impossible to capture exogenous and endogenous factors even with the best estimator. We obtained four regimes, categorized according to whether the respective factors are detected. By applying the analytical method to a real time series of debate in a social-networking service, we observed that the estimated exogenous and endogenous factors are close to the first comments and the follow-up comments, respectively. This method is general and applicable to a variety of data, and we have provided an application program with which anyone can analyze any series of event times.
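The combined model above can be sketched in a few lines. The following is a minimal illustration, not the paper's code: an inhomogeneous Poisson baseline nu(t) plus Hawkes self-excitation, simulated by Ogata's thinning algorithm. The sinusoidal baseline, the exponential kernel, and all parameter values are our own assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

NU_MAX = 1.5                                  # upper bound of nu(t), needed for thinning

def nu(t):
    # hypothetical exogenous fluctuation: a slow sinusoidal modulation
    return 1.0 + 0.5 * np.sin(2 * np.pi * t / 50.0)

def intensity(t, events, alpha, beta):
    # lambda(t) = nu(t) + sum over past events of alpha*beta*exp(-beta*(t - t_i))
    if not events:
        return nu(t)
    s = np.asarray(events)
    return nu(t) + np.sum(alpha * beta * np.exp(-beta * (t - s)))

def simulate(T, alpha=0.5, beta=1.0):
    events, t = [], 0.0
    while True:
        # the kernel only decays, so NU_MAX plus the current excitation bounds lambda
        lam_bar = NU_MAX + (intensity(t, events, alpha, beta) - nu(t))
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        if rng.uniform() * lam_bar <= intensity(t, events, alpha, beta):
            events.append(t)

events = simulate(100.0)
```

With a branching ratio alpha = 0.5, roughly half of the simulated events are endogenous "offspring" of earlier events, which is exactly the decomposition the estimation method is meant to recover.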
The occurrence of new events in a system is typically driven by external causes and by previous events taking place inside the system. This is a general statement, applying to a range of situations including, more recently, the activity of users in online social networks (OSNs). Here we develop a method for extracting from a series of posting times the relative contributions of exogenous factors, e.g. news media, and endogenous factors, e.g. information cascades. The method is based on fitting a generalized linear model (GLM) equipped with a self-excitation mechanism. We test the method on synthetic data generated by a nonlinear Hawkes process, and apply it to a real time series of tweets with a given hashtag. In the empirical dataset, the estimated contributions of the exogenous and endogenous volumes are close to the numbers of original tweets and retweets, respectively. We conclude by discussing possible applications of the method, for instance in online marketing.
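A GLM with self-excitation of the kind described above can be illustrated in discrete time. In this sketch (covariate names and parameter values are ours, not the paper's), the log-rate in each bin combines an exogenous covariate with the previous bin's count, and the coefficients are recovered by gradient ascent on the Poisson log-likelihood, which is concave for this link.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 2000
x = np.sin(2 * np.pi * np.arange(T) / 200.0)       # exogenous drive
b_true = np.array([0.0, 0.5, 0.1])                 # bias, exogenous, self-excitation

counts = np.zeros(T)
for t in range(1, T):
    log_rate = b_true[0] + b_true[1] * x[t] + b_true[2] * counts[t - 1]
    counts[t] = rng.poisson(np.exp(log_rate))

h = np.concatenate([[0.0], counts[:-1]])           # history covariate (previous count)
X = np.column_stack([np.ones(T), x, h])

b = np.zeros(3)
for _ in range(2000):                              # gradient ascent on the log-likelihood
    grad = X.T @ (counts - np.exp(X @ b)) / T
    b += 0.05 * grad

print(b)   # close to b_true
```

The fitted exogenous and self-excitation coefficients then give the two volume contributions that, in the empirical data, line up with original tweets and retweets.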
We propose a statistical model for networks of event count sequences built on a cascade structure. We assume that each event triggers successor events, whose counts follow additive probability distributions; the ensemble of counts is given by their superposition. These assumptions allow the marginal distribution of count sequences and the conditional distribution of event cascades to take analytic forms. We present our model framework using Poisson and negative binomial distributions as the building blocks. Based on this formulation, we describe a statistical method for estimating the model parameters and event cascades from the observed count sequences.
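The Poisson building block of such a cascade can be sketched directly. Below, each bin receives a Poisson number of "immigrant" events, and every event spawns a Poisson number of successors one bin later; the observed count is their superposition. The parameter names (mu for the immigrant rate, m for the mean offspring number) are ours for illustration. Because a sum of independent Poisson offspring counts is again Poisson, the stationary mean works out to mu / (1 - m).

```python
import numpy as np

rng = np.random.default_rng(2)

mu, m, T = 2.0, 0.5, 100_000          # immigrant rate, mean offspring per event
counts = np.zeros(T, dtype=int)
for t in range(T):
    immigrants = rng.poisson(mu)
    offspring = rng.poisson(m * counts[t - 1]) if t > 0 else 0
    counts[t] = immigrants + offspring

# stationary mean of the superposed counts: mu / (1 - m) = 4.0
print(counts.mean())
```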
120 - Shinsuke Koyama 2016
This study concerns online inference (i.e., filtering) on the state of reaction networks, conditioned on noisy and partial measurements. The difficulty in deriving the equation that the conditional probability distribution of the state satisfies stems from the fact that the master equation, which governs the evolution of the reaction networks, is analytically intractable. The linear noise approximation (LNA) technique, which is widely used in the analysis of reaction networks, has recently been applied to develop approximate inference. Here, we apply the projection method to derive approximate filters, and compare them to a filter based on the LNA numerically in their filtering performance. We also contrast the projection method with moment-closure techniques in terms of approximating the evolution of stochastic reaction networks.
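To make the filtering setting concrete, here is a generic bootstrap particle filter (a Monte Carlo baseline, not the projection or LNA filters studied above) applied to the simplest reaction network: a birth-death process X -> X+1 at rate k and X -> X-1 at rate g*X, observed through additive Gaussian noise. The tau-leaping propagator and all parameter values are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

k, g, dt, sigma = 10.0, 0.5, 0.1, 2.0

def step(x):
    # tau-leaping approximation of the master-equation dynamics over dt
    births = rng.poisson(k * dt, size=x.shape)
    deaths = rng.poisson(g * np.maximum(x, 0) * dt)
    return np.maximum(x + births - deaths, 0)

# simulate a true trajectory and noisy partial observations
x_true = np.array([20.0])
obs = []
for _ in range(100):
    x_true = step(x_true)
    obs.append(x_true[0] + rng.normal(0.0, sigma))

# bootstrap filter: propagate, weight by the likelihood, resample
N = 2000
particles = np.full(N, 20.0)
estimates = []
for y in obs:
    particles = step(particles)
    w = np.exp(-0.5 * ((y - particles) / sigma) ** 2) + 1e-12
    particles = particles[rng.choice(N, size=N, p=w / w.sum())]
    estimates.append(particles.mean())
```

Deterministic approximate filters such as those derived by the projection method aim to match this kind of Monte Carlo estimate at a fraction of the computational cost.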
76 - Shinsuke Koyama 2014
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship, and based on this a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
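The variance-to-mean power relationship can be realized with gamma-distributed interspike intervals, since a gamma distribution can be given any target mean and variance by matching moments. In this sketch (a simple moment-matched construction, not necessarily the paper's model), phi is the scale factor and alpha the exponent of the power function Var[ISI] = phi * Mean[ISI]**alpha.

```python
import numpy as np

rng = np.random.default_rng(4)

phi, alpha = 0.5, 1.5

def sample_isis(mean_isi, n):
    var = phi * mean_isi ** alpha
    shape = mean_isi ** 2 / var          # gamma shape: mean^2 / variance
    scale = var / mean_isi               # gamma scale: variance / mean
    return rng.gamma(shape, scale, size=n)

isis = sample_isis(2.0, 200_000)
print(isis.mean(), isis.var())           # approx 2.0 and 0.5 * 2**1.5
```

Changing mean_isi (the firing rate) then moves the variance along the power law, which is the dependency structure the two parameters control.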
Fluctuation scaling has been observed universally in a wide variety of phenomena. In time series that describe sequences of events, fluctuation scaling is expressed as power function relationships between the mean and variance of either inter-event intervals or counting statistics, depending on measurement variables. In this article, fluctuation scaling has been formulated for a series of events in which scaling laws in the inter-event intervals and counting statistics were related. We have considered the first-passage time of an Ornstein-Uhlenbeck process and used a conductance-based neuron model with excitatory and inhibitory synaptic inputs to demonstrate the emergence of fluctuation scaling with various exponents, depending on the input regimes and the ratio between excitation and inhibition. Furthermore, we have discussed the possible implication of these results in the context of neural coding.
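The first-passage construction mentioned above is easy to simulate. The sketch below draws first-passage times of an Ornstein-Uhlenbeck process to a threshold, the standard caricature of a leaky integrate-and-fire neuron; the parameter values (and the suprathreshold regime they place the model in) are our choices for illustration, using a simple Euler-Maruyama discretization.

```python
import numpy as np

rng = np.random.default_rng(5)

tau, mu, sigma, theta, dt = 10.0, 1.2, 0.5, 10.0, 0.01

n = 400
v = np.zeros(n)                      # membrane variable, one entry per trial
t = np.zeros(n)                      # elapsed time per trial
alive = np.ones(n, dtype=bool)       # trials that have not yet crossed theta
while alive.any():
    idx = np.flatnonzero(alive)
    v[idx] += (-v[idx] / tau + mu) * dt + sigma * np.sqrt(dt) * rng.normal(size=idx.size)
    t[idx] += dt
    alive[idx] = v[idx] < theta

intervals = t                        # first-passage (inter-event) times
print(intervals.mean(), intervals.var())
```

Repeating this across input regimes (varying mu and sigma) and plotting the variance of the intervals against their mean is how a scaling exponent would be read off.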
218 - Shinsuke Koyama 2013
The fluctuation scaling law has universally been observed in a wide variety of phenomena. For counting processes describing the number of events that occur during time intervals, it is expressed as a power function relationship between the variance and the mean of the event count per unit time, the characteristic exponent of which is obtained theoretically in the limit of long duration of counting windows. Here I show that the scaling law effectively appears even on a short timescale in which only a few events occur. Consequently, the counting statistics of nonstationary event sequences are shown to exhibit the scaling law as well as the dynamics at the temporal resolution of this timescale. I also propose a method to extract in a systematic manner the characteristic scaling exponent from nonstationary data.
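The basic extraction step, fitting the exponent of the power law, amounts to a regression of log variance on log mean across conditions. The sketch below is a generic version of that step (not the paper's systematic method for nonstationary data): the counts are synthetic continuous surrogates built to satisfy Var = 2 * Mean**1.5, so the recovered slope is checkable.

```python
import numpy as np

rng = np.random.default_rng(6)

means = np.linspace(2, 50, 20)
log_m, log_v = [], []
for mu in means:
    var = 2.0 * mu ** 1.5
    shape, scale = mu ** 2 / var, var / mu     # gamma with the target moments
    counts = rng.gamma(shape, scale, size=20_000)
    log_m.append(np.log(counts.mean()))
    log_v.append(np.log(counts.var()))

# least-squares fit of log(variance) = log(a) + b * log(mean)
b, log_a = np.polyfit(log_m, log_v, 1)
print(b)    # estimated scaling exponent, close to 1.5
```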
The authors previously considered a method for solving optimization problems by using a system of an interconnected network of two-component Bose-Einstein condensates (Byrnes, Yan, Yamamoto, New J. Phys. 13, 113025 (2011)). The use of bosonic particles was found to give a reduced time proportional to the number of bosons N for solving Ising model Hamiltonians by taking advantage of enhanced bosonic cooling rates. In this paper we consider the same system in terms of neural networks. We find that up to the accelerated cooling of the bosons the previously proposed system is equivalent to a stochastic continuous Hopfield network. This makes it clear that the BEC network is a physical realization of a simulated annealing algorithm, with an additional speedup due to bosonic enhancement. We discuss the BEC network in terms of typical neural network tasks such as learning and pattern recognition and find that the latter process may be accelerated by a factor of N.
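The classical counterpart of the BEC annealer, a stochastic Hopfield network with an annealed temperature, can be sketched directly. Below, a single pattern is stored with a Hebbian weight matrix, and single-spin Glauber updates with a decreasing temperature carry out simulated annealing; network size, schedule, and update count are our choices for this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

pattern = rng.choice([-1, 1], size=50)          # one stored pattern
W = np.outer(pattern, pattern) / 50.0           # Hebbian weights
np.fill_diagonal(W, 0.0)

s = rng.choice([-1, 1], size=50)                # random initial state
for step in range(4000):
    T = max(2.0 * (1 - step / 4000), 0.01)      # annealing schedule
    i = rng.integers(50)
    h = W[i] @ s                                # local field on spin i
    p_up = 1.0 / (1.0 + np.exp(-2 * h / T))     # Glauber flip probability
    s[i] = 1 if rng.uniform() < p_up else -1

overlap = abs(pattern @ s) / 50.0
print(overlap)     # near 1 when the network settles into the stored pattern
```

In the BEC network, the claimed bosonic enhancement would accelerate the cooling stage of exactly this kind of annealing dynamics.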
173 - Shinsuke Koyama 2012
Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response, and primarily focuses on understanding how neurons respond to a wide variety of stimuli, and on constructing models that accurately describe the stimulus-response relationship. Neural decoding, on the other hand, refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal response is stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here, we use these two perspectives to investigate the question of what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, i.e., the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by spike counts, or the sample mean, does not always decode the rate codes efficiently, but its efficiency in reading certain rate codes can be improved when correlations within a spike train are taken into account.
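The renewal-process setting above is easy to make concrete. In this sketch (parameter values are our own), spikes form a gamma renewal process, and the rate is decoded by the spike count in a window; because a gamma train with shape > 1 is more regular than Poisson, the count fluctuates less, which is reflected in a Fano factor near 1/shape for long windows.

```python
import numpy as np

rng = np.random.default_rng(8)

rate, shape, T = 20.0, 4.0, 10.0     # firing rate, gamma shape, window length

def count_in_window():
    t, n = 0.0, 0
    while True:
        t += rng.gamma(shape, 1.0 / (rate * shape))   # mean ISI = 1/rate
        if t > T:
            return n
        n += 1

counts = np.array([count_in_window() for _ in range(2000)])
rate_hat = counts / T                 # the spike-count ("sample mean") decoder
fano = counts.var() / counts.mean()
print(rate_hat.mean(), fano)          # approx 20 and approx 1/shape = 0.25
```

Whether this count-based decoder is statistically efficient, and when interval correlations let a decoder do better, is precisely the question the abstract addresses.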
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
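A one-dimensional sketch conveys the idea of such a filter: a Gaussian AR(1) state observed through Poisson counts with a log link, where each step combines the Gaussian prediction with a Laplace approximation (posterior mode found by Newton iterations, variance from the inverse curvature at the mode). The model and all parameter values here are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

a, q = 0.98, 0.05                     # state transition and process noise variance

# simulate a latent state and Poisson observations with log link
T = 300
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = rng.poisson(np.exp(x + 1.0))      # baseline log-rate of 1.0

m, v = 0.0, 1.0                       # filter mean and variance
est = []
for t in range(T):
    mp, vp = a * m, a * a * v + q     # Gaussian prediction step
    m = mp
    for _ in range(20):               # Newton iterations for the posterior mode
        lam = np.exp(m + 1.0)
        grad = (y[t] - lam) - (m - mp) / vp
        hess = -lam - 1.0 / vp
        m -= grad / hess
    v = -1.0 / hess                   # Laplace: inverse curvature at the mode
    est.append(m)

err = np.sqrt(np.mean((np.array(est) - x) ** 2))
print(err)
```

Like the LGF, every step here is deterministic and recursive, so the cost per observation is a handful of scalar operations rather than a population of Monte Carlo samples.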