
A power-law decay model with autocorrelation for posting data to social networking services

 Added by Akimichi Takemura
 Publication date 2014
Research language: English





We propose a power-law decay model with autocorrelation for posting data to social networking services concerning particular events such as national holidays or major sporting events. For these kinds of events we observe people's interest both before and after the event. In our model the number of postings has a Poisson distribution whose expected value decays as a power law. Our model also incorporates autocorrelation through an autoregressive specification of the expected value. We show that our proposed model fits data from social networking services well.
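As a concrete illustration of the model class, here is a minimal Python sketch that simulates post counts after an event. The particular functional form (a Poisson mean equal to a power-law decay term plus an AR(1)-type feedback on the previous count), the parameter names, and all parameter values are illustrative assumptions, not the paper's exact specification, and the pre-event build-up is omitted.

import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(T=200, c=500.0, alpha=1.2, phi=0.3):
    # Hypothetical specification: lambda_t = c * t**(-alpha) + phi * n_{t-1},
    # n_t ~ Poisson(lambda_t); the power-law term captures decaying interest
    # after the event, the phi term adds autocorrelation between counts.
    counts = np.zeros(T, dtype=int)
    prev = 0
    for t in range(1, T + 1):
        lam = c * t ** (-alpha) + phi * prev
        counts[t - 1] = rng.poisson(lam)
        prev = counts[t - 1]
    return counts

print(simulate_counts()[:10])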



Related research

Akimichi Takemura (2015)
We present a short proof of the fact that the exponential decay rate of partial autocorrelation coefficients of a short-memory process, in particular an ARMA process, is equal to the exponential decay rate of the coefficients of its infinite autoregressive representation.
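As a numerical illustration of this decay-rate statement (not the paper's proof), the sketch below computes partial autocorrelations with the Durbin-Levinson recursion for an MA(1) process x_t = e_t + theta * e_{t-1}: its infinite autoregressive coefficients are (-theta)^j, so both sequences should shrink at rate |theta|. The function name and the choice theta = 0.6 are illustrative.

import numpy as np

def pacf_from_acov(gamma, nlags):
    # Durbin-Levinson recursion: partial autocorrelations phi_{k,k}
    # computed from the autocovariances gamma[0..nlags].
    pacf = np.zeros(nlags + 1)
    pacf[0] = 1.0
    phi_prev = np.zeros(nlags + 1)
    v = gamma[0]
    for k in range(1, nlags + 1):
        acc = gamma[k] - sum(phi_prev[j] * gamma[k - j] for j in range(1, k))
        phi_kk = acc / v
        phi_new = phi_prev.copy()
        phi_new[k] = phi_kk
        for j in range(1, k):
            phi_new[j] = phi_prev[j] - phi_kk * phi_prev[k - j]
        v *= 1 - phi_kk ** 2
        phi_prev = phi_new
        pacf[k] = phi_kk
    return pacf

theta = 0.6
gamma = np.zeros(16)
gamma[0], gamma[1] = 1 + theta ** 2, theta   # autocovariances of an MA(1)
p = pacf_from_acov(gamma, 15)
print(np.abs(p[1:6]))   # magnitudes decay roughly like theta**k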
We prove the consistency of the Power-Law Fit (PLFit) method proposed by Clauset et al. (2009) to estimate the power-law exponent in data coming from a distribution function with regularly-varying tail. In the complex systems community, PLFit has emerged as the method of choice to estimate the power-law exponent. Yet, its mathematical properties are still poorly understood. The difficulty in PLFit is that it is a minimum-distance estimator. It first chooses a threshold that minimizes the Kolmogorov-Smirnov distance between the data points larger than the threshold and the Pareto tail, and then applies the Hill estimator to this restricted data. Since the number of order statistics used is random, the general theory of consistency of power-law exponents from extreme value theory does not apply. Our proof consists of first showing that the Hill estimator is consistent for general intermediate sequences for the number of order statistics used, even when that number is random. Here, we call a sequence intermediate when it grows to infinity, while remaining much smaller than the sample size. The second, and most involved, step is to prove that the optimizer in PLFit is with high probability an intermediate sequence, unless the distribution has a Pareto tail above a certain value. For the latter special case, we give a separate proof.
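A compact sketch of the two-step procedure described above: for each candidate threshold, fit the tail exponent by the Hill/maximum-likelihood estimator and keep the threshold that minimizes the Kolmogorov-Smirnov distance between the empirical tail and the fitted Pareto tail. This follows the spirit of Clauset et al. (2009) but is a simplified illustration; the function name, the minimum tail size, and the synthetic Pareto sample are assumptions.

import numpy as np

def plfit_exponent(data, min_tail=10):
    # Scan thresholds; score each candidate by the KS distance of the tail fit.
    x = np.sort(np.asarray(data, dtype=float))
    best = (np.inf, None, None)                      # (KS distance, alpha, x_min)
    for i in range(len(x) - min_tail):
        xmin = x[i]
        tail = x[i:]
        n = len(tail)
        alpha = 1.0 + n / np.sum(np.log(tail / xmin))  # Hill / MLE estimate
        emp_cdf = np.arange(1, n + 1) / n
        fit_cdf = 1.0 - (tail / xmin) ** (-(alpha - 1.0))
        ks = np.max(np.abs(emp_cdf - fit_cdf))
        if ks < best[0]:
            best = (ks, alpha, xmin)
    return best[1], best[2]

rng = np.random.default_rng(1)
sample = (1 - rng.random(5000)) ** (-1.0 / 1.5)      # Pareto data, density exponent 2.5
print(plfit_exponent(sample))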
Lutz Duembgen, Jon A. Wellner (2015)
In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: if $f(t_0) > 0$, $f'(t_0) < 0$, and $f'$ is continuous in a neighborhood of $t_0$, then
\begin{eqnarray*}
\limsup_{n \rightarrow \infty} \left( \frac{n}{2 \log \log n} \right)^{1/3} \big( \widehat{f}_n(t_0) - f(t_0) \big) = \left| f(t_0) f'(t_0)/2 \right|^{1/3} 2M
\end{eqnarray*}
almost surely, where $M \equiv \sup_{g \in \mathcal{G}} T_g = (3/4)^{1/3}$ and $T_g \equiv \mathrm{argmax}_u \{ g(u) - u^2 \}$; here $\mathcal{G}$ is the two-sided Strassen limit set on $\mathbb{R}$. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom's switching relation, and properties of Strassen's limit set analogous to distributional properties of Brownian motion.
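For context, the Grenander estimator referred to above is the left derivative of the least concave majorant of the empirical distribution function. The sketch below computes it with a simple concave-hull scan; it illustrates the estimator itself, not the law of the iterated logarithm, and the function name and the exponential test sample are illustrative choices.

import numpy as np

def grenander(data):
    # Grenander estimator of a decreasing density on [0, inf): slopes of the
    # least concave majorant of the empirical CDF, one value per linear piece.
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    pts = [(0.0, 0.0)] + [(x[i], (i + 1) / n) for i in range(n)]
    hull = [pts[0]]
    for p in pts[1:]:
        # keep the chain concave: drop the last vertex whenever it lies
        # on or below the chord joining its predecessor and the new point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    knots = np.array([q[0] for q in hull])
    heights = np.array([q[1] for q in hull])
    return knots, np.diff(heights) / np.diff(knots)  # piecewise-constant density

rng = np.random.default_rng(3)
knots, values = grenander(rng.exponential(size=500))
print(values[:5])   # estimated density values nearest the origin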
Xin Gao, Grace Y. Yi (2012)
This paper investigates the property of the penalized estimating equations when both the mean and association structures are modelled. To select variables for the mean and association structures sequentially, we propose a hierarchical penalized generalized estimating equations (HPGEE2) approach. The first set of penalized estimating equations is solved for the selection of significant mean parameters. Conditional on the selected mean model, the second set of penalized estimating equations is solved for the selection of significant association parameters. The hierarchical approach is designed to accommodate possible model constraints relating the inclusion of covariates into the mean and the association models. This two-step penalization strategy enjoys a compelling advantage of easing computational burdens compared to solving the two sets of penalized equations simultaneously. HPGEE2 with a smoothly clipped absolute deviation (SCAD) penalty is shown to have the oracle property for the mean and association models. The asymptotic behavior of the penalized estimator under this hierarchical approach is established. An efficient two-stage penalized weighted least square algorithm is developed to implement the proposed method. The empirical performance of the proposed HPGEE2 is demonstrated through Monte-Carlo studies and the analysis of a clinical data set.
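For reference, the SCAD penalty mentioned above has a standard closed form (Fan and Li, 2001); the small sketch below evaluates it. This shows only the penalty function, not the HPGEE2 estimating-equation algorithm; a = 3.7 is the commonly used default.

import numpy as np

def scad_penalty(beta, lam, a=3.7):
    # SCAD penalty, elementwise: L1 near zero, a quadratic blend on
    # (lam, a*lam], and a constant cap beyond a*lam (requires a > 2).
    b = np.abs(np.asarray(beta, dtype=float))
    linear = lam * b
    quad = (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))
    const = lam ** 2 * (a + 1) / 2
    return np.where(b <= lam, linear, np.where(b <= a * lam, quad, const))

print(scad_penalty([0.1, 0.5, 2.0, 10.0], lam=0.5))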
The divide and conquer method is a common strategy for handling massive data. In this article, we study the divide and conquer method for cube-root-rate estimators under the massive data framework. We develop a general theory for establishing the asymptotic distribution of the aggregated M-estimators using a simple average. Under certain conditions on the growth rate of the number of subgroups, the resulting aggregated estimators are shown to have a faster convergence rate and an asymptotically normal distribution, which are more tractable in both computation and inference than the original M-estimators based on pooled data. Our theory applies to a wide class of M-estimators with cube-root convergence rate, including the location estimator, the maximum score estimator and the value search estimator. Empirical performance via simulations also validates our theoretical findings.
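A minimal sketch of the aggregation scheme described above: split the sample into subgroups, compute a cube-root-rate M-estimator on each (here a Chernoff-type mode/location estimator that maximizes the number of observations within a fixed window), and average the subgroup estimates. The grid search, window width, and group count are illustrative simplifications rather than the paper's implementation.

import numpy as np

def mode_location(x, h=1.0):
    # Chernoff-type location estimator: the point t maximizing the number
    # of observations within distance h (a cube-root-rate M-estimator).
    grid = np.linspace(x.min(), x.max(), 400)
    counts = [(np.abs(x - t) <= h).sum() for t in grid]
    return grid[int(np.argmax(counts))]

def divide_and_conquer(x, n_groups=20, h=1.0):
    # Aggregate by a simple average of the subgroup estimators.
    return float(np.mean([mode_location(g, h) for g in np.array_split(x, n_groups)]))

rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=1.0, size=20000)
print(divide_and_conquer(x))   # compare with the pooled-data estimate mode_location(x)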
