
Relation between the rate of convergence of strong law of large numbers and the rate of concentration of Bayesian prior in game-theoretic probability

Added by Akimichi Takemura
Publication date: 2016
Language: English


We study the behavior of the capital process of a continuous Bayesian mixture of fixed-proportion betting strategies in the one-sided unbounded forecasting game in game-theoretic probability. We establish the relation between the rate of convergence of the strong law of large numbers in the self-normalized form and the rate of divergence to infinity of the prior density around the origin. In particular, we present prior densities ensuring the validity of the Erdős–Feller–Kolmogorov–Petrowsky law of the iterated logarithm.
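The mixture capital process described above can be sketched numerically. The following is a minimal illustration, not the paper's construction: each constant-proportion strategy bets a fixed fraction γ of its current capital on every round, and the Bayesian mixture integrates the resulting capital processes against a prior density on γ that diverges at the origin (here like γ^{-1/2}). The outcome distribution, the grid, and all names are illustrative assumptions.

```python
import numpy as np

def mixture_capital(outcomes, prior_density, gammas):
    """Capital path of a continuous Bayesian mixture of
    constant-proportion betting strategies.

    For a fixed proportion gamma, capital after outcomes x_1..x_n is
    prod_i (1 + gamma * x_i).  The mixture integrates this over gamma
    against the prior density (Riemann sum on a uniform grid).
    """
    # log-capital for each fixed gamma, accumulated over rounds
    log_caps = np.cumsum(np.log1p(np.outer(outcomes, gammas)), axis=0)
    caps = np.exp(log_caps)          # shape (n_rounds, n_gammas)
    w = prior_density(gammas)        # prior weights on the gamma grid
    dg = gammas[1] - gammas[0]
    return caps @ w * dg             # mixture capital at each round

rng = np.random.default_rng(0)
# Illustrative outcomes: mean zero, bounded so 1 + gamma*x stays positive.
x = rng.uniform(-0.5, 0.5, size=1000)
gammas = np.linspace(1e-4, 0.99, 400)
prior = lambda g: 0.5 / np.sqrt(g)   # density diverging like g^{-1/2} at 0

capital = mixture_capital(x, prior, gammas)
print(capital[-1])                   # final mixture capital
```

The prior `0.5 / sqrt(g)` integrates to 1 over (0, 1); sharper divergence at the origin puts more weight on cautious strategies, which is the knob the paper relates to the convergence rate.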

Related research


This short note provides a new and simple proof of the convergence rate for Peng's law of large numbers under sublinear expectations, which improves the corresponding results in Song [15] and Fang et al. [3].
Let $X$ be the branching particle diffusion corresponding to the operator $Lu+\beta(u^{2}-u)$ on $D\subseteq\mathbb{R}^{d}$ (where $\beta\geq 0$ and $\beta\not\equiv 0$). Let $\lambda_{c}$ denote the generalized principal eigenvalue for the operator $L+\beta$ on $D$ and assume that it is finite. When $\lambda_{c}>0$ and $L+\beta-\lambda_{c}$ satisfies certain spectral theoretical conditions, we prove that the random measure $\exp\{-\lambda_{c}t\}X_{t}$ converges almost surely in the vague topology as $t$ tends to infinity. This result is motivated by a cluster of articles due to Asmussen and Hering dating from the mid-seventies as well as the more recent work concerning analogous results for superdiffusions of \cite{ET,EW}. We extend significantly the results in \cite{AH76,AH77} and include some key examples of the branching process literature. As far as the proofs are concerned, we appeal to modern techniques concerning martingales and `spine decompositions' or `immortal particle pictures'.
We consider a totally monotone capacity on a Polish space and a sequence of bounded p.i.i.d. random variables. We show that, on a full set, any cluster point of empirical averages lies between the lower and the upper Choquet integrals of the random variables, provided either the random variables or the capacity are continuous.
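The conclusion above can be illustrated with a simple simulation under the credal-set view of a capacity: if each round's mean may be chosen arbitrarily from an interval $[\ell, u]$, then $\ell$ and $u$ play the role of the lower and upper Choquet integrals of the identity, and long-run empirical averages settle between them. This is an illustrative sketch, not the paper's setting; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ambiguity: each X_i has a mean picked from [lo, hi];
# lo and hi stand in for the lower/upper Choquet integrals.
lo, hi = -0.2, 0.3
n = 20_000
means = rng.uniform(lo, hi, size=n)          # one admissible choice of means
x = means + rng.uniform(-0.5, 0.5, size=n)   # bounded noise around each mean
avg = np.cumsum(x) / np.arange(1, n + 1)     # running empirical averages

tail = avg[n // 2:]                          # late-stage averages
print(tail.min(), tail.max())                # typically lies within [lo, hi]
```

Different admissible choices of the means move the limit points around inside $[\ell, u]$, matching the statement that every cluster point of the empirical averages lies between the two Choquet integrals.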
Taiji Suzuki, 2014
In this paper, we investigate the statistical convergence rate of a Bayesian low-rank tensor estimator. Our problem setting is the regression problem where a tensor structure underlying the data is estimated. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a near optimal rate is achieved without any strong convexity of the observation. Moreover, we show that the method has adaptivity to the unknown rank of the true tensor, that is, the near optimal rate depending on the true rank is achieved even if it is not known a priori.
