
The best constant in the Khinchine inequality for slightly dependent random variables

Added by Susanna Spektor
Publication date: 2018
Language: English





We compute the best constant in the Khinchine inequality under the assumption that the sum of the Rademacher random variables is zero.
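
For context, the classical unconstrained Khintchine inequality that this result refines is the following standard statement (textbook form, not quoted from the paper):

```latex
% Khintchine inequality: for i.i.d. Rademacher signs
% \varepsilon_1, \dots, \varepsilon_n with
% \mathbb{P}(\varepsilon_i = \pm 1) = 1/2 and real coefficients
% a_1, \dots, a_n, there are constants A_p, B_p > 0 such that
\[
  A_p \Big( \sum_{i=1}^{n} a_i^2 \Big)^{1/2}
  \;\le\;
  \Big( \mathbb{E} \Big| \sum_{i=1}^{n} a_i \varepsilon_i \Big|^{p} \Big)^{1/p}
  \;\le\;
  B_p \Big( \sum_{i=1}^{n} a_i^2 \Big)^{1/2},
  \qquad 0 < p < \infty.
\]
% The paper determines the best constant under the extra constraint
% \sum_{i=1}^{n} \varepsilon_i = 0, which makes the signs slightly
% dependent rather than independent.
```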




Related research

We consider the problem of bounding large deviations for non-i.i.d. random variables that are allowed to have arbitrary dependencies. Previous works typically assumed a specific dependence structure, namely the existence of independent components. Bounds that depend on the degree of dependence between the observations have only been studied in the theory of mixing processes, where variables are time-ordered. Here, we introduce a new way of measuring dependence within an unordered set of variables. We prove concentration inequalities that apply to any set of random variables but benefit from the presence of weak dependencies. We also discuss applications and extensions of our results to related problems of machine learning and large deviations.
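
As a purely illustrative sketch (not the authors' method; the block-copy construction and all parameter values are assumptions made for the demo), the following compares how the mean of $n$ Rademacher variables concentrates when they are independent versus when they contain duplicated blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 100, 20_000, 0.2

# Independent case: n i.i.d. Rademacher variables per trial.
indep = rng.choice([-1, 1], size=(trials, n))

# Dependent case: the same n variables, but produced from n // 5
# independent sources, each copied 5 times (block dependence).
sources = rng.choice([-1, 1], size=(trials, n // 5))
dep = np.repeat(sources, 5, axis=1)

for name, sample in [("independent", indep), ("block-dependent", dep)]:
    dev = np.abs(sample.mean(axis=1))
    print(f"{name:>15}: P(|mean| > {t}) ~ {np.mean(dev > t):.4f}")
```

The dependent sample exceeds the threshold far more often, which is exactly the degradation that dependence-aware concentration bounds quantify.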
L. Bayon, P. Fortuny, J.M. Grau (2018)
In this paper we consider two variants of the Secretary problem: the Best-or-Worst and the Postdoc problems. We extend previous work by considering that the number of objects is not known and follows either a discrete uniform distribution $\mathcal{U}[1,n]$ or a Poisson distribution $\mathcal{P}(\lambda)$. We show that in either case the optimal strategy is a threshold strategy, and we provide the optimal cutoff values and the asymptotic probabilities of success. We also put our results in relation with closely related work.
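
For intuition, here is a minimal Monte Carlo sketch of the threshold strategy in the classical secretary problem with a known number of candidates $n$ (the paper's random-$n$ setting is more involved); the cutoff $\lfloor n/e \rfloor$ and the roughly $1/e$ success probability are classical facts for the known-$n$ case:

```python
import math
import random

def secretary_success(n, cutoff, trials=20_000, seed=1):
    """Monte Carlo success rate of the threshold rule: skip the first
    `cutoff` candidates, then accept the first one better than all
    seen so far; success means the accepted candidate is the best."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))              # 0 = best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:cutoff], default=n)
        for r in ranks[cutoff:]:
            if r < best_seen:               # first record after the cutoff
                wins += (r == 0)            # accepted: did we get the best?
                break
    return wins / trials

n = 50
print(secretary_success(n, round(n / math.e)))  # ~ 0.37, i.e. about 1/e
```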
We extend Fano's inequality, which controls the average probability of events in terms of the average of some $f$-divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary $[0,1]$-valued random variables, possibly continuously infinite in number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs of existing lower bounds on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning.
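
For reference, one classical special case that such extensions generalize is the textbook form of Fano's inequality (standard statement, not quoted from the paper):

```latex
% Fano's inequality: if X is uniform on a finite set \mathcal{X} and
% \hat{X} is any estimator of X based on an observation Y, then
% (with natural logarithms)
\[
  \mathbb{P}\big( \hat{X} \neq X \big)
  \;\ge\;
  1 - \frac{I(X;Y) + \log 2}{\log |\mathcal{X}|}.
\]
% The paper replaces the partition structure implicit in such bounds
% by arbitrary events, and events by [0,1]-valued random variables.
```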
Let $\{X_k\}$ be a martingale difference sequence and let $\{Y_k\}$ be another sequence that may have dependencies within itself. Assuming that $\{X_k\}$ is independent of $\{Y_k\}$, we study the properties of the sums of products of the two sequences, $\sum_{k=1}^{n} X_k Y_k$. We obtain a product-CLT, a modification of the classical central limit theorem, which can be useful in the study of random projections. We also obtain a rate of convergence similar to that of the Berry-Esseen theorem in the classical CLT.
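
A small simulation makes the statement concrete (an illustration under assumed choices, not the paper's construction): take $X_k$ to be i.i.d. Rademacher signs, a simple martingale difference sequence, and $Y_k$ an AR(1) sequence, dependent within itself but independent of $X$. Conditionally on $Y$, the sum $\sum_{k=1}^{n} X_k Y_k$ has variance $\sum_{k=1}^{n} Y_k^2$, which motivates the normalization below; the normalized sum should look approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 2_000, 5_000

# X_k: i.i.d. Rademacher signs -- a simple martingale difference sequence.
X = rng.choice([-1.0, 1.0], size=(trials, n))

# Y_k: an AR(1) sequence (dependent within itself), independent of X.
phi = 0.8
noise = rng.standard_normal((trials, n))
Y = np.zeros((trials, n))
for k in range(1, n):
    Y[:, k] = phi * Y[:, k - 1] + noise[:, k]

# Normalize sum X_k Y_k by its conditional standard deviation given Y.
S = (X * Y).sum(axis=1) / np.sqrt((Y ** 2).sum(axis=1))
print("mean ~", round(S.mean(), 3), " var ~", round(S.var(), 3))  # ~ 0, ~ 1
```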
The best constants of two kinds of discrete Sobolev inequalities on the C60 fullerene buckyball are obtained. All the eigenvalues of the discrete Laplacian $A$ corresponding to the buckyball are found; they are roots of algebraic equations of degree at most $4$ with integer coefficients. The Green matrix $G(a)=(A+aI)^{-1}$ $(0<a<\infty)$ and the pseudo Green matrix $G_*=A^{\dagger}$ are obtained using the computer software Mathematica. The diagonal values of $G_*$ and $G(a)$ are identical, and they are equal to the best constants of the discrete Sobolev inequalities.
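
The Green-matrix computation itself is mechanical once the graph is fixed. In the sketch below the 6-cycle stands in for the 60-vertex buckyball graph (whose adjacency structure is too long to inline, so this toy substitute is an assumption); it computes the two objects named in the abstract, $G(a)=(A+aI)^{-1}$ and the pseudo Green matrix $G_*=A^{\dagger}$:

```python
import numpy as np

# Toy stand-in: the 6-cycle graph instead of the C60 buckyball.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

A = np.diag(adj.sum(axis=1)) - adj     # discrete Laplacian (the paper's A)

a = 1.0
G = np.linalg.inv(A + a * np.eye(n))   # Green matrix G(a) = (A + aI)^{-1}
G_star = np.linalg.pinv(A)             # pseudo Green matrix G_* = A^dagger

# On a vertex-transitive graph all diagonal entries coincide; these
# diagonal values are what the paper identifies with the best constants.
print(np.diag(G))
print(np.diag(G_star))
```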