
Obtaining Measure Concentration from Markov Contraction

Added by Aryeh Kontorovich
Publication date: 2012
Language: English

Concentration bounds for non-product, non-Haar measures are fairly recent: the first such result was obtained for contracting Markov chains by Marton in 1996 via the coupling method. The work that followed, with few exceptions, also used coupling. Although this technique is of unquestionable utility as a theoretical tool, it is not always simple to apply. As an alternative to coupling, we use the elementary Markov contraction lemma to obtain simple, useful, and apparently novel concentration results for various Markov-type processes. Our technique consists of expressing probabilities as matrix products and applying Markov contraction to these expressions; thus it is fairly general and holds the potential to yield further results in this vein.
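To make the contraction step concrete: the Markov contraction (Dobrushin) lemma states that for a row-stochastic matrix P and probability row vectors u, v one has ||uP - vP||_1 <= theta(P) ||u - v||_1, where theta(P) is half the largest L1 distance between two rows of P. The minimal Python sketch below (not from the paper; the function names and the random example are ours) checks the inequality numerically.

```python
# Minimal numerical sketch of the Markov contraction (Dobrushin) lemma,
# the ingredient the abstract applies to matrix-product expressions of
# probabilities.  Names and the random example are ours, not the paper's.
import numpy as np

def contraction_coefficient(P):
    """Dobrushin coefficient: half the largest L1 distance between rows of P."""
    n = P.shape[0]
    return max(0.5 * np.abs(P[i] - P[j]).sum() for i in range(n) for j in range(n))

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # row-stochastic matrix
u = rng.random(n); u /= u.sum()                              # two probability vectors
v = rng.random(n); v /= v.sum()

theta = contraction_coefficient(P)
lhs = np.abs(u @ P - v @ P).sum()
rhs = theta * np.abs(u - v).sum()
print(f"theta(P) = {theta:.3f}")
print(f"||uP - vP||_1 = {lhs:.3f} <= theta * ||u - v||_1 = {rhs:.3f}")
assert lhs <= rhs + 1e-12
```

Iterating the inequality gives ||uP^t - vP^t||_1 <= theta(P)^t ||u - v||_1; when theta(P) < 1 (for instance, when all entries of P are strictly positive) this geometric decay is what drives concentration bounds of the kind described above.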




Read More

Djalil Chafai, 2009
Mixtures are convex combinations of laws. Despite this simple definition, a mixture can be far more subtle than its mixed components. For instance, mixing Gaussian laws may produce a potential with multiple deep wells. We study in the present work fine properties of mixtures with respect to concentration of measure and Sobolev type functional inequalities. We provide sharp Laplace bounds for Lipschitz functions in the case of generic mixtures, involving a transportation cost diameter of the mixed family. Additionally, our analysis of Sobolev type inequalities for two-component mixtures reveals natural relations with some kind of band isoperimetry and support constrained interpolation via mass transportation. We show that the Poincare constant of a two-component mixture may remain bounded as the mixture proportion goes to 0 or 1 while the logarithmic Sobolev constant may surprisingly blow up. This counter-intuitive result is not reducible to support disconnections, and appears as a reminiscence of the variance-entropy comparison on the two-point space. As far as mixtures are concerned, the logarithmic Sobolev inequality is less stable than the Poincare inequality and the sub-Gaussian concentration for Lipschitz functions. We illustrate our results on a gallery of concrete two-component mixtures. This work leads to many open questions.
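As a quick illustration of the opening remark that mixing Gaussian laws may produce a potential with multiple deep wells, the short sketch below (parameters are arbitrary choices of ours, not taken from the paper) evaluates the potential V(x) = -log(density) of a symmetric two-component Gaussian mixture and shows two wells separated by a barrier.

```python
# Sketch: the potential V(x) = -log(density) of a two-component Gaussian
# mixture develops two wells separated by a barrier once the means are far
# apart relative to the standard deviation.  Parameters are arbitrary.
import numpy as np

def mixture_density(x, p=0.5, m1=-3.0, m2=3.0, s=1.0):
    g = lambda m: np.exp(-(x - m) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)
    return p * g(m1) + (1 - p) * g(m2)

x = np.linspace(-6, 6, 241)
V = -np.log(mixture_density(x))
# Local minima near the two means, a local maximum (barrier) near the midpoint:
print("V(-3) =", round(V[np.argmin(np.abs(x + 3))], 2))
print("V( 0) =", round(V[np.argmin(np.abs(x))], 2))
print("V(+3) =", round(V[np.argmin(np.abs(x - 3))], 2))
```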
J.-R. Chazottes, F. Redig, 2010
We obtain moment and Gaussian bounds for general Lipschitz functions evaluated along the sample path of a Markov chain. We treat Markov chains on general (possibly unbounded) state spaces via a coupling method. If the first moment of the coupling time exists, then we obtain a variance inequality. If a moment of order 1+epsilon of the coupling time exists, then depending on the behavior of the stationary distribution, we obtain higher moment bounds. This immediately implies polynomial concentration inequalities. In the case that a moment of order 1+epsilon is finite uniformly in the starting point of the coupling, we obtain a Gaussian bound. We illustrate the general results with house of cards processes, in which both uniform and non-uniform behavior of moments of the coupling time can occur.
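The abstract does not spell out the house-of-cards dynamics; under the common convention that the chain on {0, 1, 2, ...} climbs from n to n+1 with probability q_n and collapses back to 0 otherwise, a rough simulation of a coalescing coupling and its coupling time could look as follows (the chain parameters and the coupling construction are our assumptions, not the authors' method).

```python
# Sketch (conventions ours): a "house of cards" chain on {0, 1, 2, ...} moves
# from state n to n+1 with probability q_n and collapses back to 0 otherwise.
# Two copies driven by the same uniform variables form a coalescing coupling;
# we estimate the mean coupling time for q_n = 0.8 ** (n + 1).
import numpy as np

def step(state, u, q):
    return state + 1 if u < q(state) else 0

def coupling_time(x, y, q, rng, max_steps=100_000):
    for t in range(1, max_steps + 1):
        u = rng.random()            # shared randomness drives both copies
        x, y = step(x, u, q), step(y, u, q)
        if x == y:                  # once equal, the copies agree forever
            return t
    return max_steps

rng = np.random.default_rng(1)
q = lambda n: 0.8 ** (n + 1)        # climbing gets harder with height
times = [coupling_time(0, 5, q, rng) for _ in range(5_000)]
print("estimated mean coupling time:", round(float(np.mean(times)), 2))
```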
A central tool in the study of nonhomogeneous random matrices, the noncommutative Khintchine inequality of Lust-Piquard and Pisier, yields a nonasymptotic bound on the spectral norm of general Gaussian random matrices $X=\sum_i g_i A_i$ where $g_i$ are independent standard Gaussian variables and $A_i$ are matrix coefficients. This bound exhibits a logarithmic dependence on dimension that is sharp when the matrices $A_i$ commute, but often proves to be suboptimal in the presence of noncommutativity. In this paper, we develop nonasymptotic bounds on the spectrum of arbitrary Gaussian random matrices that can capture noncommutativity. These bounds quantify the degree to which the deterministic matrices $A_i$ behave as though they are freely independent. This intrinsic freeness phenomenon provides a powerful tool for the study of various questions that are outside the reach of classical methods of random matrix theory. Our nonasymptotic bounds are easily applicable in concrete situations, and yield sharp results in examples where the noncommutative Khintchine inequality is suboptimal. When combined with a linearization argument, our bounds imply strong asymptotic freeness (in the sense of Haagerup-Thorbjørnsen) for a remarkably general class of Gaussian random matrix models, including matrices that may be very sparse and that lack any special symmetries. Beyond the Gaussian setting, we develop matrix concentration inequalities that capture noncommutativity for general sums of independent random matrices, which arise in many problems of pure and applied mathematics.
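For a feel of the scales involved, the sketch below (model choice ours, not the paper's) takes the $A_i$ to be the elementary matrix units, so that X is a d x d matrix with i.i.d. standard Gaussian entries, and compares the observed spectral norm with the matrix-variance scale sigma and with the dimension-dependent factor appearing in the noncommutative Khintchine bound.

```python
# Sketch (model ours): X = sum_i g_i A_i with the A_i the elementary matrix
# units E_{jk}, so X is a d x d matrix with i.i.d. standard Gaussian entries.
# We compare the observed spectral norm with sigma = ||sum_i A_i A_i^T||^{1/2}
# and with the dimension-dependent noncommutative Khintchine scale.
import numpy as np

rng = np.random.default_rng(0)
d = 200
X = rng.standard_normal((d, d))          # equals sum_{j,k} g_{jk} E_{jk}

sigma = np.sqrt(d)                       # here sum_i A_i A_i^T = d * I
spec_norm = np.linalg.norm(X, 2)         # largest singular value
print(f"||X||                ~ {spec_norm:.1f}")
print(f"2 * sigma            = {2 * sigma:.1f}")
print(f"sigma * sqrt(2log2d) = {sigma * np.sqrt(2 * np.log(2 * d)):.1f}")
```

For d = 200 the observed norm comes out close to 2 * sigma (roughly 28), well below the Khintchine scale of roughly 49; this gap is the kind of noncommutativity effect that the intrinsic freeness bounds are designed to capture.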
We consider a piecewise-deterministic Markov process (PDMP) with a general conditional distribution of the inter-occurrence times, called a general PDMP here. Our purpose is to establish a theory of measure-valued generators for general PDMPs. We first introduce the additive functional of a semi-dynamic system (SDS), which serves as the analytic tool for the whole paper. The additive functionals of a general PDMP are represented in terms of additive functionals of the SDS, and necessary and sufficient conditions for them to be local martingales or special semimartingales are given. The measure-valued generator of a general PDMP is then introduced; it takes values in the space of additive functionals of the SDS, and its domain is completely described by analytic conditions. The domain is further extended to locally (path-)finite variation functions. As an application of the measure-valued generator, we study the expected cumulative discounted value of an additive functional of the general PDMP and derive a measure integro-differential equation satisfied by this value function.
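For readers who prefer a concrete object, here is one toy instance of a general PDMP (the flow, waiting-time law, and jump kernel are arbitrary choices of ours, not the paper's construction): it flows deterministically between jumps and uses a non-exponential inter-occurrence distribution.

```python
# Sketch (dynamics ours, for illustration only): a scalar PDMP that flows
# along dx/dt = -x between jumps, waits a Weibull-distributed (hence
# non-exponential) inter-occurrence time, and jumps by adding an independent
# standard Gaussian.  One concrete instance of the "general PDMP" setting.
import numpy as np

def simulate_pdmp(x0=1.0, horizon=10.0, rng=None):
    rng = rng or np.random.default_rng()
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        s = rng.weibull(1.5)                 # general (non-exponential) waiting time
        if t + s > horizon:
            path.append((horizon, x * np.exp(-(horizon - t))))
            return path
        t += s
        x = x * np.exp(-s)                   # deterministic flow over the waiting time
        x = x + rng.standard_normal()        # jump according to the transition kernel
        path.append((t, x))

print(simulate_pdmp(rng=np.random.default_rng(2))[:5])
```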
We study a sequence of symmetric $n$-player stochastic differential games driven by both idiosyncratic and common sources of noise, in which players interact with each other through their empirical distribution. The unique Nash equilibrium empirical measure of the $n$-player game is known to converge, as $n$ goes to infinity, to the unique equilibrium of an associated mean field game. Under suitable regularity conditions, in the absence of common noise, we complement this law of large numbers result with non-asymptotic concentration bounds for the Wasserstein distance between the $n$-player Nash equilibrium empirical measure and the mean field equilibrium. We also show that the sequence of Nash equilibrium empirical measures satisfies a weak large deviation principle, which can be strengthened to a full large deviation principle only in the absence of common noise. For both sets of results, we first use the master equation, an infinite-dimensional partial differential equation that characterizes the value function of the mean field game, to construct an associated McKean-Vlasov interacting $n$-particle system that is exponentially close to the Nash equilibrium dynamics of the $n$-player game for large $n$, by refining estimates obtained in our companion paper. Then we establish a weak large deviation principle for McKean-Vlasov systems in the presence of common noise. In the absence of common noise, we upgrade this to a full large deviation principle and obtain new concentration estimates for McKean-Vlasov systems. Finally, in two specific examples that do not satisfy the assumptions of our main theorems, we show how to adapt our methodology to establish large deviations and concentration results.
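The small sketch below (model and parameters ours, far simpler than the game studied in the paper) simulates a McKean-Vlasov interacting particle system without common noise and estimates the Wasserstein-1 distance between the n-particle empirical measure and a large-n reference sample, illustrating the flavour of the concentration statement.

```python
# Sketch (model ours): an Euler scheme for a simple McKean-Vlasov interacting
# particle system without common noise, dX_i = -(X_i - mean(X)) dt + dW_i,
# and a Monte Carlo estimate of the 1-d Wasserstein-1 distance between the
# n-particle empirical measure and a large-n reference sample.
import numpy as np

def simulate(n, T=1.0, dt=0.01, rng=None):
    rng = rng or np.random.default_rng()
    x = np.zeros(n)
    for _ in range(int(T / dt)):
        x += -(x - x.mean()) * dt + np.sqrt(dt) * rng.standard_normal(n)
    return x

def wasserstein1(a, b):
    # For equal-size 1-d samples, W1 is the mean absolute gap of the sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(3)
reference = simulate(20_000, rng=rng)            # stand-in for the mean field limit
for n in (100, 1_000, 10_000):
    sample = simulate(n, rng=rng)
    # subsample the reference to equal size for the sorted-sample formula
    d = wasserstein1(sample, rng.choice(reference, size=n, replace=False))
    print(f"n = {n:>6}:  W1(empirical, reference) ~ {d:.3f}")
```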