
On local convexity in $\mathbb{L}^0$ and switching probability measures

 Added by Niushan Gao
 Publication date 2019
 Research language: English





In this paper, we investigate the following fundamental question: for a set $\mathcal{K}$ in $\mathbb{L}^0(\mathbb{P})$, when does there exist an equivalent probability measure $\mathbb{Q}$ such that $\mathcal{K}$ is uniformly integrable in $\mathbb{L}^1(\mathbb{Q})$? Specifically, let $\mathcal{K}$ be a convex, bounded, positive set in $\mathbb{L}^1(\mathbb{P})$. Kardaras [6] asked the following two questions: (1) If the relative $\mathbb{L}^0(\mathbb{P})$-topology is locally convex on $\mathcal{K}$, does there exist $\mathbb{Q}\sim\mathbb{P}$ such that the $\mathbb{L}^0(\mathbb{Q})$- and $\mathbb{L}^1(\mathbb{Q})$-topologies agree on $\mathcal{K}$? (2) If $\mathcal{K}$ is closed in the $\mathbb{L}^0(\mathbb{P})$-topology and there exists $\mathbb{Q}\sim\mathbb{P}$ such that the $\mathbb{L}^0(\mathbb{Q})$- and $\mathbb{L}^1(\mathbb{Q})$-topologies agree on $\mathcal{K}$, does there exist $\mathbb{Q}\sim\mathbb{P}$ such that $\mathcal{K}$ is $\mathbb{Q}$-uniformly integrable? We show that, regardless of whether $\mathcal{K}$ is positive, the first question has a negative answer in general, while the second has a positive answer. Beyond answering these questions, we establish probabilistic and topological characterizations of the existence of $\mathbb{Q}\sim\mathbb{P}$ with these desired properties. We also investigate the peculiar effects of $\mathcal{K}$ being positive.
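For reference, the standard definition of the central property in the abstract (this recollection is ours, not part of the paper's statement): a set $\mathcal{K}\subseteq\mathbb{L}^1(\mathbb{Q})$ is $\mathbb{Q}$-uniformly integrable if

$$\lim_{c\to\infty}\;\sup_{f\in\mathcal{K}}\ \mathbb{E}_{\mathbb{Q}}\!\left[|f|\,\mathbf{1}_{\{|f|>c\}}\right] = 0.$$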




Read More

Nathael Gozlan, 2012
We introduce the notion of an interpolating path on the set of probability measures on finite graphs. Using this notion, we first prove a displacement convexity property of entropy along such a path and derive Prékopa-Leindler type inequalities, a Talagrand transport-entropy inequality, and certain HWI-type as well as log-Sobolev-type inequalities in discrete settings. To illustrate through examples, we apply our results to the complete graph and to the hypercube, for which our results are optimal -- by passing to the limit, we recover the classical log-Sobolev inequality for the standard Gaussian measure with the optimal constant.
A central tool in the study of nonhomogeneous random matrices, the noncommutative Khintchine inequality of Lust-Piquard and Pisier, yields a nonasymptotic bound on the spectral norm of general Gaussian random matrices $X=\sum_i g_i A_i$, where the $g_i$ are independent standard Gaussian variables and the $A_i$ are matrix coefficients. This bound exhibits a logarithmic dependence on dimension that is sharp when the matrices $A_i$ commute, but often proves to be suboptimal in the presence of noncommutativity. In this paper, we develop nonasymptotic bounds on the spectrum of arbitrary Gaussian random matrices that can capture noncommutativity. These bounds quantify the degree to which the deterministic matrices $A_i$ behave as though they are freely independent. This intrinsic freeness phenomenon provides a powerful tool for the study of various questions that are outside the reach of classical methods of random matrix theory. Our nonasymptotic bounds are easily applicable in concrete situations, and yield sharp results in examples where the noncommutative Khintchine inequality is suboptimal. When combined with a linearization argument, our bounds imply strong asymptotic freeness (in the sense of Haagerup-Thorbjørnsen) for a remarkably general class of Gaussian random matrix models, including matrices that may be very sparse and that lack any special symmetries. Beyond the Gaussian setting, we develop matrix concentration inequalities that capture noncommutativity for general sums of independent random matrices, which arise in many problems of pure and applied mathematics.
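To make the model in this abstract concrete, here is a small NumPy sketch (dimensions and coefficient choices are illustrative, not from the paper) that samples $X=\sum_i g_i A_i$ for a commuting and a noncommuting family of coefficients $A_i$ and computes the spectral norms:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_matrix(coeffs, rng):
    """Sample X = sum_i g_i * A_i with i.i.d. standard Gaussian g_i.

    `coeffs` is a list of fixed d x d matrices A_i, the deterministic
    coefficients from the model in the abstract.
    """
    g = rng.standard_normal(len(coeffs))
    return sum(gi * Ai for gi, Ai in zip(g, coeffs))

d = 16
e = np.eye(d)
# Commuting family: diagonal matrix units, so X is diagonal with entries g_i.
diag_coeffs = [np.diag(e[i]) for i in range(d)]
# Noncommuting family: off-diagonal matrix units E_{i, i+1 mod d}.
offdiag_coeffs = [np.outer(e[i], e[(i + 1) % d]) for i in range(d)]

X_comm = gaussian_matrix(diag_coeffs, rng)
X_free = gaussian_matrix(offdiag_coeffs, rng)

# Spectral norms (largest singular value).  In the commuting case the norm
# equals max_i |g_i|, which grows like sqrt(2 log d) -- the logarithmic
# dimension dependence that the Khintchine bound captures sharply.
norm_comm = np.linalg.norm(X_comm, 2)
norm_free = np.linalg.norm(X_free, 2)
```

In the diagonal case the spectral norm can be checked directly against `max |g_i|`; the off-diagonal family illustrates the noncommutative regime where, per the abstract, the classical bound may be loose.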
This paper concerns the approximation of probability measures on $\mathbf{R}^d$ with respect to the Kullback-Leibler divergence. Given an admissible target measure, we show the existence of the best approximation, with respect to this divergence, from certain sets of Gaussian measures and Gaussian mixtures. The asymptotic behavior of such best approximations is then studied in the small-parameter limit where the measure concentrates; this asymptotic behavior is characterized using $\Gamma$-convergence. The theory developed is then applied to understanding the frequentist consistency of Bayesian inverse problems. For a fixed realization of the noise, we show the asymptotic normality of the posterior measure in the small-noise limit. Taking into account the randomness of the noise, we prove a Bernstein-von Mises type result for the posterior measure.
Patrick Cattiaux, 2018
The goal of this paper is to push forward the study of those properties of log-concave measures that help to estimate their Poincaré constant. First we revisit E. Milman's result [40] on the link between weak (Poincaré or concentration) inequalities and Cheeger's inequality in the log-concave case, in particular extending localization ideas and a result of Latała, as well as providing a simpler proof of the nice (dimensional) Poincaré bound in the unconditional case. Then we prove alternative transference principles by concentration or using various distances (total variation, Wasserstein). A mollification procedure is also introduced, enabling one, in the log-concave case, to reduce to the Poincaré inequality for the mollified measure. We finally complete the transference section with a comparison of various probability metrics (Fortet-Mourier, bounded-Lipschitz, ...).
Let $p(\cdot)\colon \mathbb{R}^n\to(0,\infty)$ be a variable exponent function satisfying the globally log-Hölder continuity condition. In this article, the authors first obtain a decomposition of any distribution in the variable weak Hardy space into good and bad parts, and then prove the following real interpolation theorem between the variable Hardy space $H^{p(\cdot)}(\mathbb{R}^n)$ and the space $L^{\infty}(\mathbb{R}^n)$: $$\left(H^{p(\cdot)}(\mathbb{R}^n),L^{\infty}(\mathbb{R}^n)\right)_{\theta,\infty} = W\!H^{p(\cdot)/(1-\theta)}(\mathbb{R}^n),\quad \theta\in(0,1),$$ where $W\!H^{p(\cdot)/(1-\theta)}(\mathbb{R}^n)$ denotes the variable weak Hardy space. As an application, the variable weak Hardy space $W\!H^{p(\cdot)}(\mathbb{R}^n)$ with $p_-:=\mathop{\mathrm{ess\,inf}}_{x\in\mathbb{R}^n}p(x)\in(1,\infty)$ is proved to coincide with the variable weak Lebesgue space $W\!L^{p(\cdot)}(\mathbb{R}^n)$.