
1--Meixner random vectors

Added by Aurel Stan
Publication date: 2020
Research language: English

A definition of $d$--dimensional $n$--Meixner random vectors is given first. This definition involves the commutators of their semi--quantum operators. We then focus on $1$--Meixner random vectors and derive a system of $d$ partial differential equations satisfied by their Laplace transform. We provide a set of necessary conditions for this system to be integrable, and use these conditions to give a complete characterization of all non--degenerate three--dimensional $1$--Meixner random vectors. It must be mentioned that the three--dimensional case produces the first example in which the components of a $1$--Meixner random vector cannot be reduced, via an injective linear transformation, to three independent classic Meixner random variables.
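For orientation, the Laplace transform referred to above is the joint moment generating function of the random vector; the sketch below only records that object under the usual convention, and does not reproduce the PDE system derived in the paper.

```latex
% Joint Laplace transform of a d-dimensional random vector X = (X_1, ..., X_d),
% assumed finite for t in a neighborhood of the origin.
\varphi(t_1, \dots, t_d) \;=\; E\!\left[ \exp\!\left( \sum_{i=1}^{d} t_i X_i \right) \right],
\qquad t = (t_1, \dots, t_d) \in \mathbb{R}^d .
```

The system of $d$ partial differential equations obtained in the paper constrains the partial derivatives $\partial \varphi / \partial t_i$; its precise form, and the integrability conditions, are given in the paper itself.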



Related research

156 - Elizabeth Meckes 2009
Let $X$ be a $d$-dimensional random vector and $X_\theta$ its projection onto the span of a set of orthonormal vectors $\{\theta_1,\dots,\theta_k\}$. Conditions on the distribution of $X$ are given such that if $\theta$ is chosen according to Haar measure on the Stiefel manifold, the bounded-Lipschitz distance from $X_\theta$ to a Gaussian distribution is concentrated at its expectation; furthermore, an explicit bound is given for the expected distance, in terms of $d$, $k$, and the distribution of $X$, allowing consideration not just of fixed $k$ but of $k$ growing with $d$. The results are applied in the setting of projection pursuit, showing that most $k$-dimensional projections of $n$ data points in $\mathbb{R}^d$ are close to Gaussian, when $n$ and $d$ are large and $k = c\sqrt{\log d}$ for a small constant $c$.
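For context, the two objects in this abstract can be written out explicitly; the formulas below are standard definitions (one common normalization of the bounded-Lipschitz distance), not quotations from the paper.

```latex
% Projection of X onto the span of the orthonormal frame theta_1, ..., theta_k,
% written in the coordinates of that frame.
X_\theta \;=\; \bigl( \langle X, \theta_1 \rangle, \dots, \langle X, \theta_k \rangle \bigr) \in \mathbb{R}^k ,
% Bounded-Lipschitz distance between probability measures mu and nu:
% supremum over test functions bounded by 1 and 1-Lipschitz.
d_{BL}(\mu, \nu) \;=\; \sup \left\{ \left| \int f \, d\mu - \int f \, d\nu \right|
\;:\; \sup_x |f(x)| \le 1,\ |f(x) - f(y)| \le \|x - y\| \ \text{for all } x, y \right\}.
```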
343 - Xinjia Chen 2013
We derive simple concentration inequalities for bounded random vectors, which generalize Hoeffding's inequalities for bounded scalar random variables. As applications, we specialize the general results to multinomial and Dirichlet distributions to obtain multivariate concentration inequalities.
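The scalar inequality being generalized is the classical Hoeffding bound: for $n$ i.i.d. variables supported on $[a,b]$, the sample mean deviates from its expectation by at least $t$ with probability at most $2\exp(-2nt^2/(b-a)^2)$. Below is a minimal numerical illustration of that scalar bound; the uniform toy example is hypothetical and is not code from the paper.

```python
import numpy as np

# Monte Carlo check of the scalar Hoeffding bound
#   P(|mean_n - E[X]| >= t) <= 2 * exp(-2 * n * t^2 / (b - a)^2)
# using uniform [0, 1] samples (true mean 0.5); purely illustrative.
rng = np.random.default_rng(0)
n, t, trials = 200, 0.1, 100_000
a, b = 0.0, 1.0

sample_means = rng.uniform(a, b, size=(trials, n)).mean(axis=1)
empirical_tail = np.mean(np.abs(sample_means - 0.5) >= t)
hoeffding_bound = 2 * np.exp(-2 * n * t**2 / (b - a) ** 2)

print(f"empirical tail {empirical_tail:.5f} <= Hoeffding bound {hoeffding_bound:.5f}")
```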
We prove convex ordering results for random vectors admitting a predictable representation in terms of a Brownian motion and a not necessarily independent jump component. Our method uses forward-backward stochastic calculus and extends previous results in the one-dimensional case. We also study a geometric interpretation of convex ordering for discrete measures, in connection with the conditions imposed on the jump heights and intensities of the processes considered.
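For background, the convex order mentioned here is the standard stochastic order defined through convex test functions; the definition below is supplied for context and is not quoted from the paper.

```latex
% X precedes Y in the convex order iff every convex test function phi
% (with both expectations finite) satisfies E[phi(X)] <= E[phi(Y)].
X \preceq_{\mathrm{cx}} Y
\quad \Longleftrightarrow \quad
E[\phi(X)] \,\le\, E[\phi(Y)]
\quad \text{for all convex } \phi : \mathbb{R}^d \to \mathbb{R} .
```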
Adaptive Monte Carlo methods are very efficient techniques designed to tune simulation estimators on-line. In this work, we present an alternative to stochastic approximation to tune the optimal change of measure in the context of importance sampling for normal random vectors. Unlike stochastic approximation, which requires very fine tuning in practice, we propose to use sample average approximation and deterministic optimization techniques to devise a robust and fully automatic variance reduction methodology. The same samples are used in the sample optimization of the importance sampling parameter and in the Monte Carlo computation of the expectation of interest with the optimal measure computed in the previous step. We prove that this highly dependent Monte Carlo estimator is convergent and satisfies a central limit theorem with the optimal limiting variance. Numerical experiments confirm the performance of this estimator: in comparison with the crude Monte Carlo method, the computation time needed to achieve a given precision is divided by a factor between 3 and 15.
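As a rough illustration of the sample-average-approximation idea for a Gaussian mean shift, the sketch below estimates $E[f(X)]$ for $X \sim N(0, I_d)$ by minimizing the empirical second moment of the shifted estimator over the shift $\theta$, then reusing the same samples for the final estimate. The payoff $f$, the dimensions, and all numerical choices are hypothetical; this is a sketch of the general technique, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n = 5, 50_000
f = lambda x: (x.sum(axis=-1) > 6.0).astype(float)   # hypothetical rare-event payoff

X = rng.standard_normal((n, d))                       # one batch of N(0, I_d) samples

def second_moment(theta):
    # Empirical version of E[f(X)^2 * exp(-theta.X + |theta|^2 / 2)],
    # the second moment of the mean-shifted importance sampling estimator.
    return np.mean(f(X) ** 2 * np.exp(-X @ theta + 0.5 * theta @ theta))

# Deterministic optimization of the sample average (convex in theta).
theta_star = minimize(second_moment, x0=np.zeros(d), method="BFGS").x

# Reuse the same batch: Y = X + theta* has law N(theta*, I_d), and the
# likelihood ratio exp(-theta*.Y + |theta*|^2 / 2) reweights it back to N(0, I_d).
Y = X + theta_star
weights = np.exp(-Y @ theta_star + 0.5 * theta_star @ theta_star)
print("importance sampling estimate:", np.mean(f(Y) * weights))
```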
Given a branching random walk on a set $X$, we study its extinction probability vectors $\mathbf{q}(\cdot,A)$. Their components are the probabilities that the process goes extinct in a fixed $A \subseteq X$ when starting from a vertex $x \in X$. The set of extinction probability vectors (obtained by letting $A$ vary among all subsets of $X$) is a subset of the set of fixed points of the generating function of the branching random walk. In particular, we are interested here in the cardinality of the set of extinction probability vectors. We prove results which allow us to determine whether the probability of extinction in a set $A$ differs from the probability of extinction in another set $B$. In many cases there are only two possible extinction probability vectors, and so far, in more complicated examples, only a finite number of distinct extinction probability vectors had been explicitly found. Whether a branching random walk could have an infinite number of distinct extinction probability vectors was not known. We apply our results to construct examples of branching random walks with uncountably many distinct extinction probability vectors.
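Schematically, writing $G$ for the generating function of the branching random walk (notation assumed here for illustration), the statement above says that each extinction probability vector solves the fixed-point equation

```latex
% Every extinction probability vector q(., A), indexed by the starting vertex x,
% is a fixed point of the generating function G of the branching random walk.
G\bigl( \mathbf{q}(\cdot, A) \bigr) \;=\; \mathbf{q}(\cdot, A)
\qquad \text{for every } A \subseteq X ,
```

although, since the abstract only asserts an inclusion, not every fixed point of $G$ need arise as an extinction probability vector.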