It is well known that the product of two independent regularly varying random variables with the same tail index is again regularly varying with this index. In this paper, we provide sharp sufficient conditions for the regular variation property of product-type functions of regularly varying random vectors, generalizing and extending the univariate theory in various directions. The main result is then applied to characterize the regular variation property of products of iid regularly varying square random matrices and of solutions to affine stochastic recurrence equations under non-standard conditions.
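The univariate fact this paper generalizes is easy to check numerically. The sketch below (illustrative only; the sample sizes and the choice of the Hill estimator are mine, not the paper's) estimates the tail index of a Pareto sample and of the product of two independent such samples; the product keeps the same index, up to the bias induced by the slowly varying factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill(sample, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    x = np.sort(sample)[::-1]
    return k / np.sum(np.log(x[:k] / x[k]))

alpha, n, k = 2.0, 200_000, 2_000
# Classical Pareto(alpha) on [1, inf): P(X > x) = x^{-alpha}
x = rng.pareto(alpha, n) + 1.0
y = rng.pareto(alpha, n) + 1.0

print(hill(x, k))      # should be close to alpha = 2
print(hill(x * y, k))  # product keeps tail index 2, up to slowly varying bias
```

The second estimate sits somewhat below 2 at finite thresholds because the product's tail carries a logarithmic slowly varying factor, which is exactly why "same tail index" statements require care.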
In this article, we consider a branching random walk (BRW) on the real line where the underlying genealogical structure is given by a supercritical branching process in an i.i.d. environment satisfying the Kesten-Stigum condition. The displacements coming from the same parent are assumed to have jointly regularly varying tails. Conditioned on the survival of the underlying genealogical tree, we prove that the maximum among positions at the $n$-th generation, appropriately normalized (depending on the expected size of the $n$-th generation given the environment), converges weakly to a scale mixture of a Fréchet random variable. Furthermore, we derive the weak limit of the extremal processes composed of appropriately scaled positions at the $n$-th generation and show that the limit point process is a member of the class of randomly scaled scale-decorated Poisson point processes (SScDPPP). Hence, an analog of the predictions of Brunet and Derrida (2011) holds.
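The Fréchet limit for maxima of regularly varying observations, which underlies the result described above, can be checked numerically in the simplest i.i.d. setting. This sketch is illustrative only and does not simulate the branching random walk itself; all constants are my own choices.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n, reps = 2.0, 2_000, 5_000

# Max of n iid Pareto(alpha) variables, scaled by n^{1/alpha}, converges to
# a Frechet(alpha) law: P(M_n / n^{1/alpha} <= t) -> exp(-t^{-alpha}).
x = (1.0 - rng.random((reps, n))) ** (-1.0 / alpha)  # inverse-cdf Pareto draws
m = x.max(axis=1) / n ** (1.0 / alpha)

for t in (1.0, 2.0):
    print(t, np.mean(m <= t), np.exp(-t ** -alpha))  # empirical vs Frechet cdf
```

In the BRW setting the normalization additionally involves the random environment, which is what produces a scale *mixture* of Fréchet in the limit rather than a pure Fréchet law.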
In this paper we address the problem of rare-event simulation for heavy-tailed Lévy processes with infinite activity. We propose a strongly efficient importance sampling algorithm that builds on sample-path large deviations for heavy-tailed Lévy processes, the stick-breaking approximation of the extrema of Lévy processes, and a randomized debiasing Monte Carlo scheme. The proposed importance sampling algorithm can be applied to a broad class of Lévy processes and exhibits significant improvements in efficiency over the crude Monte Carlo method in our numerical experiments.
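The paper's algorithm (stick-breaking approximation plus debiasing) is not reproduced here, but the basic variance-reduction idea behind heavy-tailed importance sampling can be sketched in a toy random-walk setting: to estimate a rare-event probability for a sum of Pareto increments, sample from a heavier-tailed proposal and reweight by the likelihood ratio. All names and parameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, alpha_is, a, n = 2.0, 1.0, 100.0, 200_000

def pareto(al, size):
    # Classical Pareto on [1, inf) via inverse cdf: density al * x^{-al-1}
    return (1.0 - rng.random(size)) ** (-1.0 / al)

# Crude Monte Carlo estimate of P(X1 + X2 > a)
x = pareto(alpha, (n, 2))
crude = np.mean(x.sum(axis=1) > a)

# Importance sampling from a heavier-tailed Pareto(alpha_is) proposal;
# the weight is the product of componentwise density ratios f_alpha / f_alpha_is.
y = pareto(alpha_is, (n, 2))
w = np.prod((alpha / alpha_is) * y ** (alpha_is - alpha), axis=1)
is_est = np.mean(w * (y.sum(axis=1) > a))

print(crude, is_est)  # both near P(S_2 > a), roughly 2 * a^{-alpha}
```

With this particular proposal the weight is bounded on the rare event, so the relative error of the importance sampling estimator stays controlled as the event becomes rarer, which is the kind of strong-efficiency behaviour the paper establishes rigorously in the Lévy setting.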
Linear regression with the classical normality assumption for the error distribution may lead to undesirable posterior inference on the regression coefficients in the presence of outliers. This paper considers a finite mixture of two components, one thin-tailed and one heavy-tailed, as the error distribution, a choice routinely employed in applied statistics. For the heavy-tailed component, we introduce a novel class of distributions whose densities are log-regularly varying and have heavier tails than the Cauchy distribution, yet which can be expressed as scale mixtures of normal distributions, enabling efficient posterior inference by a Gibbs sampler. We prove the robustness to outliers of the posterior distributions under the proposed models with a minimal set of assumptions, which justifies the use of shrinkage priors with unbounded densities for the coefficient vector in the presence of outliers. An extensive comparison with existing methods via simulation studies shows the improved performance of our model in point and interval estimation, as well as its computational efficiency. Furthermore, we confirm the posterior robustness of our method in an empirical study with shrinkage priors for the regression coefficients.
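The scale-mixture-of-normals device mentioned above is what makes Gibbs sampling tractable: conditionally on a latent scale, the error is Gaussian. The classical instance of this representation is Student's t with inverse-gamma mixing; the paper's class is different and heavier-tailed, so the sketch below only illustrates the representation itself, with all constants my own.

```python
import numpy as np

rng = np.random.default_rng(3)
nu, n = 3.0, 400_000

# Draw a latent variance lam ~ InvGamma(nu/2, nu/2), then a conditional
# normal with that variance; the marginal law is exactly Student's t_nu.
lam = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, n)
x = np.sqrt(lam) * rng.standard_normal(n)

t = rng.standard_t(nu, n)  # direct t_nu samples for comparison
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(x, q), np.quantile(t, q))  # quantiles should agree
```

In a Gibbs sampler this latent-scale augmentation is run in reverse: given the residuals, the scales have a tractable conditional law, and given the scales, the coefficients have a Gaussian conditional law.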
Distance-preserving mappings (DPMs) are mappings from the set of all q-ary vectors of a fixed length to the set of permutations of the same or longer length such that any two distinct vectors are mapped to permutations whose Hamming distance is at least that between the vectors. In this paper, we propose a construction of DPMs from ternary vectors. The constructed DPMs improve the lower bounds on the maximal size of permutation arrays.
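The defining property is concrete enough to verify by brute force. The sketch below uses a standard conditional-swap construction for the binary case (q = 2) as an illustration; the paper's ternary construction is not reproduced here, and the mapping shown is merely one well-known example of a DPM.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length sequences."""
    return sum(a != b for a, b in zip(u, v))

def to_permutation(x):
    """Map a binary vector of length n to a permutation of {0,...,n}
    by performing a conditional adjacent transposition per coordinate."""
    perm = list(range(len(x) + 1))
    for i, bit in enumerate(x):
        if bit:
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return tuple(perm)

# Brute-force check of the distance-preserving property for small lengths.
for n in range(1, 8):
    vecs = list(product((0, 1), repeat=n))
    perms = {v: to_permutation(v) for v in vecs}
    assert all(hamming(perms[u], perms[v]) >= hamming(u, v)
               for u in vecs for v in vecs if u != v)
print("DPM property verified for lengths 1..7")
```

Note the mapped permutations live on length n + 1, matching the "same or longer length" clause in the definition.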
Cluster indices describe the extremal behaviour of stationary time series. We consider runs estimators of cluster indices. Using the modern theory of multivariate regularly varying time series, we obtain central limit theorems under conditions that can be easily verified for a large class of models. In particular, we show that blocks and runs estimators have the same limiting variance.
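For intuition, the simplest cluster index is the extremal index, and its runs estimator counts the fraction of threshold exceedances that close a cluster. The minimal sketch below runs it on a moving-maximum series whose extremal index is 1/2; the threshold, run length, and model are my own illustrative choices, not the paper's general multivariate setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def runs_estimator(x, u, r=2):
    """Runs estimator of the extremal index: the fraction of exceedances
    of u followed by at least r consecutive non-exceedances."""
    exceed = x > u
    n = len(x) - r
    run_ends = sum(1 for t in range(n)
                   if exceed[t] and not exceed[t + 1:t + 1 + r].any())
    return run_ends / exceed[:n].sum()

# Moving maximum of two iid unit-Frechet variables: extremal index 1/2,
# since large values occur in clusters of size two.
n = 100_000
z = 1.0 / -np.log(rng.random(n + 1))   # unit-Frechet via inverse cdf
x = np.maximum(z[1:], z[:-1])
u = np.quantile(x, 0.99)
theta_hat = runs_estimator(x, u)
print(theta_hat)  # should be near 0.5
```

The corresponding blocks estimator divides the sample into long blocks and counts one cluster per block with an exceedance; the paper's result is that, suitably calibrated, both estimators share the same limiting variance.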