We consider uniformly random lozenge tilings of simply connected polygons subject to a technical assumption on their limit shape. We show that the edge statistics around any point of the arctic boundary that is not a cusp or tangency location converge to the Airy line ensemble. Our proof proceeds by locally comparing these edge statistics with those for a random tiling of a hexagon, which are well understood. To realize this comparison, we require a nearly optimal concentration estimate for the tiling height function, which we establish by exhibiting a certain Markov chain on the set of all tilings that preserves such concentration estimates under its dynamics.
Jiaoyang Huang, 2021
In this paper we study uniformly random lozenge tilings of strip domains. Under the assumption that the limiting arctic boundary has at most one cusp, we prove a nearly optimal concentration estimate for the tiling height functions and arctic boundaries on such domains: with overwhelming probability the tiling height function is within $n^{\delta}$ of its limit shape, and the tiling arctic boundary is within $n^{1/3+\delta}$ of its limit shape, for arbitrarily small $\delta>0$. This concentration result will be used in [AH21] to prove that the edge statistics of simply connected polygonal domains, subject to a technical assumption on their limit shape, converge to the Airy line ensemble.
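Schematically, writing $H_n$ for the tiling height function, $h$ for its limit shape, $\mathcal{A}_n$ for the tiling arctic boundary, and $\mathcal{A}$ for its limiting curve (notation chosen here only for illustration; the paper's normalizations may differ), the concentration statement takes the form: with overwhelming probability, $\sup_x |H_n(x) - n\, h(x/n)| \leq n^{\delta}$ and $\mathrm{dist}(\mathcal{A}_n, n\mathcal{A}) \leq n^{1/3+\delta}$, for every fixed $\delta > 0$ and all sufficiently large $n$.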
In this article we study the Dyson Bessel process, which describes the evolution of singular values of rectangular matrix Brownian motions, and prove a large deviation principle for its empirical particle density. We then use it to obtain the asymptotics of the so-called rectangular spherical integrals as $m,n$ go to infinity while $m/n$ converges.
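As a concrete illustration of the object under study, the following sketch simulates the singular values of an $m \times n$ matrix Brownian motion, whose evolution the Dyson Bessel process describes; the zero initial condition, dimensions, and time discretization are illustrative choices rather than the paper's setup.

    import numpy as np

    def singular_value_paths(m, n, steps=200, T=1.0, seed=0):
        """Simulate an m x n matrix Brownian motion M(t) = M(0) + B(t)
        and record its singular values at each time step."""
        rng = np.random.default_rng(seed)
        dt = T / steps
        M = np.zeros((m, n))                             # illustrative choice: start from the zero matrix
        paths = np.empty((steps + 1, min(m, n)))
        paths[0] = np.linalg.svd(M, compute_uv=False)
        for k in range(1, steps + 1):
            M += np.sqrt(dt) * rng.standard_normal((m, n))   # Brownian increment
            paths[k] = np.linalg.svd(M, compute_uv=False)    # singular values: the "particles"
        return paths

    # Example: the empirical measure of these singular value paths (suitably
    # normalized) is the object whose large deviations are studied.
    paths = singular_value_paths(200, 400)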
The unscented Kalman inversion (UKI) method presented in [1] is a general derivative-free approach for inverse problems. UKI is particularly suitable for inverse problems where the forward model is given as a black box and may not be differentiable. The regularization strategies, convergence properties, and speed-up strategies [1,2] of the UKI are thoroughly studied, and the method is capable of handling noisy observation data and solving chaotic inverse problems. In this paper, we study the uncertainty quantification capability of the UKI. We propose a modified UKI, which can closely approximate the mean and covariance of the posterior distribution for well-posed inverse problems with large observation data. Theoretical guarantees for both linear and nonlinear inverse problems are presented. Numerical results, including learning of permeability parameters in subsurface flow and of the Navier-Stokes initial condition from solution data at positive times, are presented. The results obtained by the UKI require only $O(10)$ iterations and match well with the expected results obtained by the Markov chain Monte Carlo method.
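For orientation, a single unscented-Kalman-style inversion step has roughly the following shape. This is a minimal sketch assuming a generic symmetric sigma-point set and a plain Kalman update; it does not reproduce the specific weights, regularization, or modifications of the UKI in [1] or of this paper.

    import numpy as np

    def uki_step(m, C, G, y, Sigma_nu):
        """One generic unscented-Kalman-style inversion step (sketch only;
        the actual UKI adds regularization/covariance inflation, omitted here).
        m, C     : current mean and covariance of the parameter estimate
        G        : black-box forward map, G(theta) -> predicted observation
        y        : observed data
        Sigma_nu : observation noise covariance
        """
        N = m.size
        L = np.linalg.cholesky(C)
        # symmetric sigma points m +/- sqrt(N) * (columns of L), equal weights
        sigma_pts = np.concatenate([m + np.sqrt(N) * L.T, m - np.sqrt(N) * L.T])
        g = np.array([G(x) for x in sigma_pts])          # forward evaluations (black box)
        g_mean = g.mean(axis=0)
        dX = sigma_pts - m
        dG = g - g_mean
        C_xg = dX.T @ dG / (2 * N)                       # parameter-observation cross covariance
        C_gg = dG.T @ dG / (2 * N) + Sigma_nu            # predicted observation covariance
        K = C_xg @ np.linalg.inv(C_gg)                   # Kalman gain
        m_new = m + K @ (y - g_mean)
        C_new = C - K @ C_xg.T
        return m_new, C_new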
The unscented Kalman inversion (UKI) presented in [1] is a general derivative-free approach to solving inverse problems. UKI is particularly suitable for inverse problems where the forward model is given as a black box and may not be differentiable. The regularization strategy and convergence properties of the UKI have been thoroughly studied, and the method has been demonstrated to effectively handle noisy observation data and solve chaotic inverse problems. In this paper, we aim to make the UKI more efficient in terms of computational and memory costs for large-scale inverse problems. We take advantage of the low-rank covariance structure to reduce the number of forward problem evaluations and the memory cost associated with propagating large covariance matrices, and we leverage reduced-order model techniques to further speed up these forward evaluations. The effectiveness of the enhanced UKI is demonstrated on a barotropic model inverse problem with $O(10^5)$ unknown parameters and a 3D general circulation model (GCM) inverse problem, where each iteration is as efficient as that of gradient-based optimization methods.
Consider the normalized adjacency matrices of random $d$-regular graphs on $N$ vertices with fixed degree $d \geq 3$. We prove that, with probability $1-N^{-1+\varepsilon}$ for any $\varepsilon>0$, the following two properties hold as $N \to \infty$ provided that $d \geq 3$: (i) The eigenvalues are close to the classical eigenvalue locations given by the Kesten-McKay distribution. In particular, the extremal eigenvalues are concentrated with polynomial error bound in $N$, i.e. $\lambda_2, |\lambda_N| \leq 2+N^{-c}$. (ii) All eigenvectors of random $d$-regular graphs are completely delocalized.
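The setting can be explored numerically as follows: generate a random $d$-regular graph, normalize its adjacency matrix by $\sqrt{d-1}$ so that the bulk spectrum lies in $[-2,2]$, and compare the empirical spectral distribution with the Kesten-McKay density. The graph sizes and the use of networkx below are illustrative choices, not part of the paper.

    import numpy as np
    import networkx as nx

    d, N = 3, 2000                                   # fixed degree d >= 3, N vertices (illustrative sizes)
    G = nx.random_regular_graph(d, N, seed=0)
    A = nx.to_numpy_array(G) / np.sqrt(d - 1)        # normalized adjacency matrix
    eigs = np.linalg.eigvalsh(A)

    # Kesten-McKay density for the normalized adjacency matrix, supported on [-2, 2].
    def kesten_mckay(x, d):
        return d * (d - 1) * np.sqrt(np.maximum(4 - x**2, 0)) / (2 * np.pi * (d**2 - (d - 1) * x**2))

    # A histogram of `eigs` (excluding the trivial top eigenvalue d/sqrt(d-1)) should track
    # kesten_mckay(x, d); the rigidity estimate above says the second-largest and smallest
    # eigenvalues are within N^{-c} of +/- 2.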
In this paper, we study the power iteration algorithm for the spiked tensor model, as introduced in [44]. We give necessary and sufficient conditions for the convergence of the power iteration algorithm. When the power iteration algorithm converges, for the rank-one spiked tensor model, we show that the estimators for the spike strength and linear functionals of the signal are asymptotically Gaussian; for the multi-rank spiked tensor model, we show that the estimators are asymptotically mixtures of Gaussians. This new phenomenon is different from the spiked matrix model. Using these asymptotic results for our estimators, we construct valid and efficient confidence intervals for spike strengths and linear functionals of the signals.
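The algorithm itself can be sketched as follows for an order-3 rank-one spiked tensor. The noise normalization, spike strength, and warm-start initialization below are illustrative assumptions (convergence generally requires an initialization correlated with the signal) and need not match the exact conventions of [44] or of this paper.

    import numpy as np

    def tensor_power_iteration(T, x0, iters=50):
        """Power iteration for an order-3 tensor: x <- T(I, x, x) / ||T(I, x, x)||."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            y = np.einsum('ijk,j,k->i', T, x, x)
            x = y / np.linalg.norm(y)
        beta_hat = np.einsum('ijk,i,j,k->', T, x, x, x)   # estimator of the spike strength
        return x, beta_hat

    # Rank-one spiked tensor T = beta * v x v x v + noise (one common normalization):
    n, beta = 100, 5.0
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    W = rng.standard_normal((n, n, n)) / np.sqrt(n)
    T = beta * np.einsum('i,j,k->ijk', v, v, v) + W
    x_hat, beta_hat = tensor_power_iteration(T, x0=v + 0.5 * rng.standard_normal(n))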
Jiaoyang Huang, 2020
In this paper we study fluctuations of extreme particles of nonintersecting Brownian bridges starting from $a_1 \leq a_2 \leq \cdots \leq a_n$ at time $t=0$ and ending at $b_1 \leq b_2 \leq \cdots \leq b_n$ at time $t=1$, where $\mu_{A_n}=(1/n)\sum_{i}\delta_{a_i}$ and $\mu_{B_n}=(1/n)\sum_i \delta_{b_i}$ are discretizations of probability measures $\mu_A, \mu_B$. Under regularity assumptions on $\mu_A, \mu_B$, we show that, as the number of particles $n$ goes to infinity, the fluctuations of the extreme particles at any time $0<t<1$, after proper rescaling, are asymptotically universal, converging to the Airy point process.
Jiaoyang Huang, 2020
In this paper we study height fluctuations of random lozenge tilings of polygonal domains on the triangular lattice through nonintersecting Bernoulli random walks. For a large class of polygons which have exactly one horizontal upper boundary edge, we show that these random height functions converge to a Gaussian Free Field as predicted by Kenyon and Okounkov [28]. A key ingredient of our proof is a dynamical version of the discrete loop equations as introduced by Borodin, Guionnet and Gorin [5], which might be of independent interest.
An acknowledged weakness of neural networks is their vulnerability to adversarial perturbations of the inputs. To improve the robustness of these models, one of the most popular defense mechanisms is to alternately maximize the loss over constrained perturbations of the inputs (the so-called adversaries) using projected gradient ascent and minimize over the weights. In this paper, we analyze the dynamics of the maximization step towards understanding the experimentally observed effectiveness of this defense mechanism. Specifically, we investigate the non-concave landscape of the adversaries for a two-layer neural network with a quadratic loss. Our main result proves that projected gradient ascent finds a local maximum of this non-concave problem in a polynomial number of iterations with high probability. To our knowledge, this is the first work that provides a convergence analysis of first-order adversaries. Moreover, our analysis demonstrates that, in the initial phase of adversarial training, the scale of the inputs matters, in the sense that a smaller input scale leads to faster convergence of adversarial training and a more regular landscape. Finally, we show that these theoretical findings are in excellent agreement with a series of experiments.
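The inner maximization step analyzed here can be sketched as follows for a two-layer network with a quadratic loss; the tanh activation, $\ell_2$ projection radius, step size, and random weights below are illustrative assumptions rather than the paper's exact setting.

    import numpy as np

    def pgd_adversary(x0, y, W, a, eps=0.5, step=0.1, iters=100):
        """Projected gradient ascent on the input perturbation for a two-layer
        network f(x) = a . tanh(W x) with quadratic loss 0.5*(f(x) - y)^2."""
        x = x0.copy()
        for _ in range(iters):
            h = np.tanh(W @ x)
            f = a @ h
            grad = (f - y) * (W.T @ (a * (1 - h**2)))    # gradient of the loss w.r.t. the input
            x = x + step * grad                          # ascent step on the non-concave objective
            delta = x - x0
            norm = np.linalg.norm(delta)
            if norm > eps:                               # project back onto the eps-ball around x0
                x = x0 + eps * delta / norm
        return x

    # Toy usage with random weights: the returned point approximately maximizes the loss near x0.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((50, 20)); a = rng.standard_normal(50)
    x0 = rng.standard_normal(20); y = 0.0
    x_adv = pgd_adversary(x0, y, W, a)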