We apply Feshbach-Krein-Schur renormalization techniques to the hierarchical Anderson model to establish a criterion on the single-site distribution which ensures exponential dynamical localization as well as positive inverse participation ratios and Poisson statistics of eigenvalues. Our criterion applies to all cases of exponentially decaying hierarchical hopping strengths and holds even for spectral dimension $d > 2$, which corresponds to the regime of transience of the underlying hierarchical random walk. This challenges recent numerical findings that the spectral dimension is significant for the Anderson transition.
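For orientation, the Feshbach-Schur map underlying such renormalization schemes has the following standard form (a schematic statement, not the paper's specific construction): for an orthogonal projection $P$ with complement $\bar{P} = 1 - P$, and for energies $E$ at which $\bar{P}(H - E)\bar{P}$ is invertible on $\operatorname{ran}\bar{P}$,
$$F_P(E) = P(H - E)P - P H \bar{P}\,\big(\bar{P}(H - E)\bar{P}\big)^{-1}\bar{P} H P,$$
and $E$ is an eigenvalue of $H$ if and only if $0$ is an eigenvalue of $F_P(E)$ on $\operatorname{ran}P$. Iterating such maps scale by scale is the typical renormalization step in hierarchical models.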
We prove localization and probabilistic bounds on the minimum level spacing for the Anderson tight-binding model on the lattice in any dimension, with a single-site potential having a discrete distribution taking $N$ values, with $N$ large.
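For concreteness, the quantity controlled here can be written as follows (a standard definition; the specific bounds are the paper's content): for the restriction $H_\Lambda$ of the Hamiltonian to a finite volume $\Lambda$, with eigenvalues $E_1 \le E_2 \le \dots \le E_{|\Lambda|}$, the minimum level spacing is
$$\delta(H_\Lambda) = \min_{i \ne j}\,|E_i - E_j|,$$
and a probabilistic lower bound on $\delta(H_\Lambda)$ rules out near-degeneracies, which is what allows eigenvalue perturbation arguments to run.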
A new KAM-style proof of Anderson localization is obtained. A sequence of local rotations is defined such that the off-diagonal matrix elements of the Hamiltonian are driven rapidly to zero. This leads to the first proof via multi-scale analysis of exponential decay of the eigenfunction correlator, which implies strong dynamical localization. The method has been used in recent work on many-body localization [arXiv:1403.7837].
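The first step of such a scheme can be sketched as follows (a schematic first-order computation, not the full construction): write $H = D + J$ with $D = \operatorname{diag}(v_i)$ and $J$ the off-diagonal hopping, and, away from resonances (no small denominators $v_i - v_j$ where $J_{ij} \ne 0$), set
$$A_{ij} = \frac{J_{ij}}{v_j - v_i} \quad (i \ne j), \qquad A_{ii} = 0,$$
so that $A$ is anti-Hermitian and
$$e^{-A} H e^{A} = D + O(J^2),$$
since $[D, A] = -J$ cancels the hopping to first order. Iterating with fresh rotations on the $O(J^2)$ remainder drives the off-diagonal elements rapidly to zero.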
We prove localization and probabilistic bounds on the minimum level spacing for a random block Anderson model without monotonicity. Using a sequence of narrowing energy windows and associated Schur complements, we obtain detailed probabilistic information about the microscopic structure of energy levels of the Hamiltonian, as well as the support and decay of eigenfunctions.
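The elementary identity behind each Schur-complement step is worth recalling (a generic linear algebra fact; the paper's contribution is its probabilistic control along narrowing energy windows): writing $H - z$ in block form with respect to a window projection $P$ and its complement,
$$H - z = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad P (H - z)^{-1} P\big|_{\operatorname{ran} P} = \big(A - B D^{-1} C\big)^{-1},$$
valid whenever $D$ and the Schur complement $A - B D^{-1} C$ are invertible.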
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key ideas of the renormalization group (RG) and of a sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which can separate information at different scales of images, with disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of the images. To visualize the latent representations, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow are similar to those of convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution with a sparse prior distribution to further enhance the disentanglement of the representations. From a theoretical perspective, the proposed method has $O(\log L)$ complexity for inpainting an image of linear size $L$, compared to previous generative models with $O(L^2)$ complexity.
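To make the multi-scale structure concrete, here is a minimal one-dimensional sketch of a hierarchical ("RG-like") normalizing flow; the affine coupling layers and the even/odd splitting are generic normalizing-flow ingredients, not RG-Flow's actual architecture, and all parameter names are illustrative:

import numpy as np

def affine_coupling_forward(x, w, b):
    # Split into even/odd sites; transform odd sites conditioned on even ones.
    xe, xo = x[0::2], x[1::2]
    yo = xo * np.exp(w * xe) + b * xe   # invertible given xe, since exp > 0
    return xe, yo

def affine_coupling_inverse(xe, yo, w, b):
    xo = (yo - b * xe) * np.exp(-w * xe)
    x = np.empty(2 * xe.size)
    x[0::2], x[1::2] = xe, xo
    return x

def encode(x, params):
    # Forward RG pass: peel off "fine-grained" latents scale by scale.
    latents = []
    for (w, b) in params:
        x, z = affine_coupling_forward(x, w, b)
        latents.append(z)          # fine degrees of freedom at this scale
    latents.append(x)              # coarsest remaining variables
    return latents

def decode(latents, params):
    # Inverse RG pass: rebuild the signal from coarse to fine.
    x = latents[-1]
    for (w, b), z in zip(reversed(params), reversed(latents[:-1])):
        x = affine_coupling_inverse(x, z, w, b)
    return x

# Usage: L = 8 sites, log2(L) = 3 scales; encode and decode are exact inverses.
rng = np.random.default_rng(0)
params = [(0.1, 0.2)] * 3
x = rng.normal(size=8)
assert np.allclose(decode(encode(x, params), params), x)

Regenerating a local patch then only requires inverting the $O(\log L)$ coupling layers whose receptive fields cover that patch, which is the source of the quoted complexity gain for inpainting.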
We use trace class scattering theory to exclude the possibility of absolutely continuous spectrum in a large class of self-adjoint operators with an underlying hierarchical structure, and we provide applications to certain random hierarchical operators and matrices. We then contrast the localizing effect of the hierarchical structure in the deterministic setting with previous results and conjectures in the random setting. Furthermore, we survey stronger localization statements that genuinely exploit the disorder for the hierarchical Anderson model, and we report recent results concerning the spectral statistics of the ultrametric random matrix ensemble.
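The mechanism at work here is the Kato-Rosenblum theorem (recalled for orientation): if $H$ and $H_0$ are self-adjoint and $H - H_0$ is trace class, then the wave operators
$$\Omega^{\pm} = \operatorname*{s-lim}_{t \to \pm\infty}\, e^{itH} e^{-itH_0} P_{\mathrm{ac}}(H_0)$$
exist and are complete, so the absolutely continuous parts of $H$ and $H_0$ are unitarily equivalent. In particular, if $H_0$ has empty absolutely continuous spectrum and the hierarchical structure makes $H - H_0$ trace class, then $\sigma_{\mathrm{ac}}(H) = \emptyset$.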