We numerically investigate quantum quenches of a nonintegrable hard-core Bose-Hubbard model to test the accuracy of the microcanonical ensemble in small isolated quantum systems. We show that, in a certain range of system sizes, the accuracy increases with the dimension of the Hilbert space $D$ as $1/D$. We ascribe this rapid improvement to the absence of correlations between many-body energy eigenstates as well as to the eigenstate thermalization. Outside of that range, the accuracy is found to scale as $1/\sqrt{D}$ and improves algebraically with the system size.
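A back-of-the-envelope estimate (our sketch under the standard eigenstate-thermalization ansatz, not the paper's own derivation) illustrates how the two scalings can arise. The infinite-time average of an observable $\hat{O}$ after a quench is its diagonal-ensemble value,

```latex
\overline{\langle \hat{O}(t)\rangle} = \sum_{n} |c_n|^2 \, O_{nn},
\qquad
O_{nn} = O(E_n) + e^{-S(E_n)/2}\, R_{nn},
```

where $c_n$ are the overlaps with the initial state and $R_{nn}$ are order-one fluctuations. With $|c_n|^2 \sim 1/D$ and $e^{-S/2} \sim D^{-1/2}$, a sum of $D$ terms with uncorrelated random signs $R_{nn}$ accumulates as $\sqrt{D} \times D^{-1} \times D^{-1/2} \sim 1/D$, whereas correlated (non-canceling) fluctuations accumulate as $D \times D^{-1} \times D^{-1/2} \sim 1/\sqrt{D}$, matching the two regimes quoted above.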
The generalized Gibbs ensemble (GGE), which involves multiple conserved quantities other than the Hamiltonian, has served as the statistical-mechanical description of the long-time behavior of several isolated integrable quantum systems. In view of the maximum entropy principle, we argue that the GGE may involve a noncommutative set of conserved quantities, and show that the GGE thus generalized (the noncommutative GGE, NCGGE) gives a qualitatively more accurate description of the long-time behavior than the conventional GGE. To provide a clear understanding of why the (NC)GGE describes the long-time behavior well, we construct, for noninteracting models, an exact NCGGE that describes the long-time behavior without error even at finite system size. It is noteworthy that the NCGGE involves nonlocal conserved quantities, which can be necessary for describing the long-time behavior of local observables. We also give some extensions of the NCGGE and demonstrate how accurately they describe the long-time behavior of few-body observables.
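For reference, the maximum-entropy form underlying both ensembles (standard material, written here in generic notation) is

```latex
\hat{\rho}_{\mathrm{GGE}}
  = \frac{1}{Z}\,\exp\!\Bigl(-\sum_{i}\lambda_i \hat{Q}_i\Bigr),
\qquad
Z = \mathrm{Tr}\,\exp\!\Bigl(-\sum_{i}\lambda_i \hat{Q}_i\Bigr),
```

with the Lagrange multipliers $\lambda_i$ fixed by matching the conserved charges to their initial-state values, $\mathrm{Tr}\bigl(\hat{\rho}_{\mathrm{GGE}}\hat{Q}_i\bigr) = \langle\psi_0|\hat{Q}_i|\psi_0\rangle$. In the NCGGE the set $\{\hat{Q}_i\}$ need not be mutually commuting.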
We develop a scaling theory for the finite-size critical behavior of the microcanonical entropy (the logarithm of the density of states) of a system with a critically divergent heat capacity. The link between the microcanonical entropy and the canonical energy distribution is exploited to establish the former, and to corroborate its predicted scaling form, in the case of the 3d Ising universality class. We show that the scaling behavior emerges clearly once one accounts for the effects of the negative background constant contribution to the canonical critical specific heat. We show that this same constant plays a significant role in determining the observed differences between the canonical and microcanonical specific heats of finite-size systems in the critical region.
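The link exploited above is the standard identity relating the canonical energy distribution $P_\beta(E)$ to the density of states $\Omega(E)$ (a textbook relation, stated here in generic notation):

```latex
P_\beta(E) = \frac{\Omega(E)\, e^{-\beta E}}{Z(\beta)}
\quad\Longrightarrow\quad
S(E) = k_B \ln \Omega(E)
     = k_B \bigl[\ln P_\beta(E) + \beta E + \ln Z(\beta)\bigr],
```

so a measurement of $P_\beta(E)$ in a canonical simulation yields the microcanonical entropy up to an $E$-independent constant.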
The evaporation/condensation transition of the Potts model on a square lattice is numerically investigated by the Wang-Landau sampling method. An intrinsically system-size-dependent discrete transition between the supersaturated state and the phase-separated state is observed in the microcanonical ensemble upon changing the constrained internal energy. We calculate the microcanonical temperature, as a derivative of the microcanonical entropy, and the condensation ratio, and perform a finite-size scaling of them, which indicates a clear tendency of the numerical data to converge to the infinite-size limit predicted by the phenomenological theory for the isothermal lattice gas model.
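To make the method concrete, here is a minimal classical sketch of Wang-Landau sampling for a $q$-state Potts model on a small periodic square lattice. The lattice size, schedule, and step counts are illustrative choices of ours, not the parameters of the study; a production run would replace the fixed schedule with the usual histogram-flatness check.

```python
import math
import random

# Minimal Wang-Landau sketch for the q-state Potts model on an L x L
# periodic square lattice (illustrative parameters, not the paper's).
# The algorithm estimates ln g(E); the microcanonical entropy is
# S(E) = k_B ln g(E), and the microcanonical temperature follows as
# 1/T(E) = dS/dE, i.e. a discrete derivative of the table built below.

def wang_landau_potts(L=4, q=3, steps=100_000, seed=1):
    random.seed(seed)
    N = L * L
    spins = [random.randrange(q) for _ in range(N)]

    def neighbors(i):
        x, y = i % L, i // L
        return (((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
                x + ((y + 1) % L) * L, x + ((y - 1) % L) * L)

    def site_energy(i, s):
        # Contribution of site i holding spin value s: minus the number
        # of satisfied bonds to its four neighbors (coupling J = 1).
        return -sum(1 for j in neighbors(i) if spins[j] == s)

    # E = -sum_<ij> delta(s_i, s_j); summing site terms counts each bond twice.
    E = sum(site_energy(i, spins[i]) for i in range(N)) // 2
    lng = {}      # running estimate of ln g(E) for visited energies
    lnf = 1.0     # modification factor, reduced on a fixed schedule

    for step in range(steps):
        i = random.randrange(N)
        s_new = random.randrange(q)
        dE = site_energy(i, s_new) - site_energy(i, spins[i])
        # Wang-Landau acceptance probability: min(1, g(E) / g(E + dE)).
        if random.random() < math.exp(min(0.0, lng.get(E, 0.0) - lng.get(E + dE, 0.0))):
            spins[i] = s_new
            E += dE
        lng[E] = lng.get(E, 0.0) + lnf
        if (step + 1) % (steps // 8) == 0:
            lnf *= 0.5   # crude stand-in for a histogram-flatness check

    return lng
```

The returned table peaks at intermediate energies, where most microstates live; normalizing it with the known ground-state degeneracy $g(-2N) = q$ would fix the additive constant in the entropy.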
With the recent detection of cosmic shear, the most challenging effect of weak gravitational lensing has been observed. The main difficulties for this detection were the need for a large amount of high-quality data and the control of systematics during the gravitational shear measurement process, in particular those coming from the Point Spread Function (PSF) anisotropy. In this paper we perform detailed simulations with the state-of-the-art algorithm developed by Kaiser, Squires and Broadhurst (KSB) to measure gravitational shear. We show that for realistic PSF profiles the KSB algorithm can recover any shear amplitude in the range $0.012 < |\vec{\gamma}| < 0.32$ with a relative systematic error of $10$-$15\%$. We give quantitative limits on the PSF correction method as a function of shear strength, object size, signal-to-noise ratio and PSF anisotropy amplitude, and we provide an automatic procedure to extract a reliable object catalog for shear measurements from the raw images.
We examine the question of whether quantum mechanics places limitations on the ability of small quantum devices to learn. We specifically examine the question in the context of Bayesian inference, wherein the prior and posterior distributions are encoded in the quantum state vector. We conclude, based on lower bounds from Grover's search, that an efficient black-box method for updating the distribution is impossible. We then address this by providing a new adaptive form of approximate quantum Bayesian inference that is polynomially faster than its classical analogue and tractable if the quantum system is augmented with classical memory or if the low-order moments of the distribution are protected using a repetition code. This work suggests that there may be a connection between fault tolerance and the capacity of a quantum system to learn from its surroundings.
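For orientation, the classical analogue mentioned above can be pictured as sampling-based Bayesian updating. The sketch below is our illustration, not the paper's algorithm: the coin-bias model, particle count, and data are all hypothetical choices, meant only to show the reweight-and-resample step that a quantum encoding would accelerate.

```python
import random

# A minimal classical sketch of sampling-based (particle-filter style)
# Bayesian updating.  Model and parameters are illustrative: we infer a
# coin bias p from observed flips, starting from a uniform prior.

def bayes_update(particles, outcome):
    """Reweight prior samples by the Bernoulli likelihood, then resample
    back to an unweighted particle set (multinomial resampling)."""
    weights = [p if outcome == 1 else 1.0 - p for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
prior = [random.random() for _ in range(2000)]   # uniform prior on [0, 1]
data = [1, 1, 0, 1, 1, 1, 0, 1]                  # mostly heads
post = prior
for x in data:
    post = bayes_update(post, x)
mean = sum(post) / len(post)                     # posterior-mean estimate
```

With 6 heads in 8 flips and a uniform prior, the exact posterior mean is $(6+1)/(8+2) = 0.7$; the particle estimate fluctuates around that value. Each update costs time linear in the number of particles, which is the kind of overhead the quantum encoding aims to reduce.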