
Is ergodicity a reasonable hypothesis?

Posted by L. S. Schulman
Publication date: 2014
Research field: Physics
Paper language: English





In the physics literature ergodicity is taken to mean that a system, including a macroscopic one, visits all microscopic states in a relatively short time. We show that this is an impossibility even if that time is billions of years. We also suggest that this feature does not contradict most physical considerations since those considerations deal with correlations of only a few particles.
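
A back-of-envelope count makes the scale of this impossibility concrete. The sketch below compares the number of available microstates with the number a system could conceivably visit; the particle number, visiting rate, and timescale are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope: microstates available vs. microstates visitable.
# Assumed (illustrative) numbers: N two-state particles, one new
# microstate visited per picosecond, over ten billion years.

N = 100                       # particles; a macroscopic system has ~10^23
microstates = 2 ** N          # ~1.27e30 configurations

visits_per_second = 1e12      # one microstate per picosecond (assumption)
seconds_per_year = 3.15e7
years = 1e10                  # roughly the age of the universe

visited = visits_per_second * seconds_per_year * years   # ~3e29

print(f"available microstates : {microstates:.3e}")
print(f"microstates visitable : {visited:.3e}")
print(f"fraction visited      : {visited / microstates:.3e}")
# Already for N = 100 only ~25% can be visited; for N = 1000 the fraction
# drops to ~3e-272, and for macroscopic N it is inconceivably smaller.
```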


Read also

Whether the number of beta-steps in the lambda-calculus can be taken as a reasonable time cost model (that is, polynomially related to the one of Turing machines) is a delicate problem, which depends on the notion of evaluation strategy. Since the nineties, it is known that weak (that is, outside of abstractions) call-by-value evaluation is a reasonable strategy, while Lévy's optimal parallel strategy, which is strong (that is, it reduces everywhere), is not. The strong case turned out to be subtler than the weak one. In 2014, Accattoli and Dal Lago showed that strong call-by-name is reasonable, by introducing a new form of useful sharing and, later, an abstract machine with an overhead quadratic in the number of beta-steps. Here we show that strong call-by-value evaluation is also reasonable for time, via a new abstract machine realizing useful sharing and having a linear overhead. Moreover, our machine uses a new mix of sharing techniques, adding on top of useful sharing a form of implosive sharing, which on some terms brings an exponential speed-up. We give examples of term families that the machine executes in time logarithmic in the number of beta-steps.
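
For intuition about why the evaluation strategy and duplication matter, here is a minimal sketch: a naive substitution-based interpreter counting beta-steps under weak call-by-name and call-by-value. It is a toy for illustration only, with no sharing at all, and is not the authors' abstract machine (which performs strong reduction with useful and implosive sharing).

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(t, x, s):
    """t[x := s]; assumes all binders are distinct (no capture handling)."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def whnf(t, by_value, count):
    """Weak head reduction; count[0] accumulates the beta-steps fired."""
    if t[0] == 'app':
        f = whnf(t[1], by_value, count)
        a = whnf(t[2], by_value, count) if by_value else t[2]
        if f[0] == 'lam':
            count[0] += 1
            return whnf(subst(f[2], f[1], a), by_value, count)
        return ('app', f, a)
    return t

I = ('lam', 'a', ('var', 'a'))                          # identity
dup = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
term = ('app', dup, ('app', I, I))                      # (\x. x x) (I I)

for by_value in (False, True):
    count = [0]
    whnf(term, by_value, count)
    print('call-by-value:' if by_value else 'call-by-name: ', count[0], 'beta-steps')
# Call-by-name duplicates the unevaluated argument (I I) and reduces it
# twice; sharing techniques avoid exactly this kind of re-computation.
```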
Zhihao Lan, Stephen Powell (2017)
We use exact diagonalization to study the eigenstate thermalization hypothesis (ETH) in the quantum dimer model on the square and triangular lattices. Due to the nonergodicity of the local plaquette-flip dynamics, the Hilbert space, which consists of highly constrained close-packed dimer configurations, splits into sectors characterized by topological invariants. We show that this has important consequences for ETH: We find that ETH is clearly satisfied only when each topological sector is treated separately, and only for moderate ratios of the potential and kinetic terms in the Hamiltonian. By contrast, when the spectrum is treated as a whole, ETH breaks down on the square lattice, and apparently also on the triangular lattice. These results demonstrate that quantum dimer models have interesting thermalization dynamics.
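
The basic ETH diagnostic is compact enough to sketch. The toy below uses exact diagonalization of a small mixed-field Ising chain, an assumed stand-in for the constrained dimer model, and checks whether eigenstate expectation values of a local observable depend smoothly on energy.

```python
import numpy as np

# ETH check on a toy model: a transverse- plus longitudinal-field Ising
# chain (nonintegrable), standing in for the paper's quantum dimer model.
L, J, hx, hz = 8, 1.0, 0.9045, 0.8090   # small chain; fields break integrability

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    out = np.array([[1.0]])
    for j in range(L):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

H = sum(J * op_at(sz, i) @ op_at(sz, (i + 1) % L) for i in range(L))
H = H + sum(hx * op_at(sx, i) + hz * op_at(sz, i) for i in range(L))

E, V = np.linalg.eigh(H)
obs = op_at(sz, L // 2)                       # a local observable
diag = np.einsum('ai,ab,bi->i', V, obs, V)    # <n|obs|n> per eigenstate

# ETH: these diagonal matrix elements should trace a smooth function of
# energy, with eigenstate-to-eigenstate fluctuations shrinking with size.
for e, o in list(zip(E, diag))[::32]:
    print(f"E = {e:+8.3f}   <sz> = {o:+.3f}")
```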
We revisit the question of describing critical spin systems and field theories using matrix product states, and formulate a scaling hypothesis in terms of operators, eigenvalues of the transfer matrix, and lattice spacing in the case of field theories. Critical exponents and central charge are determined by optimizing the exponents such as to obtain a data collapse. We benchmark this method by studying critical Ising and Potts models, where we also obtain a scaling ansatz for the correlation length and entanglement entropy. The formulation of those scaling functions turns out to be crucial for studying critical quantum field theories on the lattice. For the case of $\lambda\phi^4$ with mass $\mu^2$ and lattice spacing $a$, we demonstrate a double data collapse for the correlation length $\delta\xi(\mu,\lambda,D)=\tilde{\xi}\left((\alpha-\alpha_c)(\delta/a)^{-1/\nu}\right)$ with $D$ the bond dimension, $\delta$ the gap between eigenvalues of the transfer matrix, and $\alpha_c=\mu_R^2/\lambda$ the parameter which fixes the critical quantum field theory.
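
The collapse-optimization step can be illustrated on synthetic data. In the sketch below the scaling function, the critical value $\alpha_c$, and the exponent $\nu$ are invented stand-ins, not the paper's $\lambda\phi^4$ results; the point is only that the spread of the rescaled curves is minimized at the true exponent.

```python
import numpy as np

# Toy data collapse: synthetic correlation-length data generated to obey
#   xi(alpha, delta) = F((alpha - alpha_c) * delta**(-1/nu_true))
# for several transfer-matrix gaps delta. Scanning a trial exponent nu
# and measuring the spread of the rescaled curves recovers nu_true.

rng = np.random.default_rng(0)
alpha_c, nu_true = 0.25, 1.0
F = lambda u: 1.0 / (1.0 + u ** 2)          # arbitrary smooth scaling function

deltas = [0.05, 0.1, 0.2, 0.4]
alphas = np.linspace(-0.2, 0.7, 60)
data = [(d, F((alphas - alpha_c) * d ** (-1 / nu_true))
            + 0.01 * rng.standard_normal(alphas.shape)) for d in deltas]

def collapse_spread(nu):
    """Mean in-bin variance of the rescaled curves (lower = better collapse)."""
    u = np.concatenate([(alphas - alpha_c) * d ** (-1 / nu) for d, _ in data])
    y = np.concatenate([yd for _, yd in data])
    bins = np.digitize(u, np.linspace(u.min(), u.max(), 30))
    return np.mean([y[bins == k].var() for k in np.unique(bins)
                    if (bins == k).sum() > 1])

for nu in (0.5, 0.8, 1.0, 1.2, 2.0):
    print(f"nu = {nu:4.2f}   spread = {collapse_spread(nu):.5f}")
# The spread is smallest near nu_true; with a second rescaled axis this
# becomes the double collapse used to pin down the critical theory.
```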
Surface force apparatus (SFA) and atomic force microscopy (AFM) can measure a force curve between a substrate and a probe in liquid. However, the force curve could not previously be transformed into the number density distribution of solvent molecules (colloidal particles) on the substrate, because no such transform theory existed. Recently, we proposed and developed transform theories for SFA and AFM. In these theories, the force curve is first transformed into the pressure between two flat walls, and the pressure is then transformed into the number density distribution of solvent molecules (colloidal particles). However, the pair potential between the solvent molecule (colloidal particle) and the wall is needed as input to the calculation, and the Kirkwood superposition approximation is used in these earlier theories. In this letter, we propose a new theory that requires neither the pair potential nor the approximation. Instead, it makes use of the structure factor of the solvent molecules (colloidal particles), which can be obtained by X-ray or neutron scattering.
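
The first leg of such a pipeline, from a measured force curve to the pressure between flat walls, is commonly done with the Derjaguin approximation; the sketch below applies it to a synthetic exponential force curve. The probe radius and decay length are assumptions, and this is the conventional first step, not the letter's new structure-factor-based theory.

```python
import numpy as np

# Derjaguin approximation: for a sphere-plate force curve F(h) with probe
# radius R, the energy per unit area between flat walls is
#   W(h) = F(h) / (2*pi*R),  and the wall-wall pressure is P(h) = -dW/dh.

R = 20e-9                                 # probe radius: 20 nm (assumed)
h = np.linspace(0.2e-9, 5e-9, 200)        # separations: 0.2-5 nm
F = 1e-9 * np.exp(-h / 1e-9)              # synthetic nN-scale force curve

W = F / (2 * np.pi * R)                   # energy per unit area [J/m^2]
P = -np.gradient(W, h)                    # pressure between flats [Pa]

for i in range(0, len(h), 50):
    print(f"h = {h[i] * 1e9:4.2f} nm   P = {P[i]:.3e} Pa")
```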
We present a method for determining the dependence of the free energy on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order-parameter space. The algorithm is tested on the reconstruction of the alanine dipeptide free energy landscape.
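
A one-dimensional sketch in this spirit is easy to write down: a well-tempered-metadynamics-style adaptive bias on a double-well potential, with made-up parameters. The bias factor gamma tunes how aggressively hills are deposited, interpolating between the two limiting cases mentioned above, and at convergence the free energy can be read off from the accumulated bias.

```python
import numpy as np

# Adaptive-bias sketch on U(x) = (x^2 - 1)^2: Gaussian hills are deposited
# with heights damped by the bias already present (well-tempered style).
rng = np.random.default_rng(1)
U = lambda x: (x ** 2 - 1.0) ** 2          # double well, minima at x = +-1
kT, gamma = 0.1, 10.0                      # temperature and bias factor
w0, sigma = 0.05, 0.2                      # initial hill height and width

centers, heights = [], []

def bias(x):
    """Sum of the deposited Gaussian hills at position x."""
    if not centers:
        return 0.0
    c, hgt = np.array(centers), np.array(heights)
    return float(np.sum(hgt * np.exp(-((x - c) ** 2) / (2 * sigma ** 2))))

x = -1.0
for step in range(20000):
    # Metropolis move on the biased surface U + V_bias
    xp = x + 0.1 * rng.standard_normal()
    dE = (U(xp) + bias(xp)) - (U(x) + bias(x))
    if dE < 0 or rng.random() < np.exp(-dE / kT):
        x = xp
    if step % 50 == 0:                     # deposit a tempered hill
        heights.append(w0 * np.exp(-bias(x) / ((gamma - 1) * kT)))
        centers.append(x)

# At convergence V_bias -> -(1 - 1/gamma) * F(x) + const, so the free
# energy along x is recovered (up to a constant) by rescaling -V_bias.
for xi in np.linspace(-1.2, 1.2, 7):
    print(f"x = {xi:+.2f}   F_est = {-bias(xi) * gamma / (gamma - 1):+.3f}")
```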