
A statistical physics of stationary and metastable states: description of the plasma column experimental data

Added by Alejandro Cabo
Publication date: 2009
Field: Physics
Language: English





We propose a statistical mechanics for a general class of stationary and metastable equilibrium states. For this purpose, the Gibbs extremal conditions are slightly modified so that they can be applied to a wide class of non-equilibrium states. As usual, it is assumed that the system maximizes the entropy functional $S$ subject to the standard conditions, i.e., constant energy and normalization of the probability distribution. However, an extra conserved constraint function $F$ is also assumed to exist, which forces the system to remain in the metastable configuration. Further, after assuming additivity for two quasi-independent subsystems and that the new constraint commutes with the density matrix $\rho$, it is argued that $F$ should be a homogeneous function of the density matrix, at least for systems whose spectrum is sufficiently dense to be considered continuous. The explicit form of $F$ turns out to be $F(p_{i})=p_{i}^{q}$, where the $p_i$ are the eigenvalues of the density matrix and $q$ is a real number to be determined. This number $q$ appears as a kind of Tsallis parameter, with the interpretation of the order of homogeneity of the constraint $F$. The procedure is applied to describe the results of the plasma experiment of Huang and Driscoll. The experimentally measured density is predicted with a precision similar to that obtained with the enstrophy-extremum and Tsallis procedures. However, the present results define the density at all radial positions. In particular, the smooth tail shown by the experimental distribution is predicted by the procedure. In this way, the scheme avoids the non-analyticity of the density profile at large distances that arises in both of the alternative procedures mentioned.
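As a rough numerical illustration of the modified extremal problem, the sketch below maximizes the entropy subject to normalization, a fixed mean energy, and the additional constraint $\sum_i p_i^{q}$ held at a prescribed value. The discrete spectrum, the exponent $q$, and the constraint target are invented for the example and are not taken from the paper or from the Huang-Driscoll data.

```python
# Minimal numerical sketch of the modified Gibbs extremal problem:
# maximize S = -sum p_i ln p_i subject to normalization, fixed mean energy,
# and the extra homogeneous constraint sum F(p_i) = sum p_i^q.
# Spectrum, q, and constraint targets are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

E = np.linspace(0.0, 5.0, 50)           # assumed discrete spectrum
q = 1.3                                  # assumed Tsallis-like homogeneity order

# Use a Boltzmann reference state to pick feasible targets for the constraints.
beta = 1.0
p_ref = np.exp(-beta * E)
p_ref /= p_ref.sum()
U = p_ref @ E                            # target mean energy
C = 1.05 * np.sum(p_ref ** q)            # shifted F-value -> non-Gibbs extremum

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)          # avoid log(0)
    return np.sum(p * np.log(p))         # minimizing -S maximizes S

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                      # normalization
    {"type": "eq", "fun": lambda p: p @ E - U},                            # mean energy
    {"type": "eq", "fun": lambda p: np.sum(np.clip(p, 0.0, None) ** q) - C},  # F constraint
]

res = minimize(neg_entropy, p_ref, constraints=constraints,
               bounds=[(0.0, 1.0)] * E.size, method="SLSQP")
print("converged:", res.success, " S =", -res.fun)
```

If the target value of the $F$ constraint is left at its Boltzmann reference value, the usual Gibbs state is recovered; shifting it, as done above, forces the extremal distribution away from the Boltzmann form, which is the role the extra conserved constraint plays in the scheme.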



Related research

Phase diagrams are an invaluable tool for material synthesis and provide information on the phases of a material at any given thermodynamic condition. Conventional phase diagram generation involves experimentation to provide an initial estimate of the thermodynamically accessible phases, followed by the use of phenomenological models to interpolate between the available experimental data points and extrapolate to inaccessible regions. Such an approach, combined with first-principles calculations and data-mining techniques, has led to exhaustive thermodynamic databases, albeit only at distinct thermodynamic equilibria. In contrast, materials during their synthesis, operation, or processing may not reach their thermodynamic equilibrium state but may instead remain trapped in a local free-energy minimum that can exhibit desirable properties. Mapping these metastable phases and their thermodynamic behavior is highly desirable but currently lacking. Here, we introduce an automated workflow that integrates first-principles physics and atomistic simulations with machine learning (ML) and high-performance computing to allow rapid exploration of the metastable phases of a given elemental composition. Using a representative material, carbon, which has a vast number of metastable phases without a parent in equilibrium, we demonstrate automatic mapping of hundreds of metastable states ranging from near equilibrium to far from equilibrium. Moreover, we incorporate the free-energy calculations into a neural-network-based learning of the equations of state that allows for the construction of metastable phase diagrams. High-temperature, high-pressure experiments on a graphite sample in a diamond anvil cell, coupled with high-resolution transmission electron microscopy, are used to validate our metastable-phase predictions. The introduced approach is general and broadly applicable to single- and multi-component systems.
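The full workflow (first-principles calculations, atomistic simulations, ML-learned free-energy surfaces) is beyond a short snippet, but the equation-of-state step it automates can be illustrated with a classical stand-in: a third-order Birch-Murnaghan fit to synthetic energy-volume points. The data and parameter values below are invented and do not come from the paper, which learns its equations of state with neural networks.

```python
# Toy stand-in for the equation-of-state fitting step: a third-order
# Birch-Murnaghan fit to synthetic energy-volume points (arbitrary units).
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan energy-volume relation."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Synthetic "calculated" points scattered around an assumed minimum.
rng = np.random.default_rng(0)
V = np.linspace(8.0, 14.0, 15)
E = birch_murnaghan(V, -10.0, 11.0, 0.6, 4.0) + rng.normal(scale=0.002, size=V.size)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(-10.0, 11.0, 0.5, 4.0))
print("fitted E0, V0, B0, B0':", popt)
```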
Lenka Zdeborova, 2008
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult; it is believed that, in the most difficult cases, the number of operations required to minimize the cost function grows exponentially with the system size. However, even for an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method, originally developed to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems, random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we name locked constraint satisfaction problems, whose statistical description is easily solvable but which, from the algorithmic point of view, are even more challenging than canonical satisfiability.
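For readers unfamiliar with the ensemble being analysed, the sketch below draws random 3-SAT formulas at a few clause densities and brute-force checks satisfiability. This only works for tiny instances; the thesis is concerned with the large-size limit near the satisfiability threshold (around clause density 4.27 for 3-SAT). The sizes and densities are chosen purely for illustration.

```python
# Draw random 3-SAT formulas at clause density alpha and brute-force check
# satisfiability (feasible only for very small numbers of variables).
import itertools
import random

def random_3sat(n_vars, alpha, rng):
    """Return a list of clauses; each clause is 3 signed literals (var, sign)."""
    clauses = []
    for _ in range(int(alpha * n_vars)):
        vars_ = rng.sample(range(n_vars), 3)          # 3 distinct variables
        clauses.append([(v, rng.choice((True, False))) for v in vars_])
    return clauses

def is_satisfiable(n_vars, clauses):
    # A clause is satisfied if at least one literal matches the assignment.
    for assignment in itertools.product((True, False), repeat=n_vars):
        if all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
for alpha in (2.0, 4.0, 6.0):
    formula = random_3sat(12, alpha, rng)
    print(f"alpha = {alpha}: satisfiable = {is_satisfiable(12, formula)}")
```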
In this paper, we look at four generalizations of the one-dimensional Aubry-Andre-Harper (AAH) model that possess mobility edges. We map out a phase diagram in terms of population imbalance and look at the system-size dependence of the steady-state imbalance. We find non-monotonic behaviour of the imbalance with system parameters, which contradicts the idea that the relaxation of an initial imbalance is fixed only by the ratio of the number of extended states to the number of localized states. We propose that there exist dimensionless parameters that depend on the fractions of single-particle localized and extended states and on the mean participation ratio of these states. These ingredients fully control the imbalance in the long-time limit, and we present numerical evidence for this claim. Among the four models considered, three have interesting duality relations and the locations of their mobility edges are known. The fourth model (with next-nearest-neighbour coupling) has no known duality, but a mobility edge exists and the model has been realized experimentally. Our findings are an important step towards understanding non-equilibrium phenomena in a family of interesting models with incommensurate potentials.
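As a point of reference for the quantities mentioned (localized versus extended states, participation ratio), the sketch below diagonalizes the plain one-dimensional AAH chain and computes the participation ratio of its eigenstates. The generalized models with mobility edges studied in the paper modify the hopping and potential terms; the Hamiltonian and parameters here are only the textbook AAH form, chosen for illustration.

```python
# Plain 1D Aubry-Andre-Harper chain: build the tight-binding Hamiltonian,
# diagonalize, and compute the (inverse) participation ratio of eigenstates.
import numpy as np

L = 610                      # chain length (a Fibonacci number)
t = 1.0                      # hopping amplitude
lam = 1.5                    # quasiperiodic potential strength (< 2t: extended)
beta = (np.sqrt(5) - 1) / 2  # incommensurate frequency

n = np.arange(L)
H = np.diag(lam * np.cos(2 * np.pi * beta * n))                 # on-site potential
H += np.diag(-t * np.ones(L - 1), 1) + np.diag(-t * np.ones(L - 1), -1)  # hopping

energies, states = np.linalg.eigh(H)
ipr = np.sum(np.abs(states) ** 4, axis=0)    # inverse participation ratio per state
print("mean participation ratio (fraction of chain):", np.mean(1.0 / ipr) / L)
```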
We demonstrate that a two-dimensional finite and periodic array of Ising spins coupled via RKKY-like exchange can exhibit tunable magnetic states spanning three distinct magnetic regimes: (1) a conventional ferromagnetic regime, (2) a glass-like regime, and (3) a new multi-well regime. These magnetic regimes can be tuned by one gate-like parameter, namely the ratio between the lattice constant and the oscillating interaction wavelength. We characterize the various magnetic regimes, quantifying the distribution of low-energy states, the aging relaxation dynamics, and the scaling behavior. The glassy and multi-well behavior results from the competing character of the oscillating long-range exchange interactions. The multi-well structure features multiple attractors, each with a sizable basin of attraction. This may open up the possibility of using such atomic arrays as associative memories.
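A minimal toy in the same spirit, not the paper's model: Ising spins on a small two-dimensional lattice coupled by an oscillating, RKKY-like exchange $J(r)\propto\cos(2\pi r/\lambda)/r^{3}$, relaxed with a few Metropolis sweeps. The functional form, boundary treatment, and parameter values are assumptions made for illustration.

```python
# Toy Ising array with oscillating long-range exchange and Metropolis relaxation.
import numpy as np

rng = np.random.default_rng(0)
L, a, lam, T = 8, 1.0, 2.5, 0.3           # lattice size, spacing, wavelength, temperature

xy = a * np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
np.fill_diagonal(r, 1.0)                  # placeholder to avoid divide-by-zero
J = np.cos(2 * np.pi * r / lam) / r**3    # oscillating RKKY-like exchange
np.fill_diagonal(J, 0.0)                  # no self-interaction

spins = rng.choice([-1, 1], size=L * L)

def energy(s):
    return -0.5 * s @ J @ s               # each pair counted once

for sweep in range(200):                  # Metropolis relaxation
    for _ in range(L * L):
        k = rng.integers(L * L)
        dE = 2.0 * spins[k] * (J[k] @ spins)      # energy cost of flipping spin k
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[k] = -spins[k]

print("energy per spin:", energy(spins) / (L * L))
```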
Identifying the relevant coarse-grained degrees of freedom in a complex physical system is a key stage in developing powerful effective theories in and out of equilibrium. The celebrated renormalization group provides a framework for this task, but its practical execution in unfamiliar systems is fraught with ad hoc choices, whereas machine-learning approaches, though promising, often lack formal interpretability. Recently, the optimal coarse-graining in a statistical system was shown to exist, based on a universal but computationally difficult information-theoretic variational principle. This limited its applicability to all but the simplest systems; moreover, its relation to the standard formalism of field theory was unclear. Here we present an algorithm employing state-of-the-art results in machine-learning-based estimation of information-theoretic quantities, which overcomes these challenges. We use this advance to develop a new paradigm for identifying the most relevant field-theory operators describing the properties of the system, going beyond the existing approaches to real-space renormalization. We demonstrate its power on an interacting model in which the emergent degrees of freedom are qualitatively different from the microscopic building blocks of the theory. Our results push the boundary of formally interpretable applications of machine learning, conceptually paving the way towards automated theory building.
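A much-simplified version of the information-retention idea can be sketched with an off-the-shelf k-nearest-neighbour mutual-information estimator instead of the neural estimators used in the paper: compare how much information two candidate coarse-grained variables of a small block keep about a distant degree of freedom in a toy Gaussian chain. The chain, the candidate variables, and the scalar "environment" variable are all invented for illustration.

```python
# Compare information retained by two candidate coarse-grainings of a block,
# using scikit-learn's k-NN mutual-information estimator on a toy AR(1) chain.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n_samples, n_sites, rho = 5000, 16, 0.8

# Sample many realizations of a stationary AR(1) chain (a toy correlated "field").
x = np.zeros((n_samples, n_sites))
x[:, 0] = rng.normal(size=n_samples)
for i in range(1, n_sites):
    x[:, i] = rho * x[:, i - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_samples)

block = x[:, 0:4]                      # block to be coarse-grained
env = x[:, 10]                         # distant "environment" degree of freedom

candidates = np.column_stack([
    block.mean(axis=1),                # candidate 1: block average
    block[:, 3],                       # candidate 2: site nearest the environment
])
mi = mutual_info_regression(candidates, env, random_state=0)
print("MI retained (block mean, edge site):", mi)
```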
