On the closedness and geometry of tensor network state sets


Abstract in English

Tensor network states (TNS) are a powerful approach for the study of strongly correlated quantum matter. The curse of dimensionality is addressed by parametrizing the many-body state in terms of a network of partially contracted tensors. These tensors form a substantially reduced set of effective degrees of freedom. In practical algorithms, functionals like energy expectation values or overlaps are optimized over certain sets of TNS. Concerning algorithmic stability, it is important whether the considered sets are closed because, otherwise, the algorithms may approach a boundary point that lies outside the TNS set, in which case tensor elements diverge. We discuss the closedness and geometries of TNS sets, and we propose regularizations for optimization problems on non-closed TNS sets. We show that sets of matrix product states (MPS) with open boundary conditions, tree tensor network states (TTNS), and the multiscale entanglement renormalization ansatz (MERA) are always closed, whereas sets of translation-invariant MPS with periodic boundary conditions (PBC), heterogeneous MPS with PBC, and projected entangled pair states (PEPS) are generally not closed. The non-closedness is demonstrated using explicit examples like the W state, states that we call two-domain states, and fine-grained versions thereof.
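As a minimal numerical sketch of the W-state example mentioned in the abstract (assuming NumPy; the helper names `obc_amplitude`, `bits_of`, and `ti_pbc_state` are illustrative and not taken from the paper), the following constructs the W state exactly as an open-boundary MPS of bond dimension 2 and approximates it with a family of translation-invariant MPS with PBC and bond dimension 2 whose tensor entries diverge as the limit is approached:

```python
# Minimal sketch (not from the paper): the W state via two MPS parametrizations.
import numpy as np

N = 3  # number of qubits; odd N keeps the construction below real-valued

# Exact W state (|100> + |010> + |001>)/sqrt(3) as a dense reference vector.
w = np.zeros(2**N)
for k in range(N):
    w[2**(N - 1 - k)] = 1.0
w /= np.linalg.norm(w)

def bits_of(i):
    """Bit string of basis index i, most significant bit = site 0."""
    return [(i >> (N - 1 - k)) & 1 for k in range(N)]

# --- Open-boundary MPS of bond dimension 2 that represents W exactly ---------
A = {0: np.eye(2),                        # physical |0>: carry the bond state along
     1: np.array([[0.0, 1.0],
                  [0.0, 0.0]])}           # physical |1>: place the single excitation
l = np.array([1.0, 0.0])                  # left boundary vector
r = np.array([0.0, 1.0])                  # right boundary vector

def obc_amplitude(bits):
    """Amplitude of the open-boundary MPS for a given bit string."""
    v = l
    for b in bits:
        v = v @ A[b]
    return (v @ r) / np.sqrt(N)           # 1/sqrt(N) normalizes the state

obc = np.array([obc_amplitude(bits_of(i)) for i in range(2**N)])
print("OBC MPS equals W exactly:", np.allclose(obc, w))

# --- Translation-invariant PBC MPS family approaching W as eps -> 0 ----------
# |psi_eps> = ((|0>+eps|1>)^{⊗N} - |0>^{⊗N}) / (N eps) is a sum of two product
# states, hence a translation-invariant PBC MPS with block-diagonal 2x2 tensors.
# Its entries scale like eps^(-1/N), i.e. they diverge as the W state is approached.
def ti_pbc_state(eps):
    a = (1.0 / (N * eps))**(1.0 / N)      # diverges as eps -> 0
    b = -a                                # for odd N, (-1/(N eps))^(1/N) = -a
    B = {0: np.diag([a, b]), 1: np.diag([a * eps, 0.0])}
    amps = np.empty(2**N)
    for i in range(2**N):
        M = np.eye(2)
        for bit in bits_of(i):
            M = M @ B[bit]
        amps[i] = np.trace(M)             # periodic boundary conditions: trace closure
    return amps / np.linalg.norm(amps)

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps = {eps:.0e}: overlap with W = {abs(ti_pbc_state(eps) @ w):.6f}")
```

The overlap approaches 1 only as the tensor entries of the translation-invariant PBC family blow up, which illustrates the kind of instability near a boundary point outside the TNS set that the proposed regularizations are meant to address.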
