We present several results relating to the contraction of generic tensor networks and discuss their application to the simulation of quantum many-body systems using variational approaches based upon tensor network states. Given a closed tensor network $\mathcal{T}$, we prove that if the environment of a single tensor from the network can be evaluated with computational cost $\kappa$, then the environment of any other tensor from $\mathcal{T}$ can be evaluated with identical cost $\kappa$. Moreover, we describe how the set of all single tensor environments from $\mathcal{T}$ can be simultaneously evaluated with fixed cost $3\kappa$. The usefulness of these results, which are applicable to a variety of tensor network methods, is demonstrated for the optimization of a Multi-scale Entanglement Renormalization Ansatz (MERA) for the ground state of a 1D quantum system, where they are shown to substantially reduce the computation time.
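As a minimal sketch of what a "single tensor environment" means here (not the paper's algorithm), consider a small closed network of three tensors contracted in a ring: removing one tensor and contracting the remainder yields its environment, and tracing the environment against the removed tensor recovers the scalar value of the full network. The bond dimension and ring geometry below are illustrative assumptions only.

```python
import numpy as np

d = 4                       # illustrative bond dimension
A = np.random.rand(d, d)    # tensor with indices (i, j)
B = np.random.rand(d, d)    # tensor with indices (j, k)
C = np.random.rand(d, d)    # tensor with indices (k, i)

# Full contraction of the closed network: a scalar.
scalar = np.einsum('ij,jk,ki->', A, B, C)

# Environment of A: contract everything except A. It carries the same
# index structure as A, so tracing it against A reproduces the scalar.
env_A = np.einsum('jk,ki->ij', B, C)
assert np.allclose(np.einsum('ij,ij->', A, env_A), scalar)
```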