We provide a framework for learning dynamical systems rooted in the concepts of representations and Koopman operators. The interplay between the two leads to a full description of systems that can be represented linearly in finite dimension, based on the properties of the Koopman operator spectrum. The geometry of the state space is connected to the notion of representation, both in the linear case, where it is related to joint level sets of eigenfunctions, and in the case of nonlinear representations. As shown here, even nonlinear finite-dimensional representations can be learned within the Koopman operator framework, leading to a new class of representation eigenproblems. The connection to learning with neural networks is given. An extension of Koopman operator theory to static maps between different spaces is provided. The effect of the Koopman operator spectrum on Mori-Zwanzig-type representations is also discussed.
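Since the abstract refers to finite-dimensional linear representations obtained from the Koopman operator spectrum, the following is a minimal sketch of one standard data-driven approximation, Extended Dynamic Mode Decomposition (EDMD); it is not the method developed in the paper, and the test map, the monomial dictionary, and the parameters a, b, c are illustrative assumptions chosen so that the system admits an exact finite-dimensional linear representation.

```python
import numpy as np

a, b, c = 0.9, 0.5, 1.0  # illustrative parameters (assumed, not from the paper)

def step(x):
    """One step of the discrete-time map T(x1, x2) = (a*x1, b*x2 + c*x1**2).

    The observables (1, x1, x2, x1**2) span a Koopman-invariant subspace, so this
    system admits an exact finite-dimensional linear representation.
    """
    x1, x2 = x
    return np.array([a * x1, b * x2 + c * x1 ** 2])

def dictionary(X):
    """Monomial observables up to degree 2, evaluated on the columns of X (shape 2 x N)."""
    x1, x2 = X
    return np.vstack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

# Sample state pairs (x, T(x)); step() is vectorized over columns.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2, 2000))
Y = step(X)

# EDMD: least-squares fit of K with Psi(T(x)) ≈ K Psi(x), a finite-dimensional
# approximation of the Koopman operator restricted to the span of the dictionary.
PsiX, PsiY = dictionary(X), dictionary(Y)
K = PsiY @ np.linalg.pinv(PsiX)

# The eigenvalues approximate part of the Koopman spectrum; eigenvectors of K.T
# give the coefficients of approximate eigenfunctions in the dictionary basis.
eigvals, coeffs = np.linalg.eig(K.T)
print("approximate Koopman eigenvalues:", np.round(np.sort_complex(eigvals), 3))
# Roughly expect values near 1, a, b, a**2 (from the exactly invariant subspace),
# plus approximations such as a*b and b**2 from dictionary elements whose span
# is not invariant under composition with T.
```

In this toy setting, the joint level sets of the recovered eigenfunctions foliate the state space, which is the geometric picture the abstract alludes to for linear representations.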