The deployment of federated learning in a wireless network, called federated edge learning (FEEL), exploits low-latency access to distributed mobile data to efficiently train an AI model while preserving data privacy. In this work, we study the spatial (i.e., spatially averaged) learning performance of FEEL deployed in a large-scale cellular network with spatially randomly distributed devices. Both digital and analog transmission schemes are considered, supporting error-free uploading and over-the-air aggregation of devices' local model updates, respectively. The derived spatial convergence rate for digital transmission is found to be constrained by a limited number of active devices regardless of device density, and it converges to the ground-truth rate exponentially fast as this number grows. The population of active devices depends on network parameters such as the processing gain and the signal-to-interference threshold for decoding. In contrast, no such limit exists for uncoded analog transmission. In this case, the spatial convergence rate is slowed down by the direct exposure of signals to the perturbation of inter-cell interference. Nevertheless, the effect diminishes when devices are dense, as interference is averaged out by aggressive over-the-air aggregation. In terms of learning latency (in seconds), analog transmission is preferred to the digital scheme, as the former dramatically reduces multi-access latency by enabling simultaneous access.
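To make the interference-averaging effect of over-the-air aggregation concrete, the following is a minimal numerical sketch, not the paper's system model: it assumes K devices with i.i.d. local updates, perfect channel inversion, and i.i.d. Gaussian inter-cell interference of standard deviation sigma_I; all symbols and parameter values are illustrative assumptions. The residual perturbation on the aggregated update scales roughly as sigma_I/K, which is why dense deployments suppress interference under analog transmission.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's model): over-the-air
# aggregation of K local updates under uncoded analog transmission. Devices
# transmit simultaneously; the server receives the superposition plus
# inter-cell interference and divides by K to recover the averaged update.
# The interference perturbation on the average shrinks roughly as 1/K.

rng = np.random.default_rng(0)
d = 10_000       # assumed model dimension
sigma_I = 1.0    # assumed inter-cell interference standard deviation

for K in (5, 50, 500):
    updates = rng.normal(size=(K, d))                  # devices' local updates (assumed i.i.d.)
    interference = rng.normal(scale=sigma_I, size=d)   # aggregate inter-cell interference
    received = updates.sum(axis=0) + interference      # analog superposition at the server
    aggregated = received / K                          # over-the-air averaged update
    ideal = updates.mean(axis=0)                       # interference-free average
    rms_err = np.sqrt(np.mean((aggregated - ideal) ** 2))
    print(f"K={K:4d}  RMS interference perturbation = {rms_err:.4f}  (~ sigma_I/K = {sigma_I/K:.4f})")
```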