Federated Learning (FL) refers to distributed protocols that avoid direct exchange of raw data among the participating devices while training for a common learning task. In this way, FL can potentially reduce the information about the local data sets that is leaked via communications. In order to provide formal privacy guarantees, however, it is generally necessary to put in place additional masking mechanisms. When FL is implemented in wireless systems via uncoded transmission, the channel noise can directly act as a privacy-inducing mechanism. This paper demonstrates that, as long as the privacy constraint level, measured via differential privacy (DP), is below a threshold that decreases with the signal-to-noise ratio (SNR), uncoded transmission achieves privacy for free, i.e., without affecting the learning performance. More generally, this work studies adaptive power allocation (PA) for decentralized gradient descent in wireless FL with the aim of minimizing the learning optimality gap under privacy and power constraints. Both orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) transmission with over-the-air computing are studied, and solutions are obtained in closed form for an offline optimization setting. Furthermore, heuristic online methods are proposed that leverage iterative one-step-ahead optimization. The importance of dynamic PA and the potential benefits of NOMA over OMA are demonstrated through extensive simulations.
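The "privacy for free" condition above can be illustrated with a minimal sketch. The following Python snippet checks, for a given SNR, whether the channel noise alone already provides an (epsilon, delta)-DP guarantee under the classical Gaussian mechanism; all function names, the unit-power normalization, and the use of the standard Gaussian-mechanism bound are illustrative assumptions, not the paper's exact analysis.

```python
import math

def gaussian_dp_sigma(sensitivity, eps, delta):
    """Minimum noise std for (eps, delta)-DP via the classical
    Gaussian mechanism: sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / eps."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def privacy_for_free(snr_db, sensitivity, eps, delta, signal_power=1.0):
    """Hypothetical check: does the channel noise implied by the SNR
    already satisfy the DP requirement, or must artificial noise be added?"""
    # Channel noise std from the SNR, assuming unit signal power normalization
    channel_noise_std = math.sqrt(signal_power / (10.0 ** (snr_db / 10.0)))
    required_std = gaussian_dp_sigma(sensitivity, eps, delta)
    # Extra artificial-noise std needed on top of the channel noise
    # (zero means privacy comes "for free" from the channel)
    extra_std = math.sqrt(max(required_std**2 - channel_noise_std**2, 0.0))
    return channel_noise_std >= required_std, extra_std

# At low SNR the channel noise is strong enough on its own; at high SNR
# the same DP target requires additional masking noise.
free_low, extra_low = privacy_for_free(snr_db=-10.0, sensitivity=1.0,
                                       eps=5.0, delta=1e-5)
free_high, extra_high = privacy_for_free(snr_db=20.0, sensitivity=1.0,
                                         eps=5.0, delta=1e-5)
```

Consistent with the abstract, a looser DP target (larger epsilon) lowers `required_std`, so the "free" regime is entered at progressively higher SNR values.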