We consider the problem of demand-side energy management, where each household is equipped with a smart meter capable of scheduling home appliances online. The goal is to minimise the overall cost under a real-time pricing scheme. While previous works have introduced centralised approaches, we formulate the smart grid environment as a Markov game, in which each household is a decentralised agent and the grid operator produces a price signal that adapts to the energy demand. The main challenge addressed by our approach is the partial observability and perceived non-stationarity of the environment from the viewpoint of each agent. We propose a multi-agent extension of a deep actor-critic algorithm that learns successfully in this environment: it trains a centralised critic that coordinates the training of all agents, so our approach combines centralised learning with decentralised execution. Simulation results show that our online deep reinforcement learning method can reduce both the peak-to-average ratio of total energy consumed and the cost of electricity for all households, based purely on instantaneous observations and a price signal.
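To make the centralised-learning, decentralised-execution architecture concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: each household's actor acts on its local observation only, while a centralised critic sees the joint observations and actions of all agents during training. All names (`Actor`, `CentralisedCritic`), network sizes, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralised actor: each household acts on its own local observation."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded appliance-scheduling action
        )

    def forward(self, obs):
        return self.net(obs)

class CentralisedCritic(nn.Module):
    """Critic conditioned on the joint observations and actions of all agents.

    Conditioning on the joint information during training mitigates the
    non-stationarity each agent would otherwise perceive as the other
    agents' policies change.
    """
    def __init__(self, n_agents, obs_dim, act_dim, hidden=64):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs, all_acts):
        # all_obs: (batch, n_agents, obs_dim); all_acts: (batch, n_agents, act_dim)
        x = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=-1)
        return self.net(x)

# Usage sketch (hypothetical dimensions): 3 households, each observing a
# 5-dim local state (e.g. time, price signal, appliance states) and
# producing a 2-dim continuous scheduling action.
n_agents, obs_dim, act_dim = 3, 5, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralisedCritic(n_agents, obs_dim, act_dim)

obs = torch.randn(4, n_agents, obs_dim)  # batch of joint observations
acts = torch.stack([a(obs[:, i]) for i, a in enumerate(actors)], dim=1)
q = critic(obs, acts)  # joint action-value, shape (4, 1)

# At execution time the critic is discarded: each household schedules its
# appliances from its own observation alone -- decentralised execution.
```

This sketch only shows the information flow; a full training loop would add replay, target networks, and per-agent policy-gradient updates against the centralised critic.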