Large-scale agent systems are expected to find many applications in the near future. Estimating their macroscopic density is critical for many density-based optimization and control tasks, such as sensor deployment and city traffic scheduling. In this paper, we study the problem of estimating the dynamically varying probability density of such a system, given the agents' individual dynamics (which can be nonlinear and time-varying) and their states observed in real time. The density evolution is shown to satisfy a linear partial differential equation uniquely determined by the agents' dynamics. We present a density filter that takes advantage of the system dynamics to gradually improve its estimate and is scalable with the agent population. Specifically, we use kernel density estimators (KDE) to construct a noisy measurement and show that, when the agent population is large, the measurement noise is approximately Gaussian. With this important property, infinite-dimensional Kalman filters are used to design density filters. It turns out that the covariance of the measurement noise depends on the true density. This state dependence makes it necessary to approximate the covariance in the associated operator Riccati equation, rendering the density filter suboptimal. The notion of input-to-state stability is used to prove that the performance of the suboptimal density filter remains close to that of the optimal one. Simulation results suggest that the proposed density filter is able to quickly recognize the underlying modes of the unknown density and automatically ignore outliers, and that it is robust to different choices of the KDE kernel bandwidth.
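As an illustrative sketch (the specific dynamics and notation below are assumptions for exposition, not taken from the abstract): if each agent follows an Itô diffusion $dx_i = v(x_i,t)\,dt + \sigma(x_i,t)\,dW_i$, then the common density $\rho(x,t)$ evolves according to the Fokker--Planck (Kolmogorov forward) equation, which is linear in $\rho$, and the KDE of the observed states provides a noisy measurement of $\rho$:
\[
\partial_t \rho = -\nabla\cdot\big(v\,\rho\big) + \tfrac{1}{2}\sum_{j,k}\partial_{x_j}\partial_{x_k}\!\big[(\sigma\sigma^{\top})_{jk}\,\rho\big],
\qquad
y(x,t) = \frac{1}{N}\sum_{i=1}^{N} K_h\big(x - x_i(t)\big) = \rho(x,t) + w(x,t),
\]
where $N$ is the number of agents, $K_h$ is a kernel with bandwidth $h$, and $w$ denotes the measurement noise, which the abstract states is approximately Gaussian for large $N$.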