Modeling how humans move through space is useful for policy-making in transportation, public safety, and public health. Human movement can be viewed as a dynamic process in which humans transition between states (e.g., locations) over time. In the human world, where intelligent agents such as humans or human-driven vehicles play an important role, the states of agents mostly describe human activities, and the state transitions are influenced by both human decisions and the physical constraints of the real-world system (e.g., agents need time to travel a certain distance). Therefore, modeling state transitions should cover both the agents' decision process and the physical system dynamics. In this paper, we propose ours to model state transitions in human movement from a novel perspective: learning the decision model and integrating the system dynamics. ours learns human movement with Generative Adversarial Imitation Learning and integrates the stochastic constraints imposed by the system dynamics into the learning process. To the best of our knowledge, we are the first to learn to model the state transitions of moving agents together with system dynamics. In extensive experiments on real-world datasets, we demonstrate that the proposed method generates trajectories similar to real-world ones and outperforms state-of-the-art methods in predicting the next location and generating long-term future trajectories.
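To make the idea concrete, the following is a minimal sketch (not the authors' code) of how a GAIL-style decision model could be combined with a stochastic system-dynamics step: a policy proposes the next location, a discriminator scores transitions against expert trajectories, and the dynamics step decides whether the intended move is physically feasible within one time step. All names (Policy, Discriminator, dynamics_step, N_LOCATIONS, reachable) and the one-hot discrete-location encoding are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_LOCATIONS = 50          # number of discrete location states (assumed)
STATE_DIM = N_LOCATIONS   # one-hot encoding of the current location

class Policy(nn.Module):
    """Decision model: distribution over the next intended location."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_LOCATIONS))
    def forward(self, state):
        return F.softmax(self.net(state), dim=-1)

class Discriminator(nn.Module):
    """Scores (state, next_state) transitions: expert-like vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, state, next_state):
        return self.net(torch.cat([state, next_state], dim=-1))

def dynamics_step(state_idx, intended_idx, reachable):
    """Stochastic/physical constraint: the intended move succeeds only if the
    target is reachable from the current location within one time step;
    otherwise the agent stays put. `reachable` is an (N_LOCATIONS x
    N_LOCATIONS) 0/1 tensor, assumed given by the environment."""
    feasible = reachable[state_idx, intended_idx]
    return torch.where(feasible.bool(), intended_idx, state_idx)

def gail_update(policy, disc, opt_pi, opt_d, expert_s, expert_s_next, reachable):
    # Roll out the policy for one step from the expert states, then pass the
    # intended action through the system dynamics to get the realized state.
    probs = policy(expert_s)
    intended = torch.multinomial(probs, 1).squeeze(-1)
    state_idx = expert_s.argmax(dim=-1)
    next_idx = dynamics_step(state_idx, intended, reachable)
    gen_s_next = F.one_hot(next_idx, N_LOCATIONS).float()

    # Discriminator update: expert transitions -> 1, generated transitions -> 0.
    d_loss = F.binary_cross_entropy_with_logits(
        disc(expert_s, expert_s_next), torch.ones(expert_s.size(0), 1)) + \
             F.binary_cross_entropy_with_logits(
        disc(expert_s, gen_s_next), torch.zeros(expert_s.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Policy update: REINFORCE with the discriminator score as surrogate reward.
    with torch.no_grad():
        reward = torch.sigmoid(disc(expert_s, gen_s_next)).squeeze(-1)
    log_prob = torch.log(probs.gather(1, intended.unsqueeze(-1)).squeeze(-1) + 1e-8)
    pi_loss = -(log_prob * reward).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
    return d_loss.item(), pi_loss.item()
```

The key design point the abstract emphasizes is that the learned policy is never allowed to produce transitions the physical system forbids: the dynamics step sits between the decision model and the generated trajectory, so the discriminator only ever sees physically feasible transitions.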