The stochastic and dynamic nature of renewable energy sources and power electronic devices creates unique challenges for modern power systems. One such challenge is that conventional optimal active power dispatch (OAPD) methods, which rely on mathematical system models, are limited in their ability to handle uncertainties caused by renewables and other system contingencies. In this paper, a deep reinforcement learning (DRL)-based method is presented that provides a near-optimal solution to the OAPD problem without requiring a system model. The DRL agent is trained offline, after which it can determine OAPD points under previously unseen scenarios, e.g., different load patterns. The DRL-based OAPD method is tested on the IEEE 14-bus system, and the results validate its feasibility for solving the OAPD problem. The method can further serve as a key component of future model-free AC optimal power flow (AC-OPF) solutions.
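To make the offline-training / unseen-scenario idea concrete, the following is a minimal, self-contained sketch, not the authors' implementation: a toy two-generator dispatch environment and a tabular Q-learning agent trained offline on sampled load levels, then evaluated on a load level unseen during training. All names, cost coefficients, capacities, and discretizations are illustrative assumptions, and the tabular agent stands in for the deep RL agent used in the paper.

```python
# Hypothetical toy example of RL-based active power dispatch (illustration only).
import numpy as np

rng = np.random.default_rng(0)

N_LOAD_BINS = 20                                 # discretized total-load states
SETPOINTS = np.linspace(0.0, 1.0, 5)             # per-generator output fractions
ACTIONS = [(a, b) for a in SETPOINTS for b in SETPOINTS]  # joint dispatch actions
P_MAX = np.array([1.0, 0.8])                     # assumed generator capacities (p.u.)
COST = np.array([10.0, 25.0])                    # assumed linear cost coefficients

def reward(load, action):
    """Negative generation cost minus a penalty for active-power imbalance."""
    p = np.array(action) * P_MAX
    gen_cost = float(COST @ p)
    imbalance = abs(p.sum() - load)
    return -(gen_cost + 100.0 * imbalance)

def load_to_state(load):
    """Map a continuous load level to a discrete state index."""
    return min(int(load / 1.8 * N_LOAD_BINS), N_LOAD_BINS - 1)

# ----- offline training on randomly sampled load patterns -----
Q = np.zeros((N_LOAD_BINS, len(ACTIONS)))
alpha, eps = 0.1, 0.2
for episode in range(20000):
    load = rng.uniform(0.2, 1.6)                 # sampled training load
    s = load_to_state(load)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
    r = reward(load, ACTIONS[a])
    # single-step (contextual-bandit style) Q update suffices for this toy problem
    Q[s, a] += alpha * (r - Q[s, a])

# ----- evaluation on a load level not seen during training -----
test_load = 1.23
s = load_to_state(test_load)
best = ACTIONS[int(Q[s].argmax())]
print("dispatch fractions:", best, "-> output (p.u.):", np.array(best) * P_MAX)
```

In this sketch the learned policy maps a load state to a dispatch action that balances cost against a power-imbalance penalty; the paper's method replaces the table with a deep network and the toy environment with the IEEE 14-bus system.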