Autonomous cyber-physical systems (CPS) rely on the correct operation of numerous components, and state-of-the-art methods incorporate machine learning (ML) and artificial intelligence (AI) components at various stages of sensing and control. This paper develops methods for estimating the reachable set and verifying safety properties of dynamical systems under the control of neural network-based controllers that may be implemented in embedded software. The neural network controllers we consider are feedforward neural networks, called multilayer perceptrons (MLPs), with general activation functions. Since such feedforward networks are memoryless, they may be abstractly represented as mathematical functions, and reachability analysis of the network amounts to estimating the range (image) of this function given a set of inputs. By discretizing the input set of the MLP into a finite number of hyper-rectangular cells, our approach develops a linear programming (LP) based algorithm that over-approximates the output set of the MLP when its input set is given as a union of hyper-rectangular cells. Combining this over-approximation of the output set of an MLP-based controller with reachable set computation routines for ordinary difference/differential equation (ODE) models, an algorithm is developed to estimate the reachable set of the closed-loop system. Finally, safety verification for neural network control systems is performed by checking whether the estimated reachable set intersects any unsafe region. The approach is implemented in a computational software prototype and evaluated on numerical examples.
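To make the LP-based over-approximation step concrete, the sketch below bounds the output of a small MLP over a single hyper-rectangular input cell by solving one small LP per neuron and layer, then taking the union of such boxes over all cells of the discretized input set. This is an illustrative simplification under the assumption of monotone activations (tanh here), not the paper's exact formulation; the network weights, cell bounds, and function names are hypothetical.

```python
# Minimal sketch (assumed illustration, not the paper's algorithm):
# layer-by-layer over-approximation of an MLP's output range over one
# hyper-rectangular input cell.  Each neuron's pre-activation bounds are
# obtained by solving a small LP over the box from the previous layer;
# a monotone activation then maps the bounds directly.
import numpy as np
from scipy.optimize import linprog


def layer_output_box(W, b, lower, upper, activation=np.tanh):
    """Over-approximate one layer's output over the box [lower, upper]."""
    low_out, up_out = [], []
    box = list(zip(lower, upper))
    for w_row, b_i in zip(W, b):
        # min / max of w_row @ x + b_i  subject to  lower <= x <= upper
        lo = linprog(c=w_row, bounds=box).fun + b_i
        hi = -linprog(c=-w_row, bounds=box).fun + b_i
        # A monotone activation preserves the ordering of the bounds.
        low_out.append(activation(lo))
        up_out.append(activation(hi))
    return np.array(low_out), np.array(up_out)


def mlp_output_box(weights, biases, cell_lower, cell_upper):
    """Propagate a hyper-rectangular input cell through all layers."""
    lower = np.asarray(cell_lower, dtype=float)
    upper = np.asarray(cell_upper, dtype=float)
    for W, b in zip(weights, biases):
        lower, upper = layer_output_box(np.asarray(W), np.asarray(b),
                                        lower, upper)
    return lower, upper


# Example: a toy 2-2-1 controller network and one cell of a discretized
# input (state) set; the weights and cell bounds are made up for illustration.
W1, b1 = [[1.0, -0.5], [0.3, 0.8]], [0.1, -0.2]
W2, b2 = [[0.7, -1.2]], [0.05]
lo, up = mlp_output_box([W1, W2], [b1, b2],
                        cell_lower=[-0.1, 0.0], cell_upper=[0.1, 0.2])
print("controller output over-approximation:", lo, up)
```

In a closed-loop analysis of the kind described above, such a per-cell output box would serve as the (set-valued) control input handed to an ODE reachability routine, and the safety check would amount to testing the resulting reachable set estimate for intersection with the unsafe regions.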