Regularizers help deep neural networks prevent feature co-adaptation. Dropout, a commonly used regularization technique, stochastically disables neuron activations during network optimization. However, such complete disposal of features can harm the feature representation and the network's understanding. Toward better descriptions of latent representations, we present DropGraph, which learns a regularization function by constructing a stand-alone graph from the backbone features. DropGraph first samples stochastic spatial feature vectors and then applies graph reasoning to generate feature-map distortions. This add-on graph regularizes the network during training and can be skipped entirely during inference. We provide intuitions on the link between graph reasoning and Dropout, with further discussion of how partial graph reasoning reduces feature correlations. To this end, we extensively study the modeling of graph vertex dependencies and the use of the graph for distorting backbone feature maps. We validate DropGraph on four tasks with a total of seven datasets. The experimental results show that our method outperforms other state-of-the-art regularizers while leaving the base model structure unmodified during inference.
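To make the described training-time mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the idea conveyed in the abstract (stochastically sample spatial feature vectors, reason over them with a simple similarity graph, inject the result back as a feature-map distortion, and skip the whole module at inference). It is not the authors' implementation; the module name, the `sample_ratio` and `alpha` parameters, and the choice of a dense softmax adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DropGraphSketch(nn.Module):
    """Illustrative sketch of a DropGraph-style regularizer (not the paper's code).

    Training: sample a subset of spatial feature vectors, build a similarity
    graph over them, propagate features along the graph, and add the propagated
    signal back as a distortion of the feature map.
    Inference: identity, since the add-on graph is skipped.
    """

    def __init__(self, sample_ratio: float = 0.1, alpha: float = 0.1):
        super().__init__()
        self.sample_ratio = sample_ratio  # fraction of spatial positions sampled (assumed)
        self.alpha = alpha                # strength of the injected distortion (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x                      # regularizer is completely skipped at inference

        b, c, h, w = x.shape
        n = h * w
        k = max(1, int(n * self.sample_ratio))

        feats = x.flatten(2).transpose(1, 2)           # (B, N, C) spatial feature vectors
        idx = torch.randperm(n, device=x.device)[:k]   # stochastic spatial sampling
        sampled = feats[:, idx]                        # (B, K, C) graph vertices

        # One simple way to model vertex dependencies: a dense adjacency from
        # pairwise similarity, followed by message passing over the graph.
        adj = torch.softmax(sampled @ sampled.transpose(1, 2) / c ** 0.5, dim=-1)
        propagated = adj @ sampled                     # (B, K, C)

        # Inject the graph-reasoned signal back as a feature-map distortion.
        feats = feats.clone()
        feats[:, idx] = feats[:, idx] + self.alpha * (propagated - sampled)
        return feats.transpose(1, 2).reshape(b, c, h, w)
```

In use, such a module would be inserted after a backbone stage during training only, e.g. `x = DropGraphSketch()(x)`; because its inference path is the identity, the deployed model's structure is unchanged.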