In this paper, we propose an end-to-end Mandarin tone classification method for continuous speech utterances that takes both the spectrogram and short-term context information as input. Both spectrograms and context segment features are used to train the tone classifier. We first divide the spectrogram frames into syllable segments using forced-alignment results produced by an ASR model. We then extract short-term segment features to capture context information across multiple syllables. Feeding both the spectrogram and the short-term context segment features into an end-to-end model significantly improves performance. Experiments are conducted on a large-scale open-source Mandarin speech dataset to evaluate the proposed method. Results show that the proposed method improves classification accuracy from 79.5% to 92.6% on the AISHELL3 database.
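
As a rough illustration of the segmentation and context-feature step summarized above, the sketch below shows one way forced-alignment boundaries could be used to slice a spectrogram into syllable segments and to build a short-term context vector from neighbouring syllables. The pooling scheme, window size, and helper names (slice_syllables, segment_summary, context_features) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def slice_syllables(spectrogram, boundaries):
    """Split a (frames, mel_bins) spectrogram into per-syllable segments
    using (start_frame, end_frame) pairs from forced alignment."""
    return [spectrogram[s:e] for s, e in boundaries]

def segment_summary(segment, n_chunks=3):
    """Summarize a variable-length syllable segment as a fixed-size vector
    by mean-pooling over n_chunks equal time slices (an assumed pooling scheme)."""
    chunks = np.array_split(segment, n_chunks, axis=0)
    return np.concatenate([c.mean(axis=0) for c in chunks])

def context_features(segments, index, window=1):
    """Concatenate summaries of the current syllable and its +/- `window`
    neighbouring syllables to form a short-term context feature;
    missing neighbours at utterance edges are zero-padded."""
    dim = segment_summary(segments[index]).shape[0]
    feats = []
    for offset in range(-window, window + 1):
        j = index + offset
        if 0 <= j < len(segments):
            feats.append(segment_summary(segments[j]))
        else:
            feats.append(np.zeros(dim))
    return np.concatenate(feats)

# Toy example: a 100-frame, 80-bin spectrogram with three aligned syllables.
spec = np.random.randn(100, 80)
boundaries = [(0, 30), (30, 65), (65, 100)]   # frame ranges from forced alignment
segments = slice_syllables(spec, boundaries)
ctx = context_features(segments, index=1)     # context vector for the middle syllable
print(ctx.shape)
```

In such a setup, the per-syllable spectrogram segment and its context vector would be fed jointly to the end-to-end classifier; the actual network architecture and feature dimensions are described in the body of the paper.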