Road Extraction from Satellite Images Using a Convolutional Neural Network Model (DeepLabv3+): A Case Study in Lattakia City


Abstract

The purpose of this paper is to extract roads from satellite images by improving the performance of the deep convolutional neural network model (DeepLabv3+) for road segmentation, and to evaluate and test the model after training it on our data. This experimental study was carried out on the Google Colab cloud platform, using software instructions and advanced Python libraries. We pre-processed the data to prepare ground-truth masks, then trained the model. The training and validation process required 4 epochs with a patch size of 4 images. The loss function decreased to its minimum value (0.025). Training took three hours and ten minutes, aided by an advanced Graphics Processing Unit (GPU) and additional RAM. We achieved good results when evaluating the accuracy of the trained model's predictions (IoU = 0.953). The model was tested on two different areas in Lattakia city, one residential and the other agricultural. The results showed that the DeepLabv3+ model trained in our research can extract the road network accurately and effectively. However, its performance is poor in areas where tree shadows fall on the edges of the road and where spectral characteristics are similar to those of roads, such as the roofs of some buildings, and it is unsuitable for extracting side and unpaved roads. The research presents several recommendations for improving the performance of DeepLabv3+ in extracting roads from high-resolution satellite images, which is useful for updating road maps and for urban planning.
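The IoU (Intersection over Union) metric used above to evaluate the trained model can be computed directly from a predicted mask and a ground-truth mask. The following is a minimal sketch in NumPy, assuming binary masks where 1 marks road pixels and 0 marks background; the function name and toy masks are illustrative, not taken from the paper's code.

```python
import numpy as np

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union for binary masks (1 = road, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # If both masks are empty, the prediction is trivially perfect.
    return float(intersection / union) if union > 0 else 1.0

# Toy example: two 4x4 masks that differ in a single pixel.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(iou_score(pred, truth))  # 3 shared road pixels / 4 in the union = 0.75
```

An IoU of 0.953, as reported for the trained model, means that the predicted road pixels and the ground-truth road pixels overlap almost completely, with only a small fraction of disagreement along road edges.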

