Encoder-decoder semantic segmentation models for electroluminescence images of thin-film photovoltaic modules


Abstract in English

We consider a series of image segmentation methods based on deep neural networks for the semantic segmentation of electroluminescence (EL) images of thin-film modules. We utilize an encoder-decoder deep neural network architecture. The framework is general and can easily be extended to other types of images (e.g. thermography) or other solar cell technologies (e.g. crystalline silicon modules). The networks are trained and tested on a sample of images from a database of 6000 EL images of Copper Indium Gallium Diselenide (CIGS) thin-film modules. We selected two types of features to extract: shunts and so-called "droplets", the latter being a feature frequently observed in this set of images. Several models with various combinations of encoder-decoder layers are tested, and a procedure is proposed to select the best model. We show representative results obtained with the selected model. Furthermore, we applied this model to the full set of 6000 images and demonstrate that the automated segmentation of EL images can reveal many subtle features which cannot be inferred from studying a small sample of images. We believe these features can contribute to process optimization and quality control.
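The core idea of an encoder-decoder segmentation network is that the encoder progressively compresses the spatial resolution of the input while the decoder restores it, so the output is a per-pixel class map of the same size as the input EL image. The sketch below illustrates only this shape-preserving structure, using fixed average pooling and nearest-neighbour upsampling in place of learned convolutional layers; it is not the model described in the paper, and all function names are hypothetical.

```python
import numpy as np

def avg_pool2(x):
    """Encoder step: 2x2 average pooling halves the spatial resolution."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Decoder step: nearest-neighbour upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_encoder_decoder(img, depth=2):
    """Pass an image through `depth` encoder stages and back up through
    `depth` decoder stages, then threshold to a binary per-pixel label map
    (e.g. feature vs. background). Purely illustrative: a trained model
    would apply learned convolutional filters at every stage."""
    x = img
    for _ in range(depth):
        x = avg_pool2(x)      # encoder: compress spatial detail
    for _ in range(depth):
        x = upsample2(x)      # decoder: restore the original resolution
    return (x > x.mean()).astype(np.uint8)

img = np.random.rand(16, 16)   # stand-in for a (cropped) EL image
mask = toy_encoder_decoder(img)
print(mask.shape)              # per-pixel labels at the input resolution
```

In a real segmentation network the pooling and upsampling are interleaved with convolutions whose weights are learned from annotated EL images, and the final layer assigns one class score per pixel per feature type (e.g. shunt, droplet, background).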
