Photometric redshifts for the Kilo-Degree Survey. Machine-learning analysis with artificial neural networks


Abstract

We present a machine-learning photometric redshift analysis of the Kilo-Degree Survey Data Release 3 (KiDS DR3), using two neural-network-based techniques: ANNz2 and MLPQNA. Despite the limited coverage of the spectroscopic training sets, these ML codes provide photo-zs of quality comparable to, if not better than, those from the BPZ code, at least up to $z_\mathrm{phot} < 0.9$ and $r < 23.5$. At the bright end of $r < 20$, where very complete spectroscopic data overlapping with KiDS are available, the performance of the ML photo-zs clearly surpasses that of BPZ, currently the primary photo-z method for KiDS. Using the Galaxy And Mass Assembly (GAMA) spectroscopic survey for calibration, we furthermore study how photo-zs improve for bright sources when photometric parameters in addition to magnitudes are included in the photo-z derivation, as well as when VIKING and WISE infrared bands are added. While the fiducial four-band ugri setup gives a photo-z bias of $\delta z = -2\times10^{-4}$ and scatter of $\sigma_z < 0.022$ at mean $\langle z \rangle = 0.23$, combining magnitudes, colours, and galaxy sizes reduces the scatter by ~7% and the bias by an order of magnitude. Once the ugri and IR magnitudes are joined into 12-band photometry spanning wavelengths up to 12 $\mu$m, the scatter decreases by more than 10% over the fiducial case. Finally, using the 12 bands together with optical colours and linear sizes gives $\delta z < 4\times10^{-5}$ and $\sigma_z < 0.019$. This paper also serves as a reference for two public photo-z catalogues accompanying KiDS DR3, both obtained with the ANNz2 code. The first, a general-purpose catalogue, includes all 39 million KiDS sources with four-band ugri measurements in DR3. The second dataset, optimized for low-redshift studies such as galaxy-galaxy lensing, is limited to $r < 20$ and provides photo-zs of much better quality than in the full-depth case, thanks to the incorporation of optical magnitudes, colours, and sizes in the GAMA-calibrated photo-z derivation.
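For orientation, the bias and scatter quoted above can be read under the convention commonly adopted in KiDS photo-z studies; the formulation below is an assumed standard definition, and the precise statistics used are specified in the paper itself. With the normalized residual $\Delta z = (z_\mathrm{phot} - z_\mathrm{spec})/(1 + z_\mathrm{spec})$, the bias is its mean, $\delta z = \langle \Delta z \rangle$, and the scatter is the outlier-robust scaled median absolute deviation, $\sigma_z = 1.4826 \times \mathrm{median}\big(\,|\Delta z - \mathrm{median}(\Delta z)|\,\big)$.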
