From SDSS commissioning photometric and spectroscopic data, we investigate the utility of photometric redshift techniques for the task of estimating QSO redshifts. We consider empirical methods (e.g. nearest-neighbor searches and polynomial fitting), standard spectral template fitting and hybrid approaches (i.e. training spectral templates from spectroscopic and photometric observations of QSOs). We find that in all cases, due to the presence of strong emission lines within the QSO spectra, the nearest-neighbor and template-fitting methods are superior to the polynomial-fitting approach. Applying a novel reconstruction technique, we can, from the SDSS multicolor photometry, reconstruct a statistical representation of the underlying SEDs of the SDSS QSOs. Although the reconstructed templates are based on broadband photometry alone, the common emission lines present within the QSO spectra can be recovered in the resulting spectral energy distributions. The technique should be useful in searching for spectral differences among QSOs at a given redshift, in searching for spectral evolution of QSOs, in comparing photometric redshifts for objects beyond the SDSS spectroscopic sample with the well-calibrated photometric redshifts for objects brighter than 20th magnitude, and in searching for systematic and time-variable effects in the SDSS broad-band photometric and spectrophotometric calibrations.
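The reconstruction idea can be illustrated with a deliberately simplified NumPy toy (not the paper's actual setup): discretise the unknown rest-frame SED on a wavelength grid, model each broadband flux of a QSO at redshift z as a band average of the redshifted SED, and stack many objects at different redshifts into one linear system that is solved for the SED. The grid, top-hat band edges, emission-line wavelengths and redshift range below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Rest-frame wavelength grid on which the unknown SED is discretised.
lam_rest = np.linspace(900.0, 3200.0, 100)

# "True" QSO SED: power-law continuum plus two emission lines (Lya, CIV).
sed_true = ((lam_rest / 2000.0) ** -1.5
            + 3.0 * np.exp(-0.5 * ((lam_rest - 1216.0) / 20.0) ** 2)
            + 1.5 * np.exp(-0.5 * ((lam_rest - 1549.0) / 20.0) ** 2))

# Five broad top-hat observed-frame bands (Angstrom).
bands = [(3000.0, 4000.0), (4000.0, 5000.0), (5000.0, 6000.0),
         (6000.0, 7000.0), (7000.0, 8000.0)]
zs = rng.uniform(1.0, 3.5, 400)  # QSOs spread over a range of redshifts

rows, fluxes = [], []
for z in zs:
    for lo, hi in bands:
        # Which rest-frame bins redshift into this observed band?
        mask = ((1.0 + z) * lam_rest >= lo) & ((1.0 + z) * lam_rest < hi)
        if not mask.any():
            continue
        row = mask / mask.sum()          # band average of the rest-frame SED
        rows.append(row)
        fluxes.append(row @ sed_true)    # noise-free synthetic photometry

# Each object/band pair averages a different slice of the rest-frame SED, so
# the stacked system can be inverted for the SED, lines included.
A = np.array(rows)
y = np.array(fluxes)
sed_rec, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With enough objects the varying redshifts shift the band windows across the rest frame, which is what lets structure much narrower than a single band (the emission lines) be recovered.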
Machine learning techniques, specifically the k-nearest neighbour algorithm applied to optical-band colours, have had some success in predicting photometric redshifts of quasi-stellar objects (QSOs): although the mean of the differences between the spectroscopic and photometric redshifts is close to zero, the distribution of these differences remains wide and distinctly non-Gaussian. As with our previous empirical estimate of photometric redshifts, we find that the predictions can be significantly improved by adding colours from other wavebands, namely the near-infrared and ultraviolet. Self-testing this, by using half of the 33,643-strong QSO sample to train the algorithm, results in a significantly narrower spread for the remaining half of the sample. Using the whole QSO sample to train the algorithm, the same set of magnitudes returns a similar spread for a sample of radio sources (quasars). Although the matching coincidence is relatively low (739 of the 3663 sources have photometry in the relevant bands), this is still significantly larger than that from the empirical method (2 per cent) and thus may provide a method with which to obtain redshifts for the vast number of continuum radio sources expected to be detected with the next generation of large radio telescopes.
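As an illustration of the basic kNN estimator (not the paper's pipeline), here is a minimal NumPy sketch on synthetic data: four colours are drawn as smooth functions of redshift plus photometric scatter, half the sample trains the estimator, the other half self-tests it, and the photometric redshift of a test object is the mean spectroscopic redshift of its k nearest neighbours in colour space. The colour model and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_colours(z, rng):
    """Four synthetic optical/NIR/UV colours varying smoothly with redshift."""
    X = np.column_stack([0.5 * z + 0.3 * np.sin(2.0 * z),
                         -0.2 * z + 0.05 * z ** 2,
                         np.cos(1.5 * z),
                         0.3 * z])
    return X + rng.normal(0.0, 0.05, X.shape)  # photometric scatter

n = 4000
z = rng.uniform(0.0, 4.0, n)
X = make_colours(z, rng)

# Half the sample trains the estimator, the other half self-tests it.
Xtr, ztr = X[: n // 2], z[: n // 2]
Xte, zte = X[n // 2:], z[n // 2:]

# kNN regression: photo-z = mean spectroscopic z of the k training objects
# closest in colour space (squared Euclidean distances via the dot-product trick).
k = 10
d2 = ((Xte ** 2).sum(1)[:, None] + (Xtr ** 2).sum(1)[None, :]
      - 2.0 * Xte @ Xtr.T)
nearest = np.argpartition(d2, k, axis=1)[:, :k]
z_phot = ztr[nearest].mean(axis=1)
sigma = np.std(zte - z_phot)  # spread of (z_spec - z_phot) on the held-out half
```

The residual distribution from a real QSO sample is, as the abstract notes, wide and non-Gaussian; the synthetic colours here are far better behaved.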
The scientific value of the next generation of large continuum surveys would be greatly increased if the redshifts of the newly detected sources could be rapidly and reliably estimated. Given the observational expense of obtaining spectroscopic redshifts for the large number of new detections expected, there has been substantial recent work on using machine learning techniques to obtain photometric redshifts. Here we compare the accuracy of the photometric redshifts predicted by Deep Learning (DL) with those from the k-Nearest Neighbour (kNN) and Decision Tree Regression (DTR) algorithms. Using a combination of near-infrared, visible and ultraviolet magnitudes, trained upon a sample of SDSS QSOs, we find that the kNN and DL algorithms produce the best self-validation result, with a standard deviation of σ = 0.24. Testing on various sub-samples, we find that the DL algorithm generally has lower values of σ, in addition to exhibiting better performance in other measures. Our DL method, which uses an easy-to-implement off-the-shelf algorithm with no filtering nor removal of outliers, performs similarly to other, more complex, algorithms, resulting in an accuracy of Δz < 0.1 up to z ~ 2.5. Applying the DL algorithm trained on our 70,000-strong sample to other independent (radio-selected) datasets, we find σ < 0.36 over a wide range of radio flux densities. This indicates much potential in using this method to determine photometric redshifts of quasars detected with the Square Kilometre Array.
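A minimal sketch of this kind of self-validation comparison, using scikit-learn's off-the-shelf regressors as stand-ins for the three algorithms (with `MLPRegressor` playing the role of the DL network) on synthetic colours; the feature model, sample size and hyperparameters are illustrative assumptions, not those of the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 4000
z = rng.uniform(0.0, 4.0, n)
# Synthetic UV/visible/NIR colours varying smoothly with z, plus scatter.
X = np.column_stack([0.5 * z + 0.3 * np.sin(2.0 * z),
                     -0.2 * z + 0.05 * z ** 2,
                     np.cos(1.5 * z),
                     0.3 * z]) + rng.normal(0.0, 0.05, (n, 4))

models = {
    "kNN": KNeighborsRegressor(n_neighbors=10),
    "DTR": DecisionTreeRegressor(max_depth=12, random_state=0),
    "DL": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
}

sigmas = {}
for name, model in models.items():
    model.fit(X[: n // 2], z[: n // 2])                # train on half the sample
    resid = z[n // 2:] - model.predict(X[n // 2:])     # self-validate on the rest
    sigmas[name] = float(np.std(resid))                # the sigma of the abstract
```

Each model sees identical features and splits, so the σ values are directly comparable, which is the essence of the self-validation test described above.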
Upcoming imaging surveys, such as LSST, will provide an unprecedented view of the Universe, but with limited resolution along the line-of-sight. Common ways to increase resolution in the third dimension, and reduce misclassifications, include observing a wider wavelength range and/or combining the broad-band imaging with higher spectral resolution data. The challenge with these approaches is matching the depth of these ancillary data with the original imaging survey. However, while a full 3D map is required for some science, there are many situations where only the statistical distribution of objects (dN/dz) in the line-of-sight direction is needed. In such situations, there is no need to measure the fluxes of individual objects in all of the surveys. Rather, a stacking procedure can be used to perform an `ensemble photo-z'. We show how a shallow, higher spectral resolution survey can be used to measure dN/dz for stacks of galaxies which coincide in a deeper, lower resolution survey. The galaxies in the deeper survey do not even need to appear individually in the shallow survey. We give a toy model example to illustrate tradeoffs and considerations for applying this method. This approach will allow deep imaging surveys to leverage the high resolution of spectroscopic and narrow/medium band surveys underway, even when the latter do not have the same reach to high redshift.
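A toy NumPy model of the ensemble photo-z idea (my own illustration, not the paper's toy model): each galaxy carries one weak emission line whose observed wavelength encodes its redshift; the line is far below the per-object noise of the shallow survey, yet stacking many objects recovers a composite line profile whose shape traces dN/dz. The line wavelength, noise level and redshift distribution are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = np.linspace(4000.0, 9000.0, 500)  # observed wavelength grid (Angstrom)
lam_line = 3727.0                       # rest wavelength of the single toy line
n_gal, amp, width, noise = 20000, 5.0, 5.0, 5.0

# Hidden truth: the stack's redshift distribution dN/dz ~ N(0.5, 0.05).
z_true = rng.normal(0.5, 0.05, n_gal)

# Accumulate the stacked spectrum; each galaxy's line peak (amp = 5) is at the
# level of its own per-pixel noise (5), so no object is detected individually.
stack = np.zeros_like(lam)
for mu in lam_line * (1.0 + z_true):
    stack += amp * np.exp(-0.5 * ((lam - mu) / width) ** 2)
stack /= n_gal
# Independent per-object pixel noise averages down as 1/sqrt(N) in the stack.
stack += rng.normal(0.0, noise / np.sqrt(n_gal), lam.size)

# Light smoothing, then read the peak of dN/dz off the stacked line profile.
smooth = np.convolve(stack, np.ones(11) / 11.0, mode="same")
z_peak = lam[np.argmax(smooth)] / lam_line - 1.0
```

The width of the stacked bump directly traces the spread of dN/dz, which is the statistical quantity the method targets instead of per-object redshifts.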
Machine learning (ML) is a standard approach for estimating the redshifts of galaxies when only photometric information is available. ML photo-z solutions have traditionally ignored the morphological information available in galaxy images or partly included it in the form of hand-crafted features, with mixed results. We train a morphology-aware photometric redshift machine using modern deep learning tools. It uses a custom architecture that jointly trains on galaxy fluxes, colors and images. Galaxy-integrated quantities are fed to a Multi-Layer Perceptron (MLP) branch while images are fed to a convolutional (convnet) branch that can learn relevant morphological features. This split MLP-convnet architecture, which aims to disentangle strong photometric features from comparatively weak morphological ones, proves important for strong performance: a regular convnet-only architecture, while exposed to all available photometric information in images, delivers comparatively poor performance. We present a cross-validated MLP-convnet model trained on 130,000 SDSS-DR12 galaxies that outperforms a hyperoptimized Gradient Boosting solution (hyperopt+XGBoost), as well as the equivalent MLP-only architecture, on the redshift bias metric. The 4-fold cross-validated MLP-convnet model achieves a bias $\delta z / (1+z) = (-0.70 \pm 1) \times 10^{-3}$, approaching the performance of a reference ANNZ2 ensemble of 100 distinct models trained on a comparable dataset. The relative performance of the morphology-aware and morphology-blind models indicates that galaxy morphology does improve ML-based photometric redshift estimation.
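A schematic, untrained, NumPy-only forward pass of such a split architecture, showing how the two branches are combined; the layer sizes, single conv layer and random weights are illustrative stand-ins for the actual model.

```python
import numpy as np

rng = np.random.default_rng(6)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_branch(feats, W1, W2):
    """MLP branch for galaxy-integrated quantities (fluxes, colours)."""
    return relu(relu(feats @ W1) @ W2)

def conv_branch(img, kernels):
    """Toy convnet branch: one 3x3 conv layer + ReLU + global average pooling."""
    h, w = img.shape[0] - 2, img.shape[1] - 2  # 'valid' output size
    out = []
    for k in range(kernels.shape[0]):
        acc = np.zeros((h, w))
        for i in range(3):
            for j in range(3):
                acc += kernels[k, i, j] * img[i:i + h, j:j + w]
        out.append(relu(acc).mean())           # global average pool per kernel
    return np.array(out)

# Random (untrained) weights -- only the forward pass / shapes are illustrated.
feats = rng.normal(size=9)          # e.g. 5 magnitudes + 4 colours
img = rng.normal(size=(32, 32))     # a postage-stamp galaxy image
W1, W2 = rng.normal(size=(9, 16)), rng.normal(size=(16, 8))
kernels = rng.normal(size=(8, 3, 3))
W_head = rng.normal(size=16)

# Concatenate the two branch outputs; the redshift is regressed from the
# joint representation, so strong photometric features need not be re-learned
# from pixels by the convnet branch.
joint = np.concatenate([mlp_branch(feats, W1, W2), conv_branch(img, kernels)])
z_pred = joint @ W_head
```

The split mirrors the abstract's design choice: the MLP branch carries the dominant photometric signal while the convnet branch only has to contribute the comparatively weak morphological correction.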
Information about the spin state of asteroids is important for our understanding of the dynamical processes affecting them. However, spin properties are known for only a small fraction of the whole asteroid population. To enlarge the sample of asteroids with a known rotation state and basic shape properties, we combined sparse-in-time photometry from the Lowell Observatory Database with flux measurements from NASA's WISE satellite. We applied the light curve inversion method to the combined data. The thermal infrared data from WISE were treated as reflected light, because the shapes of thermal and visual light curves are similar enough for our purposes. While the sparse data cover a wide range of geometries over many years, the WISE data typically cover an interval of tens of hours, which is comparable to the typical rotation period of asteroids. The search for best-fitting models was done in the framework of the Asteroids@home distributed computing project. By processing the data for almost 75,000 asteroids, we derived unique shape models for about 900 of them. Some of these were already available in the DAMIT database and served as a consistency check of our approach. In total, we derived new models for 662 asteroids, which significantly increases the total number of asteroids whose rotation state and shape are known. For another 789 asteroids, we were able to determine the sidereal rotation period and estimate the ecliptic latitude of the spin-axis direction. We studied the distribution of spins in the asteroid population and revealed a significant discrepancy between the numbers of prograde and retrograde rotators for asteroids smaller than about 10 km. Combining optical photometry with thermal infrared light curves is an efficient approach to obtaining new physical models of asteroids.
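The period-determination step can be illustrated with a brute-force NumPy sketch (a stand-in for the full light curve inversion): scan a grid of trial periods, least-squares fit a second-order Fourier series to sparse synthetic photometry at each trial, and keep the period that minimises the residuals. The sampling pattern, period and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

P_true = 0.2551                               # sidereal period, days (~6.1 h)
t = np.sort(rng.uniform(0.0, 2000.0, 300))    # sparse epochs over ~5.5 years
w = 2.0 * np.pi / P_true
# Second-order lightcurve (typical of a double-peaked asteroid curve) + noise.
flux = (1.0 + 0.10 * np.sin(w * t) + 0.05 * np.cos(2.0 * w * t)
        + rng.normal(0.0, 0.01, t.size))

def chi2_for_period(P):
    """Residual sum of squares of a 2nd-order Fourier fit at trial period P."""
    ph = 2.0 * np.pi * t / P
    A = np.column_stack([np.ones_like(t), np.sin(ph), np.cos(ph),
                         np.sin(2.0 * ph), np.cos(2.0 * ph)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    r = flux - A @ coef
    return float(r @ r)

# The chi^2 minimum is extremely narrow in period because the long time
# baseline demands phase coherence, hence the fine trial grid.
periods = np.arange(0.20, 0.30, 1e-5)
chi2 = np.array([chi2_for_period(P) for P in periods])
P_best = periods[np.argmin(chi2)]
```

The real inversion simultaneously fits shape and spin-axis direction over many viewing geometries; this sketch isolates only why sparse data spanning years can pin down a period of a few hours.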