This paper is one of a series describing the performance and accuracy of map-making codes as assessed by the Planck CTP working group. We compare the performance of multiple codes written by different groups for making polarized maps from Planck-sized, all-sky cosmic microwave background (CMB) data. Three of the codes are based on a destriping algorithm, whereas the other three are implementations of a maximum-likelihood algorithm. Previous papers in the series described simulations at 100 GHz (Poutanen et al. 2006) and 217 GHz (Ashdown et al. 2006). In this paper we make maps (temperature and polarisation) from simulated one-year observations of four 30 GHz detectors of the Planck Low Frequency Instrument (LFI). We used the Planck Level-S simulation pipeline to produce the observed time-ordered data streams (TOD). Our previous studies considered polarisation observations for the CMB only. For this paper we increased the realism of the simulations and included polarized galactic foregrounds in our sky model. Our simulated TODs comprised dipole, CMB, diffuse galactic emission, extragalactic radio sources, and detector noise. The strong subpixel signal gradients arising from the foreground signals couple to the output maps through the map-making and cause an error (signal error) in the maps. The destriping codes have a smaller signal error than the maximum-likelihood codes. We examined a number of schemes to reduce this error. On the other hand, the maximum-likelihood map-making codes can produce maps with lower residual noise than the destriping codes.
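For orientation, the two algorithm families compared here can be summarized schematically. The notation below (pointing matrix P, full noise covariance N, baseline amplitudes a projected into the TOD by F) is the standard map-making notation rather than anything specific to this paper: a maximum-likelihood code solves the generalized least-squares problem for the map directly, whereas a destriper first estimates and removes a set of noise baselines and then bins the cleaned TOD.

    % Maximum-likelihood (GLS) map estimate:
    \hat{m}_{\mathrm{ML}} = \left(P^{T} N^{-1} P\right)^{-1} P^{T} N^{-1} d

    % Destriping data model and solution (uniform white-noise weights for brevity):
    d = P\,m + F\,a + n_{\mathrm{w}}, \qquad
    \hat{a} = \left(F^{T} Z F\right)^{-1} F^{T} Z\, d, \qquad
    Z = I - P\left(P^{T} P\right)^{-1} P^{T},

    \hat{m}_{\mathrm{destriped}} = \left(P^{T} P\right)^{-1} P^{T}\left(d - F\hat{a}\right).

In practice the destriping expressions are noise-weighted versions of this construction; the schematic form above is only meant to show where the two families differ.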
The Planck satellite will observe the full sky at nine frequencies from 30 to 857 GHz. The goal of this paper is to examine the effects of four realistic instrument systematics in the 30 GHz frequency maps: non-axially-symmetric beams, sample integration, sorption cooler noise, and pointing errors. We simulated one-year-long observations of four 30 GHz detectors. The simulated timestreams contained CMB, foreground components (both galactic and extragalactic), instrument noise (correlated and white), and the four instrument systematic effects. We made maps from the timelines and examined the magnitudes of the systematic effects in the maps and in their angular power spectra. We also compared the maps produced by the different mapmaking codes to assess their performance. We used five mapmaking codes (two destripers and three optimal codes). None of our mapmaking codes attempts to deconvolve the beam from its output map; therefore all our maps had similar smoothing due to beams and sample integration. Temperature-to-polarization cross-coupling due to beam mismatch causes a detectable bias in the TE spectrum of the CMB map. The effects of cooler noise and pointing errors did not appear to be major concerns for the 30 GHz channel. The only essential difference found so far between mapmaking codes that affects accuracy (in terms of residual RMS) is the baseline length. All optimal codes give essentially indistinguishable results. A destriper gives the same result as the optimal codes when the baseline is set short enough. For longer baselines destripers require fewer computing resources but deliver a noisier map.
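To make the baseline-length trade-off concrete, here is a minimal toy destriper in Python. The function name, the alternating solver, and all variable names are illustrative only; this is a sketch of the technique, not one of the codes compared in the paper. It fits one constant offset per baseline of baseline_len samples and bins the cleaned TOD into a temperature map.

    import numpy as np

    def destripe(tod, pixels, npix, baseline_len, n_iter=50):
        """Toy destriper: one constant noise offset per `baseline_len` samples."""
        base_idx = np.arange(len(tod)) // baseline_len        # baseline index of each sample
        nbase = int(base_idx[-1]) + 1
        hits = np.bincount(pixels, minlength=npix).astype(float)
        a = np.zeros(nbase)                                   # baseline offsets

        for _ in range(n_iter):                               # simple alternating solver
            # Bin the baseline-cleaned TOD into a map
            m = np.bincount(pixels, weights=tod - a[base_idx], minlength=npix)
            m = np.divide(m, hits, out=np.zeros(npix), where=hits > 0)
            # Re-fit the baselines to the signal-subtracted residual
            resid = tod - m[pixels]
            a = np.bincount(base_idx, weights=resid, minlength=nbase)
            a /= np.bincount(base_idx, minlength=nbase)
            a -= a.mean()                                     # remove the global offset degeneracy

        m = np.bincount(pixels, weights=tod - a[base_idx], minlength=npix)
        return np.divide(m, hits, out=np.zeros(npix), where=hits > 0)

Shorter baselines follow the correlated noise more closely and approach the optimal solution at the cost of more unknowns to solve for; longer baselines are cheaper but leave more residual noise in the map, which is the behaviour summarized above.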
We compare the performance of multiple codes written by different groups for making polarized maps from Planck-sized, all-sky cosmic microwave background (CMB) data. Three of the codes are based on a destriping algorithm; the other three are implementations of an optimal maximum-likelihood algorithm. Time-ordered data (TOD) were simulated using the Planck Level-S simulation pipeline. Several cases of temperature-only data were run to test that the codes could handle large datasets, and to explore effects such as the precision of the pointing data. Based on these preliminary results, TOD were generated for a set of four 217 GHz detectors (the minimum number required to produce I, Q, and U maps) under two different scanning strategies, with and without noise. Following correction of various problems revealed by the early simulations, all codes were able to handle the large data volume that Planck will produce. Differences in the maps produced are small but noticeable; differences in computing resources are large.
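The reason a minimum set of detectors is required is that each sample measures only one linear combination of I, Q, and U, fixed by the polarizer orientation \psi_t projected on the sky; recovering all three Stokes parameters in a pixel requires the per-pixel 3x3 system below to be invertible, i.e. a sufficient spread of angles. This is the standard binned polarization estimator, quoted here for context rather than taken from the paper.

    % Data model for a polarization-sensitive sample falling in pixel p:
    d_t = I_p + Q_p \cos 2\psi_t + U_p \sin 2\psi_t + n_t

    % Per-pixel least-squares solution for the Stokes parameters:
    \begin{pmatrix} \hat{I}_p \\ \hat{Q}_p \\ \hat{U}_p \end{pmatrix}
    = \Biggl[ \sum_{t \in p}
      \begin{pmatrix}
        1            & \cos 2\psi_t              & \sin 2\psi_t \\
        \cos 2\psi_t & \cos^2 2\psi_t            & \cos 2\psi_t \sin 2\psi_t \\
        \sin 2\psi_t & \cos 2\psi_t \sin 2\psi_t & \sin^2 2\psi_t
      \end{pmatrix} \Biggr]^{-1}
      \sum_{t \in p} d_t
      \begin{pmatrix} 1 \\ \cos 2\psi_t \\ \sin 2\psi_t \end{pmatrix}.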
The Planck Collaboration made its final data release in 2018. In this paper we describe beam-deconvolution map products made from Planck LFI data using the artDeco deconvolution code to symmetrize the effective beam. The deconvolution results are auxiliary data products, available through the Planck Legacy Archive. Analysis of deconvolved survey-difference maps reveals signs of residual signal in the 30 GHz and 44 GHz frequency channels. We produce low-resolution maps and corresponding noise covariance matrices (NCVMs). The NCVMs agree reasonably well with the half-ring noise estimates except at 44 GHz, where we observe an asymmetry between the $EE$ and $BB$ noise spectra, possibly a sign of further unresolved systematics.
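Schematically, beam-deconvolution map-making fits the sky harmonic coefficients a_{\ell m} directly to the TOD through a beam-convolution operator, rather than binning samples into pixels. The expression below follows the usual total-convolution formulation, with D^{\ell}_{mk} the Wigner rotation matrices evaluated at the detector pointing and orientation \omega_t and b_{\ell k} the beam harmonic coefficients; sign, conjugation, and normalization conventions vary between codes, and the details of artDeco differ from this sketch.

    % Schematic beam-convolved data model and harmonic-domain GLS solution:
    d_t \simeq \sum_{\ell m k} D^{\ell}_{mk}(\omega_t)\, b_{\ell k}\, a_{\ell m} + n_t,
    \qquad
    \hat{a} = \left(A^{T} N^{-1} A\right)^{-1} A^{T} N^{-1} d,

where A denotes the linear operator defined by the sum; solving for \hat{a} instead of binning is what symmetrizes the effective beam in the resulting maps.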
To assess stability against 1/f noise, the Low Frequency Instrument (LFI) onboard the Planck mission will acquire data at a rate much higher than that allowed by its telemetry bandwidth of 35.5 kbps. The data are processed by an onboard pipeline, followed on the ground by a reversing step. This paper illustrates the LFI scientific onboard processing used to fit within the allowed data rate. This is a lossy process tuned by a set of five parameters (Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the onboard processing, EpsilonQ, as a function of these parameters, and describes the method of optimizing the onboard processing chain. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided either by simulations, pre-launch tests, or data taken from the LFI operating in diagnostic mode. All the needed optimization steps are performed by an automated tool, OCA2, which outputs the optimized parameters and produces a set of statistical indicators, among them the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr = 2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up the process, an analytical model was developed that is able to extract most of the relevant information on EpsilonQ and Cr as a function of the signal statistics and the processing parameters. This model will be of interest for the instrument data analysis. The method was applied during ground tests when the instrument was operating in conditions representative of flight. Optimized parameters were obtained and the performance was verified: the required data rate of 35.5 kbps was achieved while keeping EpsilonQ at a level of 3.8% of the white-noise rms, well within the requirements.
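A minimal sketch of the processing chain described above, written in Python under the assumption that the chain averages Naver consecutive samples, mixes the sky and reference-load streams with coefficients r1 and r2, and requantizes with step q and offset O; the function names and the exact quantization convention are illustrative, and both the flight software and the OCA2 tuning tool are considerably more involved.

    import numpy as np

    def onboard_process(sky, ref, naver, r1, r2, q, offset):
        """Average, mix, and requantize one detector's sky/reference-load streams."""
        n = (len(sky) // naver) * naver                      # drop the incomplete tail
        sky_avg = sky[:n].reshape(-1, naver).mean(axis=1)    # average Naver consecutive samples
        ref_avg = ref[:n].reshape(-1, naver).mean(axis=1)
        p1 = sky_avg - r1 * ref_avg                          # mixed streams sent to the ground
        p2 = sky_avg - r2 * ref_avg
        q1 = np.round(p1 / q + offset).astype(np.int32)      # lossy requantization, step q, offset O
        q2 = np.round(p2 / q + offset).astype(np.int32)
        return q1, q2                                        # these integers are then compressed losslessly

    def quantization_distortion(p, codes, q, offset, white_noise_rms):
        """EpsilonQ-style figure of merit: reconstruction error relative to white-noise rms."""
        reconstructed = (codes - offset) * q                 # ground-side reversing of the quantization
        return np.std(reconstructed - p) / white_noise_rms

Tuning then amounts to scanning (Naver, r1, r2, q, O) over representative raw data and keeping the settings that reach the target compression rate while holding this distortion below the 10% requirement, which is essentially what the automated optimization described above does.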
We present the NPIPE processing pipeline, which produces calibrated frequency maps in temperature and polarization from data from the Planck Low Frequency Instrument (LFI) and High Frequency Instrument (HFI) using high-performance computers. NPIPE represents a natural evolution of previous Planck analysis efforts, and combines some of the most powerful features of the separate LFI and HFI analysis pipelines. The net effect of the improvements is lower levels of noise and systematics in both frequency and component maps at essentially all angular scales, as well as notably improved internal consistency between the various frequency channels. Based on the NPIPE maps, we present the first estimate of the Solar dipole determined through component separation across all nine Planck frequencies. The amplitude is $(3366.6 \pm 2.7)\,\mu$K, consistent with, albeit slightly higher than, earlier estimates. From the large-scale polarization data, we derive an updated estimate of the optical depth of reionization of $\tau = 0.051 \pm 0.006$, which appears robust with respect to data and sky cuts. The release includes 600 complete signal, noise, and systematics simulations of the full-frequency and detector-set maps. As a Planck first, these simulations include full time-domain processing of the beam-convolved CMB anisotropies. The release of the NPIPE maps and simulations is accompanied by a complete suite of raw and processed time-ordered data and the software, scripts, auxiliary data, and parameter files needed to improve further on the analysis and to run matching simulations.