
Reionization constraints using Principal Component Analysis

Published by: T. Roy Choudhury
Publication date: 2010
Research field: Physics
Paper language: English





Using a semi-analytical model developed by Choudhury & Ferrara (2005), we study the observational constraints on reionization via a principal component analysis (PCA). Assuming that reionization at z>6 is primarily driven by stellar sources, we decompose the unknown function N_{ion}(z), representing the number of photons in the IGM per baryon in collapsed objects, into its principal components and constrain the latter using the photoionization rate obtained from the Ly-alpha forest Gunn-Peterson optical depth, the WMAP7 electron scattering optical depth, and the redshift distribution of Lyman-limit systems at z ~ 3.5. The main findings of our analysis are: (i) It is sufficient to model N_{ion}(z) over the redshift range 2<z<14 using 5 parameters to extract the maximum information contained within the data. (ii) All quantities related to reionization can be severely constrained for z<6 because of a large number of data points, whereas constraints at z>6 are relatively loose. (iii) The weak constraints on N_{ion}(z) at z>6 do not allow us to disentangle different feedback models with present data. There is a clear indication that N_{ion}(z) must increase at z>6, thus ruling out reionization by a single stellar population with non-evolving IMF, star-forming efficiency, and/or photon escape fraction. The data allow for a non-monotonic N_{ion}(z), which may contain sharp features around z ~ 7. (iv) The PCA implies that reionization must be 99% completed between 5.8<z<10.3 (95% confidence level) and is expected to be 50% complete at z ≈ 9.5-12. With future data sets, like those obtained by Planck, the z>6 constraints will be significantly improved.
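As a rough illustration of the decomposition described above (not the authors' actual pipeline), the sketch below builds principal components of a binned N_{ion}(z) from a placeholder Fisher matrix and keeps the first 5 modes; the redshift grid, the fiducial N_{ion} value, and the Fisher matrix itself are all stand-ins.

```python
import numpy as np

# Redshift bins over which N_ion(z) is parametrized (2 < z < 14), as in the abstract.
z = np.linspace(2.0, 14.0, 25)
n_bins = z.size

# Hypothetical Fisher matrix for the binned N_ion values; in the actual analysis it
# would be built from the Ly-alpha forest, WMAP7 tau and Lyman-limit-system data.
rng = np.random.default_rng(0)
A = rng.normal(size=(n_bins, n_bins))
F = A @ A.T  # placeholder: any symmetric positive semi-definite matrix

# Principal components are the eigenvectors of the Fisher matrix; large eigenvalues
# correspond to well-constrained modes.
eigval, eigvec = np.linalg.eigh(F)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

n_modes = 5                    # the abstract finds 5 parameters are sufficient
S = eigvec[:, :n_modes]        # S[:, i] is the i-th principal component S_i(z)

# Any model N_ion(z) is then written as the fiducial plus a sum over the kept modes:
# N_ion(z) = N_fid(z) + sum_i m_i S_i(z), with the amplitudes m_i as free parameters.
N_fid = np.full(n_bins, 40.0)          # illustrative fiducial value only
m = S.T @ (1.05 * N_fid - N_fid)       # project an example deviation onto the modes
N_reconstructed = N_fid + S @ m
```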



Read also

We show how to efficiently project a vector onto the top principal components of a matrix, without explicitly computing these components. Specifically, we introduce an iterative algorithm that provably computes the projection using few calls to any black-box routine for ridge regression. By avoiding explicit principal component analysis (PCA), our algorithm is the first with no runtime dependence on the number of top principal components. We show that it can be used to give a fast iterative method for the popular principal component regression problem, giving the first major runtime improvement over the naive method of combining PCA with regression. To achieve our results, we first observe that ridge regression can be used to obtain a smooth projection onto the top principal components. We then sharpen this approximation to true projection using a low-degree polynomial approximation to the matrix step function. Step function approximation is a topic of long-term interest in scientific computing. We extend prior theory by constructing polynomials with simple iterative structure and rigorously analyzing their behavior under limited precision.
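A minimal sketch of the core idea, using NumPy in place of a real black-box ridge solver: a single ridge solve already yields a smooth approximation to the projection, which the paper then sharpens with a low-degree polynomial approximation of the matrix step function (omitted here). The matrix, vector, and threshold lam below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))   # data matrix
x = rng.normal(size=50)          # vector to project

# Threshold separating "top" from "tail" components: the top principal
# components are those with squared singular value above lam.
lam = np.median(np.linalg.svd(A, compute_uv=False) ** 2)

def ridge_apply(v, lam):
    """Stand-in for a black-box ridge-regression routine:
    returns (A^T A + lam I)^{-1} A^T A v."""
    AtA = A.T @ A
    return np.linalg.solve(AtA + lam * np.eye(A.shape[1]), AtA @ v)

# One ridge solve gives a *smooth* projection: components with sigma^2 >> lam
# are kept (weight ~1), components with sigma^2 << lam are suppressed
# (weight ~0), with a gradual transition in between.
soft = ridge_apply(x, lam)

# Exact projection onto the top principal components, for comparison only
# (the point of the algorithm is to avoid ever computing this explicitly).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
top = Vt[s ** 2 > lam]
exact = top.T @ (top @ x)

print(np.linalg.norm(soft - exact) / np.linalg.norm(exact))
```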
Principal Component Analysis (PCA) finds a linear mapping that maximizes the variance of the data, which makes PCA sensitive to outliers and may lead to wrong eigendirections. In this paper, we propose techniques to solve this problem: we use the data-centering method and re-estimate the covariance matrix using robust statistical techniques such as the median, robust scaling (which is a booster to data-centering), and the Huber M-estimator, which measures the presence of outliers and reweights them with small values. The results on several real-world data sets show that our proposed method handles outliers and gains better results than the original PCA, and provides the same accuracy with lower computation cost than Kernel PCA using the polynomial kernel in classification tasks.
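A hedged sketch of the kind of robustification described above, combining median centering, robust scaling, and Huber-style down-weighting before the covariance eigendecomposition; the function name, weighting details, and toy data are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def robust_pca(X, n_components=2, delta=1.345):
    """Sketch of a robust PCA: median centering, MAD scaling, and
    Huber-style down-weighting of outlying samples before the
    covariance eigendecomposition."""
    # Median centering instead of mean centering
    Xc = X - np.median(X, axis=0)
    # Robust scaling using the median absolute deviation
    mad = np.median(np.abs(Xc), axis=0) + 1e-12
    Xs = Xc / mad
    # Huber weights: samples far from the bulk get weight < 1
    r = np.linalg.norm(Xs, axis=1)
    u = r / (np.median(r) + 1e-12)
    w = np.minimum(1.0, delta / np.maximum(u, 1e-12))
    # Reweighted covariance and its leading eigenvectors
    Xw = Xs * w[:, None]
    cov = Xw.T @ Xw / w.sum()
    eigval, eigvec = np.linalg.eigh(cov)
    return eigvec[:, ::-1][:, :n_components]

# Toy usage: a clean 2-D cloud plus a few gross outliers
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X[:5] += 50.0
components = robust_pca(X, n_components=1)
```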
372 - Yang Liu, Hong Li, Si-Yu Li 2015
The study of the reionization history plays an important role in understanding the evolution of our universe. It is commonly believed that the intergalactic medium (IGM) in our universe is fully ionized today; however, the reionization process remains mysterious. A simple instantaneous reionization process is usually adopted in modern cosmology without direct observational evidence. However, the history of the ionization fraction, $x_e(z)$, will influence cosmic microwave background (CMB) observables and constraints on the optical depth $\tau$. With mock future data sets based on a featured reionization model, we find that the bias on $\tau$ introduced by the instantaneous model cannot be neglected. In this paper, we study the cosmic reionization history in a model-independent way, the so-called principal component analysis (PCA) method, and reconstruct $x_e(z)$ at different redshifts $z$ with the data sets of Planck and WMAP 9-year temperature and polarization power spectra, combined with baryon acoustic oscillation (BAO) data from galaxy surveys and the type Ia supernova (SN) Union 2.1 sample, respectively. The results show that the reconstructed $x_e(z)$ is consistent with the instantaneous behavior; however, there exists a slight deviation from this behavior at some epochs. With the PCA method, after abandoning the noisy modes, we get stronger constraints, and the hints for a featured $x_e(z)$ evolution could become a little more obvious.
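To illustrate the "abandon the noisy modes" step in a hedged way: given hypothetical eigenmodes for $x_e(z)$ and per-mode amplitude errors (all placeholders below, not Planck/WMAP results), truncating the expansion to the best-measured modes shrinks the pointwise reconstruction error.

```python
import numpy as np

# Hypothetical PCA eigenmodes for x_e(z) on a redshift grid, with placeholder
# fitted amplitudes and 1-sigma errors standing in for a CMB + BAO + SN fit.
rng = np.random.default_rng(5)
z = np.linspace(6.0, 30.0, 40)
n_modes = 12
modes, _ = np.linalg.qr(rng.normal(size=(z.size, n_modes)))  # orthonormal columns
amp = rng.normal(scale=0.1, size=n_modes)                    # fitted amplitudes m_i
amp_err = np.sort(rng.uniform(0.02, 1.0, size=n_modes))      # best-measured modes first

x_e_fid = np.where(z < 10.0, 1.0, 0.0)   # instantaneous fiducial history

def reconstruct(n_keep):
    """Reconstruct x_e(z) and its pointwise error keeping only the n_keep
    best-measured modes; the remaining noisy modes are abandoned."""
    S, m, e = modes[:, :n_keep], amp[:n_keep], amp_err[:n_keep]
    x_e = x_e_fid + S @ m
    var = (S ** 2) @ (e ** 2)            # diagonal of the reconstruction covariance
    return x_e, np.sqrt(var)

x_e_all, err_all = reconstruct(n_modes)  # all modes: noisier reconstruction
x_e_cut, err_cut = reconstruct(5)        # truncated: tighter error band
```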
The next generation of weak lensing surveys will trace the evolution of matter perturbations and gravitational potentials from the matter dominated epoch until today. Along with constraining the dynamics of dark energy, they will probe the relations between matter overdensities, local curvature, and the Newtonian potential. We work with two functions of time and scale to account for any modifications of these relations in the linear regime from those in the LCDM model. We perform a Principal Component Analysis (PCA) to find the eigenmodes and eigenvalues of these functions for surveys like DES and LSST. This paper builds on and significantly extends the PCA analysis of Zhao et al. (2009) in several ways. In particular, we consider the impact of some of the systematic effects expected in weak lensing surveys. We also present the PCA in terms of other choices of the two functions needed to parameterize modified growth on linear scales, and discuss their merits. We analyze the degeneracy between the modified growth functions and other cosmological parameters, paying special attention to the effective equation of state w(z). Finally, we demonstrate the utility of the PCA as an efficient data compression stage which enables one to easily derive constraints on parameters of specific models without recalculating Fisher matrices from scratch.
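A small sketch of the data-compression use mentioned at the end of this abstract: store the eigenmodes and eigenvalues once, then score any specific modified-growth model by projecting its deviation from LCDM onto those modes, without rebuilding a Fisher matrix. The grid, modes, and eigenvalues below are placeholders, not DES or LSST forecasts.

```python
import numpy as np

# Placeholder PCA output for one modified-growth function on a flattened grid:
# orthonormal eigenmodes (columns) and their eigenvalues from a survey forecast.
rng = np.random.default_rng(3)
n_grid, n_modes = 100, 10
modes, _ = np.linalg.qr(rng.normal(size=(n_grid, n_modes)))   # orthonormal columns
lam = np.sort(rng.uniform(1.0, 1e4, size=n_modes))[::-1]      # per-mode "information"

def model_chi2(delta_mu):
    """Forecast constraint on a specific model without recomputing a Fisher
    matrix: project its deviation from LCDM onto the stored eigenmodes and
    weight each amplitude by the corresponding eigenvalue."""
    alpha = modes.T @ delta_mu
    return np.sum(lam * alpha ** 2)

# Example: a toy model whose growth function deviates from LCDM at late times
z_grid = np.linspace(0.0, 3.0, n_grid)
delta_mu = 0.05 * np.exp(-z_grid)
print(model_chi2(delta_mu))
```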
73 - P. Tandon 2016
Performance of nuclear threat detection systems based on gamma-ray spectrometry often strongly depends on the ability to identify the part of the measured signal that can be attributed to background radiation. We have successfully applied a method based on Principal Component Analysis (PCA) to obtain a compact null-space model of background spectra, using PCA projection residuals to derive a source detection score. We have shown the method's utility in a threat detection system using mobile spectrometers in urban scenes (Tandon et al. 2012). While it is commonly assumed that measured photon counts follow a Poisson process, standard PCA makes a Gaussian assumption about the data distribution, which may be a poor approximation when photon counts are low. This paper studies whether and under what conditions PCA with a Poisson-based loss function (Poisson PCA) can outperform standard Gaussian PCA in modeling background radiation, to enable more sensitive and specific nuclear threat detection.
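A minimal sketch of the baseline this abstract compares against, i.e. standard (Gaussian) PCA used as a null-space background model with a projection-residual detection score; the spectra, channel count, and injected source are toy assumptions, and the Poisson-loss variant is not implemented here.

```python
import numpy as np

# Toy background spectra: counts in 64 energy channels over many measurements.
# Real spectra are Poisson-distributed; this sketch fits standard (Gaussian) PCA.
rng = np.random.default_rng(4)
n_channels = 64
template = np.exp(-np.linspace(0, 4, n_channels)) * 200.0
background = rng.poisson(template * rng.uniform(0.8, 1.2, size=(500, 1)))

# Low-rank PCA model of the background
mean = background.mean(axis=0)
U, s, Vt = np.linalg.svd(background - mean, full_matrices=False)
basis = Vt[:5]                      # top 5 background components

def detection_score(spectrum):
    """PCA projection-residual score: energy of the spectrum outside the
    background subspace. Larger values suggest a non-background source."""
    d = spectrum - mean
    residual = d - basis.T @ (basis @ d)
    return float(residual @ residual)

# A spectrum with an added line-like source should score higher than pure background
source = background[0].astype(float).copy()
source[20:24] += 80.0
print(detection_score(background[1].astype(float)), detection_score(source))
```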