We have developed a new method for fitting spectral energy distributions (SEDs) to identify and constrain the physical properties of high-redshift (4 < z < 8) galaxies. Our approach uses a Bayesian Markov Chain Monte Carlo implementation (PiMC^2) that allows us to compare observations to arbitrarily complex models and to compute 95% credible intervals that provide robust constraints on the model parameters. The work is presented in two parts. In the first, we test PiMC^2 on simulated SEDs, not only to confirm that the known inputs are recovered but also to assess the limitations of the method and identify potential hazards of SED fitting when applied specifically to high-redshift (z>4) galaxies. Our tests reveal five critical results: 1) metallicity, population age, and A_V can only be confidently constrained with photometric accuracy better than what is currently achievable (i.e. errors below a few percent); 2) stellar masses can be confidently constrained (to within a factor of two) without high-precision photometry; 3) the addition of IRAC photometry does not guarantee tighter constraints on the stellar masses and ages; 4) different assumptions about the star formation history can lead to significant biases in mass and age estimates; and 5) we are able to constrain the stellar age and A_V of objects that are both young and relatively dust free. In the second part of the paper we apply PiMC^2 to 17 objects at 4<z<8, including the GRAPES Ly alpha sample (4<z<6), supplemented by HST/WFC3 near-IR observations, and several broad-band-selected z>6 galaxies. Using PiMC^2, we are able to constrain the stellar mass of these objects and, in some cases, their stellar age, and we find no evidence that any of these sources formed at a redshift much larger than z_f=8, when the Universe was ~0.6 Gyr old.
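The PiMC^2 code itself is not reproduced in the abstract; the sketch below is a minimal, hypothetical illustration of the general workflow it describes — comparing observed photometry to a model SED with MCMC and quoting 95% credible intervals — using the emcee sampler, a toy two-parameter model (log stellar mass and log age), and synthetic 5% photometric errors. The band list, model form, and all function names are illustrative assumptions, not the paper's actual model grid.

```python
import numpy as np
import emcee

# Hypothetical photometric bands (micron) and a toy SED model standing in
# for a real stellar population synthesis grid.
bands = np.array([0.6, 0.9, 1.25, 1.6, 3.6, 4.5])

def model_fluxes(log_mass, log_age):
    """Toy SED: mass sets the normalisation, age tilts the spectral shape."""
    shape = bands ** (0.5 * log_age)
    return 10.0 ** log_mass * shape / shape.sum()

# Simulated observation with 5% photometric errors.
truth = (9.5, 0.3)
flux_true = model_fluxes(*truth)
err = 0.05 * flux_true
obs = flux_true + np.random.default_rng(1).normal(0.0, err)

def log_prob(theta):
    log_mass, log_age = theta
    if not (7.0 < log_mass < 12.0 and -1.0 < log_age < 1.0):
        return -np.inf                       # flat priors
    resid = (obs - model_fluxes(log_mass, log_age)) / err
    return -0.5 * np.sum(resid ** 2)

ndim, nwalkers = 2, 32
p0 = np.array(truth) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000)
chain = sampler.get_chain(discard=500, flat=True)

# 95% credible intervals, analogous to those quoted in the abstract.
for name, samples in zip(["log M*", "log age"], chain.T):
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"{name}: [{lo:.2f}, {hi:.2f}]")
```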
The progenitor and explosion properties of type II supernovae (SNe II) are fundamental to understanding the evolution of massive stars. Special interest has been given to the range of initial masses of their progenitors, but despite these efforts it remains uncertain. Direct imaging of progenitors in pre-explosion images points to an upper initial-mass cutoff of $\sim$18 $M_{\odot}$. However, this is in tension with previous studies in which progenitor masses inferred from light-curve modelling tend to favour higher-mass solutions. Moreover, it has been argued that light-curve modelling alone cannot provide a unique solution for the progenitor and explosion properties of SNe II. We develop a robust method to constrain the physical parameters of SNe II by simultaneously fitting their bolometric light curve and the evolution of their photospheric velocity to hydrodynamical models using statistical inference techniques. Pre-supernova red supergiant models were created with the stellar evolution code MESA, varying the initial progenitor mass. The explosion of these progenitors was then followed with hydrodynamical simulations, in which the explosion energy, the synthesised nickel mass, and the latter's spatial distribution within the ejecta were varied. We compare models to observations via Markov chain Monte Carlo methods. We apply this method to a well-studied set of SNe with progenitors detected in pre-explosion images and compare with results in the literature. The progenitor mass constraints are found to be consistent between our results and those derived from pre-SN imaging and from the analysis of late-time spectral modelling. We have thus developed a robust method to infer the progenitor and explosion properties of SNe II that agrees with other methods in the literature, suggesting that hydrodynamical modelling is able to accurately constrain the physical properties of SNe II.
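As a minimal sketch of the kind of joint fit described above, the code below assumes toy analytic stand-ins for the pre-computed hydrodynamical model grid (in a real analysis these would interpolate MESA plus hydrodynamics outputs) and uses emcee to sample progenitor mass, explosion energy, and nickel mass from a combined light-curve plus photospheric-velocity likelihood. All functions, parameter ranges, and numbers are illustrative assumptions, not the authors' models.

```python
import numpy as np
import emcee

# Toy stand-ins for interpolated hydrodynamical model predictions:
# bolometric light curve (log10 L) and photospheric velocity (km/s) vs time (days).
def model_lbol(t, mass, energy, mni):
    plateau = 42.0 + 0.3 * np.log10(energy) - 0.2 * np.log10(mass)
    tail = 41.0 + np.log10(mni) - (t - 100.0) / 100.0
    return np.where(t < 100.0, plateau, tail)

def model_vph(t, mass, energy, mni):
    return 1.0e4 * np.sqrt(energy / mass) * (t / 50.0) ** (-0.4)

# Synthetic "observed" epochs and data with assumed uncertainties.
t_lc, t_vel = np.linspace(10, 150, 20), np.linspace(20, 80, 8)
truth = (12.0, 1.0, 0.04)                   # M_sun, 10^51 erg, M_sun of 56Ni
rng = np.random.default_rng(0)
lc_obs = model_lbol(t_lc, *truth) + 0.05 * rng.normal(size=t_lc.size)
vel_obs = model_vph(t_vel, *truth) * (1 + 0.05 * rng.normal(size=t_vel.size))
lc_err, vel_err = 0.05, 0.05 * vel_obs

def log_prob(theta):
    mass, energy, mni = theta
    if not (8.0 < mass < 25.0 and 0.1 < energy < 3.0 and 0.001 < mni < 0.2):
        return -np.inf                      # flat priors within the model grid
    chi2 = np.sum(((lc_obs - model_lbol(t_lc, *theta)) / lc_err) ** 2)
    chi2 += np.sum(((vel_obs - model_vph(t_vel, *theta)) / vel_err) ** 2)
    return -0.5 * chi2                      # joint light-curve + velocity likelihood

nwalkers, ndim = 32, 3
p0 = np.array(truth) * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
print(np.percentile(sampler.get_chain(discard=500, flat=True), [16, 50, 84], axis=0))
```

The key point of the sketch is the joint likelihood: both observables constrain the same three parameters, which is what breaks the degeneracies that light-curve modelling alone leaves open.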
This is a supplement to the article "Markov Chain Monte Carlo Based on Deterministic Transformations", available at http://arxiv.org/abs/1106.5850.
We fit the spectral energy distributions (SEDs) of 46 GeV-TeV BL Lac objects in the framework of the leptonic one-zone synchrotron self-Compton (SSC) model and investigate the physical properties of these objects. We use the Markov Chain Monte Carlo (MCMC) method to obtain the basic parameters, such as the magnetic field ($B$), the break energy of the relativistic electron distribution ($\gamma_{\rm b}$), and the electron energy spectral index. Based on the modeling results, we support the following scenarios for GeV-TeV BL Lac objects: (1) Some sources have large Doppler factors, implying that other radiation mechanisms should be considered. (2) Compared with FSRQs, GeV-TeV BL Lac objects have weaker magnetic fields and larger Doppler factors, which cause inefficient cooling and shift the SEDs to higher bands. Their jet powers are around $4.0\times 10^{45}~\rm{erg\cdot s}^{-1}$, compared with radiation powers of $5.0\times 10^{42}~\rm{erg\cdot s}^{-1}$, indicating that only a small fraction of the jet power is transformed into emission. (3) For some BL Lacs with large Doppler factors, the jet could have two substructures, e.g. a fast core and a slow sheath. For most GeV-TeV BL Lacs, Kelvin-Helmholtz instabilities are suppressed by their higher magnetic fields, leading to little micro-variability or intra-day variability in the optical bands. (4) Combined with a sample of FSRQs, an anti-correlation between the peak luminosity $L_{\rm pk}$ and the peak frequency $\nu_{\rm pk}$ is obtained, favouring the blazar sequence scenario. In addition, an anti-correlation between the jet power $P_{\rm jet}$ and the break Lorentz factor $\gamma_{\rm b}$ also supports the blazar sequence.
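To illustrate the fitting machinery (not the authors' code), the sketch below fits a toy smoothly broken power law in log frequency — standing in for a full one-zone SSC calculation, which would depend on $B$, the Doppler factor, and $\gamma_{\rm b}$ — to synthetic SED points using a hand-written random-walk Metropolis sampler. The model form, parameter names, and step sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an SSC spectrum: a broken power law in log10(nu).
# A real analysis would replace this with a full synchrotron + SSC calculation;
# the MCMC machinery below would be unchanged.
def model_sed(log_nu, theta):
    norm, log_nu_b, a1, a2 = theta
    d = log_nu - log_nu_b
    return norm + np.where(d < 0.0, a1 * d, -a2 * d)   # log10(nu F_nu)

# Synthetic "observed" SED points with 0.1 dex errors.
log_nu = np.linspace(10, 26, 25)
truth = (-10.5, 16.0, 0.6, 0.4)
obs = model_sed(log_nu, truth) + rng.normal(0.0, 0.1, log_nu.size)
err = 0.1

def log_post(theta):
    if not (10.0 < theta[1] < 22.0 and 0.0 < theta[2] < 2.0 and 0.0 < theta[3] < 2.0):
        return -np.inf                                  # flat priors
    return -0.5 * np.sum(((obs - model_sed(log_nu, theta)) / err) ** 2)

# Plain random-walk Metropolis sampler.
theta = np.array([-11.0, 15.0, 0.5, 0.5])
lp = log_post(theta)
step = np.array([0.05, 0.1, 0.05, 0.05])
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                          # discard burn-in
print("break position log10(nu_b):", np.percentile(chain[:, 1], [16, 50, 84]))
```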
An important task in machine learning and statistics is the approximation of a probability measure by an empirical measure supported on a discrete point set. Stein Points are a class of algorithms for this task, which proceed by sequentially minimising a Stein discrepancy between the empirical measure and the target and, hence, require the solution of a non-convex optimisation problem to obtain each new point. This paper removes the need to solve this optimisation problem by, instead, selecting each new point based on a Markov chain sample path. This significantly reduces the computational cost of Stein Points and leads to a suite of algorithms that are straightforward to implement. The new algorithms are illustrated on a set of challenging Bayesian inference problems, and rigorous theoretical guarantees of consistency are established.
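A minimal one-dimensional sketch of the idea follows, assuming a standard Gaussian target (score s(x) = -x), the inverse multiquadric (IMQ) base kernel, and a random-walk Metropolis chain as the candidate-generating sample path; each new point is chosen greedily from the chain's states to minimise the kernel Stein discrepancy of the augmented point set. This illustrates the construction under those assumptions and is not the paper's reference implementation.

```python
import numpy as np

# Target: standard 1-D Gaussian, so the score is s(x) = -x.
def score(x):
    return -x

def imq_stein_kernel(x, y):
    """Langevin Stein kernel k0 built from the IMQ base kernel
    k(x, y) = (1 + (x - y)^2)^(-1/2), for a 1-D target with score s."""
    d = x - y
    u = 1.0 + d ** 2
    k = u ** -0.5
    dkdx = -d * u ** -1.5
    dkdy = d * u ** -1.5
    d2k = u ** -1.5 - 3.0 * d ** 2 * u ** -2.5
    return d2k + dkdx * score(y) + dkdy * score(x) + k * score(x) * score(y)

rng = np.random.default_rng(0)

# Candidate set: a random-walk Metropolis sample path targeting the Gaussian.
def rw_metropolis(n, step=1.0):
    xs, x = np.empty(n), 0.0
    for i in range(n):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < -0.5 * (prop ** 2 - x ** 2):
            x = prop
        xs[i] = x
    return xs

candidates = rw_metropolis(2000)

# Greedy Stein point selection: each new point minimises the kernel Stein
# discrepancy of the augmented point set over the candidate pool.
points = []
for _ in range(20):
    obj = 0.5 * imq_stein_kernel(candidates, candidates)
    for p in points:
        obj = obj + imq_stein_kernel(candidates, p)
    points.append(candidates[np.argmin(obj)])

print(np.sort(np.round(points, 2)))
```

Compared with the original Stein Points algorithm, the only change illustrated here is that the inner non-convex optimisation is replaced by a discrete minimisation over states visited by the Markov chain.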
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers, and a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of the iPMCMC method is that it is suitable for distributed and multi-core architectures.
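A compact, simplified sketch of the scheme is given below for a toy 1-D linear-Gaussian state-space model: workers 0..P-1 run conditional SMC on the retained trajectories, the remaining workers run standard SMC, and each conditional node may swap with an as-yet-unclaimed unconditional worker with probability proportional to the marginal-likelihood estimates (index bookkeeping is simplified via relabelling, and multinomial resampling is used throughout). The model, parameter values, and function names are illustrative assumptions, not the paper's reference code.

```python
import numpy as np
from scipy.special import logsumexp

# Toy linear-Gaussian state-space model (hypothetical stand-in):
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
A, SIG_X, SIG_Y = 0.9, 1.0, 0.5
rng = np.random.default_rng(0)

def csmc(y, N, retained=None):
    """Bootstrap (conditional) SMC with multinomial resampling of whole paths.
    If `retained` is given, particle N-1 is clamped to that trajectory."""
    T = len(y)
    X = np.zeros((N, T))                       # stored particle trajectories
    X[:, 0] = rng.normal(0.0, SIG_X, N)        # prior draw at t = 0
    if retained is not None:
        X[-1, 0] = retained[0]
    logZ, logw = 0.0, np.zeros(N)
    for t in range(T):
        if t > 0:
            w = np.exp(logw - logsumexp(logw))
            idx = rng.choice(N, size=N, p=w)   # resample ancestries
            if retained is not None:
                idx[-1] = N - 1                # keep the retained path intact
            X = X[idx]
            X[:, t] = A * X[:, t - 1] + rng.normal(0.0, SIG_X, N)
            if retained is not None:
                X[-1, t] = retained[t]
        # observation log-weights (up to a constant common to all workers)
        logw = -0.5 * ((y[t] - X[:, t]) / SIG_Y) ** 2
        logZ += logsumexp(logw) - np.log(N)
    w = np.exp(logw - logsumexp(logw))
    return X, w, logZ

def ipmcmc(y, sweeps=200, M=4, P=2, N=100):
    """Simplified iPMCMC sweep: P conditional nodes, M-P standard SMC nodes."""
    T = len(y)
    retained = [np.zeros(T) for _ in range(P)]
    out = []
    for _ in range(sweeps):
        runs = [csmc(y, N, retained[m]) if m < P else csmc(y, N) for m in range(M)]
        logZ = np.array([r[2] for r in runs])
        free = list(range(P, M))               # unconditional workers still available
        for j in range(P):
            cand = free + [j]                  # swap candidates for conditional node j
            p = np.exp(logZ[cand] - logsumexp(np.asarray(logZ[cand])))
            pick = cand[rng.choice(len(cand), p=p)]
            if pick in free:
                free.remove(pick)
            X, w, _ = runs[pick]
            retained[j] = X[rng.choice(N, p=w)].copy()   # new retained trajectory
        out.append([r.copy() for r in retained])
    return out

# Simulate data and run.
T = 50
x, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x[t] = (A * x[t - 1] if t else 0.0) + rng.normal(0.0, SIG_X)
    y[t] = x[t] + rng.normal(0.0, SIG_Y)
chains = ipmcmc(y)
print("posterior mean of x_0:", np.mean([r[0][0] for r in chains[50:]]))
```

Because the workers only interact through the swap step, each sweep's M particle-filter runs are independent, which is why the method parallelises naturally across cores or machines.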