
The Improved Ep-TL-Lp Diagram and a Robust Regression Method

Posted by Ryo Tsutsui
Publication date: 2011
Research field: Physics
Paper language: English





The accuracy and reliability of gamma-ray bursts (GRBs) as distance indicators are strongly restricted by their systematic errors, which are larger than the statistical errors. These systematic errors might come from either intrinsic variations of GRBs or systematic errors in the observations. In this paper, we consider the possible origins of systematic errors in the following observables: (i) the spectral peak energies ($E_p$) estimated by the cut-off power-law (CPL) function, and (ii) the peak luminosities ($L_p$) estimated over 1 second in observer time. Removing or correcting them, we reveal the true intrinsic variation of the $E_p$-$T_L$-$L_p$ relation of GRBs. Here $T_L$ is the third parameter of GRBs, defined as $T_L \equiv E_{\rm iso}/L_p$. Not only is the time resolution of $L_p$ converted from observer time to GRB rest-frame time, but the time resolution with the largest likelihood is also sought. After removing the obvious origins of observational systematic errors mentioned above, some outliers still seem to remain. For this reason, we take into account another origin of systematic error: (iii) the contamination of short GRBs or other populations. To estimate the best-fit parameters of the $E_p$-$T_L$-$L_p$ relation from data including outliers, we develop a new method which combines robust regression with an outlier-identification technique. Applying our new method to 18 GRBs with $\sigma_{E_p}/E_p < 0.1$, we detect 6 outliers and find that the $E_p$-$T_L$-$L_p$ relation becomes tightest around 3 seconds.
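As a rough illustration of the kind of method described (not the authors' exact estimator), the sketch below fits a plane in ($\log E_p$, $\log T_L$, $\log L_p$) space with a robust Huber loss and flags the points it down-weights as outliers; the synthetic sample, fiducial coefficients, and contamination fraction are assumptions made for the example.

```python
# A minimal sketch of robust plane fitting with outlier flagging in log space,
# in the spirit of the paper's method. The Huber loss and synthetic data are
# illustrative assumptions, not the authors' exact estimator or GRB sample.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: log Lp = a + b*log Ep + c*log TL + intrinsic scatter
n = 18
log_Ep = rng.uniform(1.5, 3.0, n)          # log10 of peak energy [keV]
log_TL = rng.uniform(0.0, 1.5, n)          # log10 of TL = Eiso / Lp [s]
a, b, c = 46.0, 1.8, -1.0                  # assumed fiducial coefficients
log_Lp = a + b * log_Ep + c * log_TL + rng.normal(0.0, 0.1, n)
log_Lp[:4] += rng.normal(1.0, 0.3, 4)      # contaminate a few events ("outliers")

X = np.column_stack([log_Ep, log_TL])
fit = HuberRegressor(epsilon=1.35).fit(X, log_Lp)

print("coefficients:", fit.coef_, "intercept:", fit.intercept_)
print("flagged outliers:", np.where(fit.outliers_)[0])
```

`HuberRegressor.outliers_` marks the samples whose residuals exceed the robust scale estimate, which is one simple way to combine robust fitting with outlier identification in a single step.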


Read also

Context: There is a wide discrepancy in current estimates of the strength of convection flows in the solar interior obtained using different helioseismic methods applied to observations from SDO/HMI. The cause of these disparities is not known. Aims: As one step in the effort to resolve this discrepancy, we aim to characterize the multi-ridge fitting code for ring-diagram helioseismic analysis that is used to obtain flow estimates from local power spectra of solar oscillations. Methods: We updated the multi-ridge fitting code developed by Greer et al. (2014) to solve several problems we identified through our inspection of the code. In particular, we changed the merit function to account for the smoothing of the power spectra, the model for the power spectrum, and the noise estimates. We used Monte Carlo simulations to generate synthetic data and to characterize the noise and bias of the updated code by fitting these synthetic data. Results: The bias in the output fit parameters, apart from the parameter describing the amplitude of the p-mode resonances in the power spectrum, is below what can be measured from the Monte Carlo simulations. The amplitude parameters are underestimated; this is a consequence of choosing to fit the logarithm of the averaged power. We defer fixing this problem, as it is well understood and not significant for measuring flows in the solar interior. The scatter in the fit parameters from the Monte Carlo simulations is well modeled by the formal error estimates from the code. Conclusions: We document and demonstrate a reliable multi-ridge fitting method for ring-diagram analysis. The differences between the updated fitting results and the original results are less than one order of magnitude, and therefore we suspect that the changes will not eliminate the aforementioned orders-of-magnitude discrepancy in the amplitude of convective flows in the solar interior.
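As a toy illustration of this validation strategy (not the authors' pipeline), the sketch below generates synthetic spectra with multiplicative exponential noise from a known single-Lorentzian ridge model, fits the logarithm of the smoothed power, and compares the Monte Carlo bias and scatter with the formal errors; the model, frequency grid, and parameter values are assumptions for the example.

```python
# Toy Monte Carlo characterization of a fitting code: generate synthetic
# spectra from known parameters, fit them, and compare recovered bias and
# scatter with the formal errors. The Lorentzian ridge model is a stand-in.
import numpy as np
from scipy.optimize import curve_fit

def ridge(nu, amp, nu0, width, bg):
    # Single Lorentzian resonance on a flat background
    return amp / (1.0 + ((nu - nu0) / width) ** 2) + bg

rng = np.random.default_rng(1)
nu = np.linspace(2800.0, 3200.0, 400)           # frequency grid [uHz]
truth = (10.0, 3000.0, 20.0, 1.0)               # amp, nu0, width, bg

fits, errs = [], []
for _ in range(500):
    noisy = ridge(nu, *truth) * rng.exponential(1.0, nu.size)  # power-like noise
    smooth = np.convolve(noisy, np.ones(5) / 5, mode="same")   # boxcar smoothing
    # Fit the logarithm of the smoothed power, as the updated code does
    popt, pcov = curve_fit(lambda x, *p: np.log(ridge(x, *p)),
                           nu, np.log(smooth), p0=truth)
    fits.append(popt)
    errs.append(np.sqrt(np.diag(pcov)))

fits = np.array(fits)
print("bias:", fits.mean(axis=0) - np.array(truth))
print("MC scatter vs formal error:", fits.std(axis=0), np.mean(errs, axis=0))
```

Even in this toy version, fitting the logarithm of averaged power tends to pull the amplitude parameter low, mirroring the underestimation the abstract describes.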
In this paper we investigate the structure of the fundamental polytope used in the Linear Programming decoding introduced by Feldman, Karger and Wainwright. We begin by showing that for expander codes, every fractional pseudocodeword always has at least a constant fraction of non-integral bits. We then prove that for expander codes, the active set of any fractional pseudocodeword is smaller by a constant fraction than the active set of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same complexity order as LP decoding that provably performs better, for any blocklength. It proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudocodewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudocodewords, all adjacent to the ML codeword on the LP decoding polytope and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudocodewords above the ML codeword.
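For readers unfamiliar with the baseline being improved here, the sketch below sets up plain Feldman-Karger-Wainwright LP decoding over the fundamental polytope (one inequality per odd-sized subset of each check's neighborhood); the tiny parity-check matrix and channel LLRs are illustrative, and the facet-guessing refinement from the paper is not implemented.

```python
# A minimal sketch of LP decoding over the fundamental polytope using scipy.
# The toy parity-check matrix and received LLRs are illustrative assumptions.
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([-1.2, 0.8, 0.3, -0.5, 1.1, 0.4])  # channel log-likelihood ratios

A_ub, b_ub = [], []
for row in H:
    nbr = np.flatnonzero(row)
    # One inequality per odd-sized subset S of the check's neighborhood N:
    #   sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1
    for k in range(1, len(nbr) + 1, 2):
        for S in itertools.combinations(nbr, k):
            a = np.zeros(H.shape[1])
            a[list(nbr)] = -1.0
            a[list(S)] = 1.0
            A_ub.append(a)
            b_ub.append(len(S) - 1)

res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, 1)] * H.shape[1])
x = res.x
print("LP optimum:", np.round(x, 3))
print("integral codeword" if np.allclose(x, x.round())
      else "fractional pseudocodeword")
```

A fractional optimum is exactly the pseudocodeword failure mode the paper analyzes; its improved algorithm re-solves the LP on guessed facets of this same polytope.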
We demonstrate the ability of convolutional neural networks (CNNs) to mitigate systematics in the virial scaling relation and produce dynamical mass estimates of galaxy clusters with remarkably low bias and scatter. We present two models, CNN$_\mathrm{1D}$ and CNN$_\mathrm{2D}$, which leverage this deep learning tool to infer cluster masses from distributions of member galaxy dynamics. Our first model, CNN$_\mathrm{1D}$, infers cluster mass directly from the distribution of member galaxy line-of-sight velocities. Our second model, CNN$_\mathrm{2D}$, extends the input space of CNN$_\mathrm{1D}$ to learn on the joint distribution of galaxy line-of-sight velocities and projected radial distances. We train each model as a regression over cluster mass using a labeled catalog of realistic mock cluster observations generated from the MultiDark simulation and UniverseMachine catalog. We then evaluate the performance of each model on an independent set of mock observations selected from the same simulated catalog. The CNN models produce cluster mass predictions with lognormal residuals of scatter as low as $0.132$ dex, greater than a factor of 2 improvement over the classical $M$-$\sigma$ power-law estimator. Furthermore, the CNN model reduces prediction scatter relative to similar machine learning approaches by up to $17\%$ while executing in drastically shorter training and evaluation times (by a factor of 30) and producing considerably more robust mass predictions (improving prediction stability under variations in galaxy sampling rate by $30\%$).
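A minimal sketch of the CNN$_\mathrm{1D}$ idea, regressing a mass label from a binned line-of-sight velocity distribution, might look as follows in PyTorch; the layer sizes, bin count, and random training batch are assumptions for illustration, not the authors' architecture or catalog.

```python
# Sketch of a 1D CNN regressing cluster mass from a binned velocity
# distribution. All sizes and the dummy data are illustrative assumptions.
import torch
import torch.nn as nn

class CNN1D(nn.Module):
    def __init__(self, n_bins=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),               # predicted log10(M) per cluster
        )

    def forward(self, x):                   # x: (batch, 1, n_bins)
        return self.net(x).squeeze(-1)

model = CNN1D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch: 8 velocity histograms and their log-mass labels
x = torch.rand(8, 1, 48)
y = 14.0 + torch.rand(8)
for _ in range(5):                          # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", loss.item())
```

The 2D variant would simply swap the `Conv1d`/`MaxPool1d` layers for their 2D counterparts and feed a joint velocity-radius histogram.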
We reconsider correlations among the spectral peak energy ($E_p$), 1-second peak luminosity ($L_p$), and isotropic energy ($E_{\rm iso}$), using the database constructed by \citet{yonetoku10}, which consists of 109 Gamma-Ray Bursts (GRBs) whose redshifts are known and whose $E_p$, $L_p$, and $E_{\rm iso}$ are well determined. We divide the events into two groups by their data quality. One (the gold data set) consists of GRBs with peak energies determined by the Band model with four free parameters. GRBs in the other group (the bronze data set) have relatively poor energy spectra, so their peak energies were determined by the Band model with a fixed spectral index (i.e. three free parameters) or by the cut-off power-law (CPL) model with three free parameters. Using only the gold data set, we find the intrinsic dispersion in $\log L_p$ ($=\sigma_{\rm int}$) is 0.13 and 0.22 for the Tsutsui correlation ($T_L \equiv E_{\rm iso}/L_p$) and the Yonetoku correlation, respectively. We also find that GRBs in the bronze data set have systematically larger $E_p$ than expected from the correlations constructed with the gold data set. This means that the intrinsic dispersion of correlations among $E_p$, $L_p$, and $E_{\rm iso}$ of GRBs depends on the quality of the data set. At present, using the Tsutsui correlation with the gold data set, we would be able to determine the luminosity distance with $\sim 16\%$ error, which might be useful for determining the nature of the dark energy at high redshift $z > 3$.
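The intrinsic dispersion quoted here is the scatter left after accounting for measurement errors; a standard way to estimate it (sketched below on synthetic data, not the authors' exact procedure) is to maximize a Gaussian likelihood in which $\sigma_{\rm int}$ is added in quadrature to each point's error.

```python
# Sketch of maximum-likelihood estimation of the intrinsic dispersion of a
# linear log-log correlation. The synthetic data and fiducial slope are
# illustrative assumptions, not the paper's GRB sample.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 30
x = rng.uniform(2.0, 3.5, n)                    # e.g. log Ep
sig_y = rng.uniform(0.05, 0.15, n)              # measurement errors on log Lp
y = 47.0 + 1.6 * x + rng.normal(0, 0.13, n) + rng.normal(0, sig_y)

def neg_log_like(p):
    a, b, log_sig_int = p
    var = sig_y**2 + np.exp(2 * log_sig_int)    # total variance per point
    r = y - (a + b * x)
    return 0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))

res = minimize(neg_log_like, x0=[46.0, 1.5, np.log(0.1)])
a, b, sig_int = res.x[0], res.x[1], np.exp(res.x[2])
print(f"slope={b:.2f}, intercept={a:.2f}, sigma_int={sig_int:.3f}")
```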
Aims. The main purpose of this work is to provide a method to derive tabulated observational constraints on the halo mass function (HMF) by studying the magnification bias effect on high-redshift submillimeter galaxies. Under the assumption of universality, we parametrize the HMF according to two traditional models, namely the Sheth and Tormen (ST) and Tinker fits, and assess their performance in explaining the measured data within the $\Lambda$ cold dark matter ($\Lambda$CDM) model. We also study the potential influence of the halo occupation distribution (HOD) parameters in this analysis and discuss two important aspects regarding the HMF parametrization. Methods. We measure the cross-correlation function between a foreground sample of GAMA galaxies with redshifts in the range $0.2<z<0.8$ and a background sample of H-ATLAS galaxies with redshifts in the range $1.2<z<4.0$ and carry out an MCMC algorithm to check this observable against its mathematical prediction within the halo model formalism. Results. If all HMF parameters are assumed to be positive, the ST fit only seems to fully explain the measurements by forcing the mean number of satellite galaxies in a halo to increase substantially from its prior mean value. The Tinker fit, on the other hand, provides a robust description of the data without relevant changes in the HOD parameters, but with some dependence on the prior range of two of its parameters. When the normalization condition for the HMF is dropped and we allow negative values of the $p_1$ parameter in the ST fit, all the involved parameters are better determined, unlike in the previous models, thus deriving the most general HMF constraints. While all cases are in agreement with the traditional fits within the uncertainties, the last one hints at a slightly higher number of halos at intermediate and high masses, raising the important point of the allowed parameter range.
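As a schematic of the kind of MCMC check described (with a toy power law standing in for the halo-model prediction, and assuming the emcee package is available), one could sample the posterior of a model for a measured cross-correlation as follows.

```python
# Generic sketch of fitting a measured cross-correlation with MCMC. The toy
# power-law model, data, and flat priors are illustrative assumptions, not
# the halo-model prediction or measurements used in the paper.
import numpy as np
import emcee

theta_obs = np.logspace(-1, 0.5, 10)            # angular scales [arcmin]
w_obs = 0.05 * theta_obs**-0.8                  # toy measured cross-correlation
w_err = 0.2 * w_obs

def log_prob(p):
    amp, slope = p
    if not (0 < amp < 1 and -2 < slope < 0):    # flat priors
        return -np.inf
    model = amp * theta_obs**slope
    return -0.5 * np.sum(((w_obs - model) / w_err) ** 2)

ndim, nwalkers = 2, 16
p0 = np.array([0.05, -0.8]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)
print("posterior means:", chain.mean(axis=0))
```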