
Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

Added by Nicolas Tejos
Publication date: 2017
Field: Physics
Language: English





We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose: (i) that the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions; and (ii) that the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) recovered redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in an efficient and robust manner. In particular, SORT redshifts are able to recover the distinctive features of the cosmic web and can provide unbiased measurements of the two-point correlation function on scales > 4 Mpc/h. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed at exploiting the full potential of large extragalactic photometric surveys.
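The per-sub-volume step of the algorithm is simple enough to sketch in a few lines. The following is a minimal illustration of the two constraints stated in the abstract, assuming the reference dN/dz is available as a histogram on a redshift grid; all function and variable names are illustrative and not taken from the authors' code.

import numpy as np

def sort_subvolume(z_uncertain, z_grid, dndz, rng=None):
    # Recover redshifts for one pencil-beam-like sub-volume:
    # (i) draw as many redshifts as there are uncertain measurements from the
    #     reference dN/dz, so the recovered values match that distribution;
    # (ii) hand them out in the same rank order as the uncertain redshifts,
    #      relying on the stochastic ordering of the Gaussian redshift errors.
    rng = np.random.default_rng() if rng is None else rng
    pdf = dndz / dndz.sum()                      # dN/dz treated as a PDF on z_grid
    z_drawn = np.sort(rng.choice(z_grid, size=len(z_uncertain), p=pdf))
    ranks = np.argsort(np.argsort(z_uncertain))  # rank of each uncertain redshift
    return z_drawn[ranks]

Repeating this step over many overlapping sub-volumes gives each object a set of (non-independent) recovered redshifts from which a new redshift PDF can be estimated, as described above.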

Related research

High-redshift star-forming galaxies are discovered routinely through a flux excess in narrowband (NB) filters caused by an emission line. In most cases, the width of such filters is broad compared to typical line widths, and the throughput of the filters varies substantially within the bandpass. This leads to substantial uncertainties in the redshifts and fluxes derived from observations with one specific NB filter. In this work we demonstrate that the uncertainty in measured line parameters can be sharply reduced by using repeated observations of the same target field with filters that have slightly different transmittance curves. Such data are routinely collected with some large-field imaging cameras that use multiple detectors and a separate filter for each of the detectors; an example is the NB118 data from ESO's VISTA InfraRed CAMera (VIRCAM). We carefully developed and characterized this method to determine more accurate redshift and line flux estimates from the ratio of apparent fluxes measured from observations in different narrowband filters and several matching broadband filters. We then tested the obtainable quality of parameter estimation on both simulated and actual observations, for the example of Hα in the VIRCAM NB118 filters combined with broadband data in Y, J, and H. We find that by using this method the errors in the measured line fluxes can be reduced by up to almost an order of magnitude, and that an accuracy in wavelength of better than 1 nm can be achieved with the ~13 nm wide NB118 filters.
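The core of the method is that, for a line that is narrow compared with the filter width, the ratio of the fluxes measured through two slightly different narrowband filters tracks the ratio of their transmittance curves at the line wavelength. A minimal sketch of that inversion is shown below, assuming the two transmittance curves are tabulated on a common wavelength grid; the function name and the simple grid search are illustrative only (the actual analysis also folds in the broadband data to break degeneracies).

import numpy as np

def line_wavelength_from_ratio(f1, f2, lam_grid, t1, t2):
    # For an unresolved emission line, the flux seen through each narrowband
    # filter is proportional to its transmittance at the line wavelength, so
    # f1 / f2 ~ t1(lam) / t2(lam). Pick the grid wavelength whose model ratio
    # best matches the observed one.
    observed = f1 / f2
    with np.errstate(divide="ignore", invalid="ignore"):
        model = np.where(t2 > 0, t1 / t2, np.inf)
    return lam_grid[np.argmin(np.abs(model - observed))]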
We show that the ratio of galaxies' specific star formation rates (SSFRs) to their host halos' specific mass accretion rates (SMARs) strongly constrains how galaxies' stellar masses, specific star formation rates, and host halo masses evolve over cosmic time. This evolutionary constraint provides a simple way to probe z>8 galaxy populations without direct observations. Tests of the method with galaxy properties at z=4 successfully reproduce the known evolution of the stellar mass-halo mass (SMHM) relation, galaxy SSFRs, and the cosmic star formation rate (CSFR) for 5<z<8. We then predict the continued evolution of these properties for 8<z<15. In contrast to the non-evolution of the SMHM relation at z<4, the median galaxy mass at fixed halo mass increases strongly at z>4. We show that this result is closely linked to the flattening in galaxy SSFRs at z>2 compared to halo specific mass accretion rates; we expect that average galaxy SSFRs at fixed stellar mass will continue their mild evolution to z~15. The expected CSFR shows no breaks or features at z>8.5; this constrains both reionization and the possibility of a steep falloff in the CSFR at z=9-10. Finally, we make predictions for stellar mass and luminosity functions for the James Webb Space Telescope (JWST), which should be able to observe one galaxy with M* > ~10^8 Msun per 10^3 Mpc^3 at z=9.6 and one such galaxy per 10^4 Mpc^3 at z=15.
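For reference, the two specific rates being compared are SSFR = SFR / M* and SMAR = (dMh/dt) / Mh, so their ratio measures how quickly the stellar component grows relative to its host halo. A toy numerical illustration with made-up values (not taken from the paper):

sfr, mstar = 10.0, 1e9        # star formation rate [Msun/yr], stellar mass [Msun]
dmh_dt, mhalo = 100.0, 1e11   # halo accretion rate [Msun/yr], halo mass [Msun]
ssfr = sfr / mstar            # 1e-8 per yr
smar = dmh_dt / mhalo         # 1e-9 per yr
print(ssfr / smar)            # 10.0: stellar mass growing 10x faster, fractionally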
The purpose of this work is to investigate the prospects of using future standard siren data without redshift measurements to constrain cosmological parameters. With successful detections of gravitational wave (GW) signals, an era of GW astronomy has begun. Unlike the electromagnetic domain, GW signals allow direct measurements of luminosity distances to the sources, while their redshifts remain to be measured by identifying electromagnetic counterparts. This leads to significant technical problems for almost all possible BH-BH systems and is the major obstacle to cosmological applications of GW standard sirens. In this paper, we introduce a general framework for using luminosity distances alone for cosmological inference. The idea is to use prior knowledge of the redshift probability distribution of coalescing sources, obtained from the intrinsic merger rates assessed with population synthesis codes. The posterior probability distributions for cosmological parameters can then be calculated. We demonstrate the performance of our method on simulated mock data and show that the luminosity distance measurements would enable an accurate determination of cosmological parameters up to a 20% uncertainty level. We also find that, in order to infer $H_0$ to the 1% level with a flat $\Lambda$CDM model, we need about $10^5$ events.
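The inference scheme amounts to marginalizing each event's distance likelihood over the population-synthesis prior on the source redshift. A minimal sketch for a single event under a flat ΛCDM distance-redshift relation is given below; the function names and the assumption of a Gaussian distance error are illustrative rather than the authors' implementation.

import numpy as np
from scipy import integrate

C_KMS = 299792.458  # speed of light [km/s]

def lum_dist(z, h0, om):
    # Luminosity distance [Mpc] in flat LambdaCDM (matter + Lambda only).
    ez = lambda zz: np.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dc = integrate.quad(lambda zz: C_KMS / (h0 * ez(zz)), 0.0, z)[0]
    return (1.0 + z) * dc

def log_likelihood(dl_obs, sigma_dl, h0, om, z_grid, prior_z):
    # Marginalize a Gaussian distance likelihood over the redshift prior p(z).
    dl_model = np.array([lum_dist(z, h0, om) for z in z_grid])
    like_z = np.exp(-0.5 * ((dl_obs - dl_model) / sigma_dl) ** 2)
    return np.log(np.trapz(like_z * prior_z, z_grid))

Summing such log-likelihoods over all detected events and combining them with priors on the cosmological parameters yields the posterior described above.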
Dense retrieval has been shown to be effective for retrieving relevant documents for Open Domain QA, surpassing popular sparse retrieval methods like BM25. REALM (Guu et al., 2020) is an end-to-end dense retrieval system that relies on MLM-based pretraining for improved downstream QA efficiency across multiple datasets. We study the finetuning of REALM on various QA tasks and explore the limits of various hyperparameter and supervision choices. We find that REALM was significantly undertrained when finetuning, and that simple improvements in the training, supervision, and inference setups can significantly benefit QA results and exceed the performance of models published after it. Our best model, REALM++, incorporates all the best working findings and achieves significant QA accuracy improvements over baselines (~5.5% absolute accuracy) without any model design changes. Additionally, REALM++ matches the performance of large Open Domain QA models which have 3x more parameters, demonstrating the efficiency of the setup.
We demonstrate the ability of convolutional neural networks (CNNs) to mitigate systematics in the virial scaling relation and produce dynamical mass estimates of galaxy clusters with remarkably low bias and scatter. We present two models, CNN$_\mathrm{1D}$ and CNN$_\mathrm{2D}$, which leverage this deep learning tool to infer cluster masses from distributions of member galaxy dynamics. Our first model, CNN$_\mathrm{1D}$, infers cluster mass directly from the distribution of member galaxy line-of-sight velocities. Our second model, CNN$_\mathrm{2D}$, extends the input space of CNN$_\mathrm{1D}$ to learn on the joint distribution of galaxy line-of-sight velocities and projected radial distances. We train each model as a regression over cluster mass using a labeled catalog of realistic mock cluster observations generated from the MultiDark simulation and UniverseMachine catalog. We then evaluate the performance of each model on an independent set of mock observations selected from the same simulated catalog. The CNN models produce cluster mass predictions with lognormal residual scatter as low as $0.132$ dex, more than a factor of 2 improvement over the classical $M$-$\sigma$ power-law estimator. Furthermore, the CNN model reduces prediction scatter relative to similar machine learning approaches by up to 17% while executing in drastically shorter training and evaluation times (by a factor of 30) and producing considerably more robust mass predictions (improving prediction stability under variations in galaxy sampling rate by 30%).
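As a concrete illustration of the CNN$_\mathrm{1D}$ idea, the sketch below regresses log cluster mass from a binned line-of-sight velocity distribution. The architecture (channel counts, kernel sizes, number of bins) is illustrative and is not the configuration used in the paper.

import torch
import torch.nn as nn

class CNN1D(nn.Module):
    def __init__(self, n_bins=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted log10(M/Msun)
        )

    def forward(self, v_hist):
        # v_hist: (batch, 1, n_bins) histogram of member line-of-sight velocities
        return self.net(v_hist)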
