SkyPy is an open-source Python package for simulating the astrophysical sky. It comprises a library of physical and empirical models across a range of observables and a command-line script to run end-to-end simulations. The library provides functions that sample realisations of sources and their associated properties from probability distributions. Simulation pipelines are constructed from these models using a YAML-based configuration syntax, while task scheduling and data dependencies are handled internally and the modular design allows users to interface with external software. SkyPy is developed and maintained by a diverse community of domain experts with a focus on software sustainability and interoperability. By fostering development, it provides a framework for correlated simulations of a range of cosmological probes including galaxy populations, large scale structure, the cosmic microwave background, supernovae and gravitational waves. Version 0.4 implements functions that model various properties of galaxies including luminosity functions, redshift distributions and optical photometry from spectral energy distribution templates. Future releases will provide additional modules, for example, to simulate populations of dark matter halos and model the galaxy-halo connection, making use of existing software packages from the astrophysics community where appropriate.
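The library's core task, sampling realisations of sources and their properties from probability distributions, can be illustrated with a small pure-Python sketch. This is not the SkyPy API; the function names and the choice of a Smail et al. redshift distribution are illustrative assumptions.

```python
import math
import random

def smail_pdf(z, z0=0.5, alpha=2.0, beta=1.5):
    """Unnormalised Smail et al. redshift distribution: n(z) ∝ z^alpha exp(-(z/z0)^beta)."""
    return z ** alpha * math.exp(-((z / z0) ** beta))

def sample_redshifts(n, zmax=3.0, seed=42):
    """Rejection-sample n redshifts from the (truncated) Smail distribution."""
    rng = random.Random(seed)
    # Envelope: grid-search the peak of the PDF, with a small safety margin.
    fmax = 1.05 * max(smail_pdf(0.01 * i) for i in range(1, 300))
    samples = []
    while len(samples) < n:
        z = rng.uniform(0.0, zmax)
        if rng.uniform(0.0, fmax) < smail_pdf(z):
            samples.append(z)
    return samples

zs = sample_redshifts(1000)
```

In SkyPy itself such a sampler would be one library function referenced from the YAML configuration, with the pipeline runner resolving data dependencies between tables.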
The dominant uncertainty in the current measurement of the Hubble constant ($H_0$) with strong gravitational lensing time delays is attributed to uncertainties in the mass profiles of the main deflector galaxies. Strongly lensed supernovae (glSNe) can provide, in addition to measurable time delays, lensing magnification constraints when knowledge about the unlensed apparent brightness of the explosion is imposed. We present a hierarchical Bayesian framework to combine a dataset of SNe that are not strongly lensed and a dataset of strongly lensed SNe with measured time delays. We jointly constrain (i) $H_0$ using the time delays as an absolute distance indicator, (ii) the lens model profiles using the magnification ratio of lensed and unlensed fluxes at the population level, and (iii) the unlensed apparent magnitude distribution of the SN population and the redshift-luminosity relation of the relative expansion history of the Universe. We apply our joint inference framework to a future expected dataset of glSNe and forecast that a sample of 144 glSNe of Type Ia with well-measured time series and imaging data will measure $H_0$ to 1.5%. We discuss strategies to mitigate systematics associated with using absolute flux measurements of glSNe to constrain the mass density profiles. Using the magnification of SN images is a promising and complementary alternative to using stellar kinematics. Future surveys, such as the Rubin and Roman observatories, will be able to discover the necessary number of glSNe, and with additional follow-up observations this methodology will provide precise constraints on mass profiles and $H_0$.
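The two ingredients of this abstract's framework, the magnification constraint from a standardizable candle and the way a measured time delay rescales $H_0$, both reduce to one-line relations. The sketch below is an illustrative simplification, not the paper's inference code; all names and numbers are hypothetical.

```python
def magnification_from_mags(m_lensed, m_unlensed):
    """Flux-ratio magnification of a standardizable candle:
    mu = f_lensed / f_unlensed = 10^(-0.4 * (m_lensed - m_unlensed))."""
    return 10.0 ** (-0.4 * (m_lensed - m_unlensed))

def rescale_h0(h0_fiducial, dt_predicted, dt_observed):
    """Time delays scale with the time-delay distance D_dt ∝ 1/H0, so a
    measured delay rescales a fiducial H0 by dt_predicted / dt_observed."""
    return h0_fiducial * dt_predicted / dt_observed

mu = magnification_from_mags(22.5, 24.0)  # image 1.5 mag brighter than the unlensed SN
h0 = rescale_h0(70.0, 30.0, 28.0)         # delays predicted at H0 = 70 come out too long
```

The hierarchical part of the analysis amounts to applying these relations jointly across the lensed and unlensed SN populations rather than lens by lens.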
lenstronomy is an Astropy-affiliated Python package for gravitational lensing simulations and analyses. lenstronomy was introduced by Birrer and Amara (2018) and is based on the linear basis set approach by Birrer et al. (2015). The user and developer base of lenstronomy has substantially grown since then, and the software has become an integral part of a wide range of recent analyses, such as measuring the Hubble constant with time-delay strong lensing or constraining the nature of dark matter from resolved and unresolved small-scale lensing distortion statistics. The modular design has allowed the community to incorporate innovative new methods, as well as to develop enhanced software and wrappers with more specific aims on top of the lenstronomy API. Through community engagement and involvement, lenstronomy has become the foundation of an ecosystem of affiliated packages extending the original scope of the software and proving its robustness and applicability at the forefront of the strong gravitational lensing community in an open-source and reproducible manner.
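As a flavour of the kind of calculation lenstronomy's lens-model modules perform, the sketch below solves the lens equation for a singular isothermal sphere (SIS) in pure Python. It is a toy illustration under that single-profile assumption, not the lenstronomy API.

```python
def sis_images(beta, theta_E):
    """Image positions and magnifications for a singular isothermal sphere.

    Solves the 1D lens equation beta = theta - theta_E * sign(theta) for a
    source at angle 0 < beta < theta_E (the two-image regime)."""
    if not 0.0 < beta < theta_E:
        raise ValueError("two images require 0 < beta < theta_E")
    theta_plus = beta + theta_E       # outer image, same side as the source
    theta_minus = beta - theta_E      # inner image, opposite side
    mu_plus = 1.0 + theta_E / beta    # magnification mu = (1 - theta_E/|theta|)^-1
    mu_minus = theta_E / beta - 1.0   # |mu| of the parity-flipped inner image
    return (theta_plus, mu_plus), (theta_minus, mu_minus)

outer, inner = sis_images(0.2, 1.0)   # Einstein radius 1", source offset 0.2"
```

lenstronomy generalises this to arbitrary combinations of 2D mass profiles, with ray shooting, time delays, and magnifications exposed through a common interface.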
Strongly lensed explosive transients such as supernovae, gamma-ray bursts, fast radio bursts, and gravitational waves are very promising tools to determine the Hubble constant ($H_0$) in the near future, in addition to strongly lensed quasars. In this work, we show that the transient nature of the point source provides an advantage over quasars: the lensed host galaxy can be observed before or after the transient's appearance. Therefore, the lens model can be derived from images free of contamination from bright point sources. We quantify this advantage by comparing the precision of a lens model obtained from the same lenses with and without point sources. Based on Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) observations with the same sets of lensing parameters, we simulate realistic mock datasets of 48 quasar lensing systems (i.e., adding AGN in the galaxy center) and 48 galaxy-galaxy lensing systems (assuming the transient source is not visible but the time delay and image positions have been or will be measured). We then model the images and compare the inferences of the lens model parameters and $H_0$. We find that the precision of the lens models (in terms of the deflector mass slope) is better by a factor of 4.1 for the sample without lensed point sources, resulting in an increase of $H_0$ precision by a factor of 2.9. The opportunity to observe the lens systems without the transient point sources provides an additional advantage for time-delay cosmography over lensed quasars. It facilitates the determination of higher signal-to-noise stellar kinematics of the main deflector, and thus its mass density profile, which in turn plays a key role in breaking the mass-sheet degeneracy and constraining $H_0$.
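The mass-sheet degeneracy invoked at the end of this abstract has a compact algebraic form: rescaling the convergence as $\kappa \to \lambda\kappa + (1-\lambda)$ leaves image positions and flux ratios unchanged but rescales predicted time delays, and hence the inferred $H_0$, by $\lambda$. A minimal sketch (function names are illustrative):

```python
def mst_transform(kappa, lam):
    """Mass-sheet transform of the convergence: kappa -> lam * kappa + (1 - lam).
    Imaging observables are invariant, so imaging alone cannot pin down lam."""
    return lam * kappa + (1.0 - lam)

def h0_under_mst(h0, lam):
    """Predicted time delays scale by lam under the transform, so for fixed
    observed delays the inferred H0 is rescaled by the same factor."""
    return lam * h0
```

This is why independent information such as stellar kinematics, or the magnification constraints of the glSNe framework above, is needed to fix $\lambda$.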
Automated searches for strong gravitational lensing in optical imaging survey datasets often employ machine learning and deep learning approaches. These techniques require more example systems to train the algorithms than have presently been discovered, which creates a need for simulated images as training dataset supplements. This work introduces and summarizes deeplenstronomy, an open-source Python package that enables efficient, large-scale, and reproducible simulation of images of astronomical systems. A full suite of unit tests, documentation, and example notebooks are available at https://deepskies.github.io/deeplenstronomy/.
Joint analyses of small-scale cosmological structure probes are relatively unexplored and promise to advance measurements of microphysical dark matter properties using heterogeneous data. Here, we present a multidimensional analysis of dark matter substructure using strong gravitational lenses and the Milky Way (MW) satellite galaxy population, accounting for degeneracies in model predictions and using covariances in the constraining power of these individual probes for the first time. We simultaneously infer the projected subhalo number density and the half-mode mass describing the suppression of the subhalo mass function in thermal relic warm dark matter (WDM), $M_{\mathrm{hm}}$, using the semianalytic model \texttt{Galacticus} to connect the subhalo population inferred from MW satellite observations to the strong lensing host halo mass and redshift regime. Combining MW satellite and strong lensing posteriors in this parameter space yields $M_{\mathrm{hm}}<10^{7.0}\ M_{\odot}$ (WDM particle mass $m_{\mathrm{WDM}}>9.7\ \mathrm{keV}$) at $95\%$ confidence and disfavors $M_{\mathrm{hm}}=10^{7.4}\ M_{\odot}$ ($m_{\mathrm{WDM}}=7.4\ \mathrm{keV}$) with a 20:1 marginal likelihood ratio, improving limits on $m_{\mathrm{WDM}}$ set by the two methods independently by $\sim 30\%$. These results are marginalized over the line-of-sight contribution to the strong lensing signal, the mass of the MW host halo, and the efficiency of subhalo disruption due to baryons, and are robust to differences in the disruption efficiency between the MW and strong lensing regimes at the $\sim 10\%$ level. This work paves the way for unified analyses of next-generation small-scale structure measurements covering a wide range of scales and redshifts.
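The half-mode mass and WDM particle mass quoted in this abstract are linked by the commonly used scaling $M_{\mathrm{hm}} \propto m_{\mathrm{WDM}}^{-3.33}$; calibrating on one of the abstract's own value pairs reproduces the other. A sketch assuming that slope (function name illustrative):

```python
def m_wdm_from_mhm(log10_mhm, log10_mhm_ref=7.0, m_ref_kev=9.7, slope=-3.33):
    """Invert M_hm ∝ m_WDM^slope, calibrated so that
    M_hm = 10^7.0 Msun corresponds to m_WDM = 9.7 keV."""
    return m_ref_kev * 10.0 ** ((log10_mhm - log10_mhm_ref) / slope)

m = m_wdm_from_mhm(7.4)   # disfavored half-mode mass -> particle mass in keV
```

The 0.4 dex step in $M_{\mathrm{hm}}$ maps onto the 9.7 keV versus 7.4 keV limits quoted above, which is a useful consistency check on the conversion.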
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses for Hubble constant ($H_0$) determination. Our BNN was trained on synthetic HST-quality images of strongly lensed active galactic nuclei (AGN) with lens galaxy light included. The BNN can accurately characterize the posterior PDFs of model parameters governing the elliptical power-law mass profile in an external shear field. We then propagate the BNN-inferred posterior PDFs into ensemble $H_0$ inference, using simulated time delay measurements from a plausible dedicated monitoring campaign. Assuming well-measured time delays and a reasonable set of priors on the environment of the lens, we achieve a median precision of $9.3\%$ per lens in the inferred $H_0$. A simple combination of 200 test-set lenses results in a precision of $0.5\ \mathrm{km\ s^{-1}\ Mpc^{-1}}$ ($0.7\%$), with no detectable bias in this $H_0$ recovery test. The computation time for the entire pipeline -- including the training set generation, BNN training, and $H_0$ inference -- translates to 9 minutes per lens on average for 200 lenses and converges to 6 minutes per lens as the sample size is increased. Being fully automated and efficient, our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling for $H_0$ inference.
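The ensemble precision quoted above follows from the usual $1/\sqrt{N}$ combination of independent measurements: $9.3\%/\sqrt{200} \approx 0.66\%$, consistent with the quoted $0.7\%$. A one-line check (illustrative, ignoring correlated systematics):

```python
import math

def ensemble_precision(per_lens_percent, n_lenses):
    """Statistical precision of N independent, equally informative lenses."""
    return per_lens_percent / math.sqrt(n_lenses)

p200 = ensemble_precision(9.3, 200)   # per-lens 9.3% -> ensemble ~0.66%
```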
In the past few years, approximate Bayesian Neural Networks (BNNs) have demonstrated the ability to produce statistically consistent posteriors on a wide range of inference problems at unprecedented speed and scale. However, any disconnect between training sets and the distribution of real-world objects can introduce bias when BNNs are applied to data. This is a common challenge in astrophysics and cosmology, where the unknown distribution of objects in our Universe is often the science goal. In this work, we incorporate BNNs with flexible posterior parameterizations into a hierarchical inference framework that allows for the reconstruction of population hyperparameters and removes the bias introduced by the training distribution. We focus on the challenge of producing posterior PDFs for strong gravitational lens mass model parameters given Hubble Space Telescope (HST) quality single-filter, lens-subtracted, synthetic imaging data. We show that the posterior PDFs are sufficiently accurate (i.e., statistically consistent with the truth) across a wide variety of power-law elliptical lens mass distributions. We then apply our approach to test data sets whose lens parameters are drawn from distributions that are drastically different from the training set. We show that our hierarchical inference framework mitigates the bias introduced by an unrepresentative training set's interim prior. Simultaneously, given a sufficiently broad training set, we can precisely reconstruct the population hyperparameters governing our test distributions. Our full pipeline, from training to hierarchical inference on thousands of lenses, can be run in a day. The framework presented here will allow us to efficiently exploit the full constraining power of future ground- and space-based surveys.
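The bias removal described above amounts to importance reweighting: each posterior sample is weighted by the ratio of the population model to the interim (training) prior. A self-contained sketch with Gaussian toy distributions (all names and numbers are illustrative, not the paper's pipeline):

```python
import math
import random

def log_gauss(x, mu, sigma):
    """Log-density of a 1D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def reweight(samples, log_interim, log_population):
    """Normalised importance weights that swap the interim (training) prior
    for a population model: w_i ∝ p_pop(x_i) / p_interim(x_i)."""
    logw = [log_population(x) - log_interim(x) for x in samples]
    m = max(logw)                                  # log-sum-exp stabilisation
    w = [math.exp(l - m) for l in logw]
    total = sum(w)
    return [x / total for x in w]

rng = random.Random(0)
draws = [rng.gauss(0.0, 1.0) for _ in range(20000)]   # samples under interim prior N(0, 1)
weights = reweight(draws,
                   lambda x: log_gauss(x, 0.0, 1.0),  # interim prior
                   lambda x: log_gauss(x, 0.5, 0.5))  # narrower population model
mean = sum(w * x for w, x in zip(weights, draws))     # recovers the population mean ~0.5
```

In the hierarchical setting the population parameters (here the fixed 0.5 and 0.5) are themselves inferred by maximising the weighted evidence over the lens sample.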
One of the main challenges in using high-redshift active galactic nuclei to study the correlations between the mass of the supermassive black hole ($M_{\mathrm{BH}}$) and the properties of their active host galaxies is instrumental resolution. Strong lensing magnification effectively increases instrumental resolution and thus helps to address this challenge. In this work, we study eight strongly lensed active galactic nuclei (AGN) with deep Hubble Space Telescope imaging, using the lens modelling code lenstronomy to reconstruct the image of the source. Using the reconstructed brightness of the host galaxy, we infer the host galaxy stellar mass based on stellar population models. $M_{\mathrm{BH}}$ are estimated from broad emission lines using standard methods. Our results are in good agreement with recent work based on non-lensed AGN, demonstrating the potential of using strongly lensed AGN to extend the study of the correlations to higher redshifts. At the moment, the sample size of lensed AGN is small, and thus they provide mostly a consistency check on systematic errors related to resolution for the non-lensed AGN. However, the number of known lensed AGN is expected to increase dramatically in the next few years, through dedicated searches in ground- and space-based wide-field surveys, and they may become a key diagnostic of black hole and galaxy co-evolution.
Tao Yang, Simon Birrer, Bin Hu (2020)
Strong gravitational lensing has been a powerful probe of cosmological models and gravity. To date, constraints in either domain have been obtained separately. We propose a new methodology through which the cosmological model, specifically the Hubble constant, and the post-Newtonian parameter can be simultaneously constrained. Using time-delay cosmography from strong lensing combined with the stellar kinematics of the deflector lens, we demonstrate that the Hubble constant and post-Newtonian parameter are incorporated in two distance ratios which reflect the lensing mass and dynamical mass, respectively. Through the reanalysis of the four publicly released lenses' distance posteriors from the H0LiCOW collaboration, simultaneous constraints on the Hubble constant and post-Newtonian parameter are obtained. Our results suggest no deviation from General Relativity, $\gamma_{\mathrm{PPN}}=0.87^{+0.19}_{-0.17}$, with a Hubble constant that favors the local-Universe value, $H_0=73.65^{+1.95}_{-2.26}$ km s$^{-1}$ Mpc$^{-1}$. Finally, we forecast the robustness of gravity tests using time-delay strong lensing for constraints we expect in the next few years. We find that the joint constraint from 40 lenses is able to reach the order of $7.7\%$ for the post-Newtonian parameter and $1.4\%$ for the Hubble constant.