
Modeling microlensing events with MulensModel

Published by: Radosław Poleski
Publication date: 2018
Research field: Physics
Paper language: English





We introduce MulensModel, a software package for gravitational microlensing modeling. The package provides a framework for calculating microlensing model magnification curves and goodness-of-fit statistics for microlensing events with single and binary lenses, as well as a variety of higher-order effects: extended sources with limb darkening, annual microlensing parallax, satellite microlensing parallax, and binary lens orbital motion. The software could also be used for analysis of the planned microlensing survey by the NASA flagship WFIRST satellite. MulensModel is available at https://github.com/rpoleski/MulensModel/.
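As an illustration of the simplest magnification curve such a package computes, the standard point-source point-lens (Paczyński) model can be sketched in plain Python. This is a generic textbook formula, not MulensModel's API; the parameter names `t_0`, `u_0`, and `t_E` follow the usual microlensing convention (time of closest approach, impact parameter in Einstein-radius units, and Einstein-radius crossing time):

```python
import math

def pspl_magnification(t, t_0, u_0, t_E):
    """Point-source point-lens (Paczynski) magnification at time t.

    u is the lens-source separation in units of the Einstein radius;
    the magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    """
    u = math.sqrt(u_0**2 + ((t - t_0) / t_E)**2)
    return (u**2 + 2.0) / (u * math.sqrt(u**2 + 4.0))

# Magnification peaks at t = t_0 and relaxes to ~1 far from the event.
peak = pspl_magnification(0.0, t_0=0.0, u_0=0.1, t_E=20.0)      # ~10
baseline = pspl_magnification(500.0, t_0=0.0, u_0=0.1, t_E=20.0)  # ~1
```

Binary lenses, finite sources, and parallax replace this closed form with numerical magnification calculations, which is what the package automates.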




Read also

Automated inference of binary microlensing events with traditional sampling-based algorithms such as MCMC has been hampered by the slowness of the physical forward model and the pathological likelihood surface. Current analysis of such events requires both expert knowledge and large-scale grid searches to locate the approximate solution as a prerequisite to MCMC posterior sampling. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new scalable and automated approach is desired. Here, we present an automated inference method based on neural density estimation (NDE). We show that the NDE trained on simulated Roman data not only produces fast, accurate, and precise posteriors but also captures expected posterior degeneracies. A hybrid NDE-MCMC framework can further be applied to produce the exact posterior.
Modern surveys of gravitational microlensing events have progressed to detecting thousands of events per year. Surveys are capable of probing Galactic structure, stellar evolution, lens populations, black hole physics, and the nature of dark matter. One of the key avenues for doing this is studying the microlensing Einstein radius crossing time distribution ($t_E$). However, systematics in individual light curves as well as over-simplistic modeling can lead to biased results. To address this, we developed a model to simultaneously handle the microlensing parallax due to Earth's motion, systematic instrumental effects, and unlensed stellar variability with a Gaussian Process model. We used light curves for nearly 10,000 OGLE-III and IV Milky Way bulge microlensing events and fit each with our model. We also developed a forward model approach to infer the timescale distribution by forward modeling from the data rather than using point estimates from individual events. We find that modeling the variability in the baseline removes a source of significant bias in individual events, and previous analyses over-estimated the number of long timescale ($t_E>100$ days) events due to their over-simplistic models ignoring parallax effects and stellar variability. We use our fits to identify hundreds of events that are likely black holes.
Fast and automated inference of binary-lens, single-source (2L1S) microlensing events with sampling-based Bayesian algorithms (e.g., Markov Chain Monte Carlo; MCMC) is challenged on two fronts: high computational cost of likelihood evaluations with microlensing simulation codes, and a pathological parameter space where the negative-log-likelihood surface can contain a multitude of local minima that are narrow and deep. Analysis of 2L1S events usually involves grid searches over some parameters to locate approximate solutions as a prerequisite to posterior sampling, an expensive process that often requires human-in-the-loop domain expertise. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new fast and automated method is desirable. Here, we present a likelihood-free inference (LFI) approach named amortized neural posterior estimation, where a neural density estimator (NDE) learns a surrogate posterior $\hat{p}(\theta|x)$ as an observation-parametrized conditional probability distribution, from pre-computed simulations over the full prior space. Trained on 291,012 simulated Roman-like 2L1S events, the NDE produces accurate and precise posteriors within seconds for any observation within the prior support without requiring a domain expert in the loop, thus allowing for real-time and automated inference. We show that the NDE also captures expected posterior degeneracies. The NDE posterior could then be refined into the exact posterior with a downstream MCMC sampler with minimal burn-in steps.
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large datasets. Gaussian Processes are a popular class of models used for this purpose but, since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small datasets. In this paper, we present a novel method for Gaussian Process modeling in one dimension where the computational requirements scale linearly with the size of the dataset. We demonstrate the method by applying it to simulated and real astronomical time series datasets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically-driven damped harmonic oscillators -- providing a physical motivation for and interpretation of this choice -- but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable Gaussian Process methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
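The building block behind the linear scaling described above is a covariance function that is a sum of exponentially damped sinusoids. A minimal sketch of one such term, the covariance of a stochastically driven, damped harmonic oscillator in the underdamped regime ($Q > 1/2$), can be written in plain Python; this is a generic illustration of the kernel family, not the paper's solver:

```python
import math

def damped_sho_kernel(tau, S0, w0, Q):
    """Covariance k(tau) of a stochastically driven, damped harmonic
    oscillator with power S0, frequency w0, and quality factor Q > 1/2:
    an exponentially damped sinusoid (a 'complex exponential' term)."""
    tau = abs(tau)  # stationary kernel: depends only on |tau|
    eta = math.sqrt(1.0 - 1.0 / (4.0 * Q**2))
    return (S0 * w0 * Q * math.exp(-w0 * tau / (2.0 * Q))
            * (math.cos(eta * w0 * tau)
               + math.sin(eta * w0 * tau) / (2.0 * eta * Q)))

# At tau = 0 the variance is S0 * w0 * Q; the oscillation decays on a
# timescale ~2Q/w0, which is what makes the semiseparable O(N) algebra work.
```

A full model would sum several such terms and feed the resulting covariance to the linear-time solver; the point here is only the functional form.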
Microlensing events provide a unique capacity to study the stellar remnant population of the Galaxy. Optical microlensing suffers from a near complete degeneracy between the mass, the velocity, and the distance. However, a subpopulation of lensed stars, Mira variables, are also radio bright, exhibiting strong SiO masers. These are sufficiently bright and compact to permit direct imaging using existing very long baseline interferometers such as the Very Long Baseline Array (VLBA). We show that these events are relatively common, occurring at a rate of $\approx 2~\mathrm{yr}^{-1}$, of which $0.1~\mathrm{yr}^{-1}$ are associated with Galactic black holes. Features in the associated images, e.g., the Einstein ring, are sufficiently well resolved to fully reconstruct the lens properties, enabling the measurement of mass, distance, and tangential velocity of the lensing object to a precision better than 15%. Future radio microlensing surveys conducted with upcoming radio telescopes combined with modest improvements in the VLBA could increase the rate of Galactic black hole events to roughly $10~\mathrm{yr}^{-1}$, sufficient to double the number of known stellar mass black holes within a couple of years, and permitting the construction of distribution functions of stellar mass black hole properties.
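To see why VLBA-class resolution is enough, the angular Einstein radius $\theta_E = \sqrt{(4GM/c^2)\,(D_s - D_l)/(D_l D_s)}$ can be evaluated for a representative lens. This is a generic back-of-the-envelope sketch, not the paper's calculation, and the example masses and distances are illustrative:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m
RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0  # radians -> milliarcsec

def einstein_radius_mas(mass_msun, d_lens_kpc, d_source_kpc):
    """Angular Einstein radius in milliarcseconds for a point lens of the
    given mass, with lens and source at the given distances (D_l < D_s)."""
    m = mass_msun * M_SUN
    d_l = d_lens_kpc * KPC
    d_s = d_source_kpc * KPC
    theta = math.sqrt(4.0 * G * m / C**2 * (d_s - d_l) / (d_l * d_s))
    return theta * RAD_TO_MAS

# A ~10 solar-mass black-hole lens halfway to a bulge source yields an
# Einstein radius of a few mas -- comparable to VLBA angular resolution,
# so the ring structure is resolvable.
```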