
Automating Inference of Binary Microlensing Events with Neural Density Estimation

Added by Keming Zhang
Publication date: 2020
Field: Physics
Language: English





Automated inference of binary microlensing events with traditional sampling-based algorithms such as MCMC has been hampered by the slowness of the physical forward model and the pathological likelihood surface. Current analysis of such events requires both expert knowledge and large-scale grid searches to locate the approximate solution as a prerequisite to MCMC posterior sampling. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new scalable and automated approach is desired. Here, we present an automated inference method based on neural density estimation (NDE). We show that an NDE trained on simulated Roman data not only produces fast, accurate, and precise posteriors but also captures expected posterior degeneracies. A hybrid NDE-MCMC framework can further be applied to produce the exact posterior.
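The amortization idea can be made concrete with a toy sketch in plain NumPy. This is not the paper's flow-based NDE: the forward model, the conditional-Gaussian "posterior estimator", and all dimensions below are invented stand-ins. The point is the workflow: simulate once over the prior, fit an observation-conditioned estimator once, then inference for any new observation costs one matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulation stage (offline, done once) ---
# Hypothetical 2-parameter "event" observed as a 50-point "light curve";
# a real pipeline would use a 2L1S microlensing simulator here.
n_sim, n_par, n_obs = 20000, 2, 50
design = rng.normal(size=(n_obs, n_par))           # fixed linear stand-in forward model

theta = rng.normal(size=(n_sim, n_par))            # draws from the prior
x = theta @ design.T + 0.1 * rng.normal(size=(n_sim, n_obs))  # noisy simulated data

# --- Training stage: amortized estimator theta | x ~ N(W x + b, Sigma) ---
X = np.hstack([x, np.ones((n_sim, 1))])            # append a bias column
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)   # least-squares fit of the map x -> theta
resid = theta - X @ coef
sigma = np.cov(resid.T)                            # shared posterior covariance estimate

# --- Inference stage: instant posterior for a new observation ---
theta_true = np.array([1.0, -0.5])
x_obs = theta_true @ design.T + 0.1 * rng.normal(size=n_obs)
post_mean = np.append(x_obs, 1.0) @ coef           # one matrix product, no sampling
print(post_mean)                                   # close to theta_true
```

All the cost sits in the first two stages, which are paid once for the whole prior; this is what makes per-event inference fast enough to run in real time.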



Related research

Fast and automated inference of binary-lens, single-source (2L1S) microlensing events with sampling-based Bayesian algorithms (e.g., Markov Chain Monte Carlo; MCMC) is challenged on two fronts: high computational cost of likelihood evaluations with microlensing simulation codes, and a pathological parameter space where the negative-log-likelihood surface can contain a multitude of local minima that are narrow and deep. Analysis of 2L1S events usually involves grid searches over some parameters to locate approximate solutions as a prerequisite to posterior sampling, an expensive process that often requires human-in-the-loop domain expertise. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new fast and automated method is desirable. Here, we present a likelihood-free inference (LFI) approach named amortized neural posterior estimation, where a neural density estimator (NDE) learns a surrogate posterior $\hat{p}(\theta|x)$ as an observation-parametrized conditional probability distribution, from pre-computed simulations over the full prior space. Trained on 291,012 simulated Roman-like 2L1S events, the NDE produces accurate and precise posteriors within seconds for any observation within the prior support without requiring a domain expert in the loop, thus allowing for real-time and automated inference. We show that the NDE also captures expected posterior degeneracies. The NDE posterior could then be refined into the exact posterior with a downstream MCMC sampler with minimal burn-in steps.
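The downstream-refinement step can be illustrated with a minimal Metropolis-Hastings sketch. Everything here is a hypothetical stand-in (a 1-D Gaussian "exact" posterior and a slightly biased Gaussian "surrogate"), not the paper's sampler: the idea is only that a draw from the surrogate posterior starts the chain near the mode, so almost no burn-in is wasted on finding the solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy exact log-posterior (stands in for an expensive 2L1S likelihood).
def log_post(t):
    return -0.5 * ((t - 2.0) / 0.3) ** 2

# Surrogate "NDE" posterior: a slightly biased, wider Gaussian we can sample cheaply.
def nde_sample():
    return rng.normal(2.1, 0.5)

def metropolis(theta0, n_steps, step=0.3):
    """Plain random-walk Metropolis-Hastings starting from theta0."""
    chain = [theta0]
    lp = log_post(theta0)
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Seed the chain with an NDE draw: it starts near the mode, so burn-in is minimal.
chain = metropolis(nde_sample(), n_steps=2000)
print(chain.mean(), chain.std())   # recovers the exact posterior's mean and width
```

Started instead from a random point of a broad prior, the same chain would first have to locate a narrow, deep minimum, which is exactly the expensive step the surrogate removes.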
We introduce MulensModel, a software package for gravitational microlensing modeling. The package provides a framework for calculating microlensing model magnification curves and goodness-of-fit statistics for microlensing events with single and binary lenses as well as a variety of higher-order effects: extended sources with limb-darkening, annual microlensing parallax, satellite microlensing parallax, and binary lens orbital motion. The software could also be used for analysis of the planned microlensing survey by the NASA flagship WFIRST satellite. MulensModel is available at https://github.com/rpoleski/MulensModel/.
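The simplest magnification curve such a package computes, the point-source point-lens (Paczynski) curve, can be sketched directly in NumPy without installing MulensModel; the parameter values below are arbitrary examples, and real single-lens analyses add the higher-order effects listed above.

```python
import numpy as np

def pspl_magnification(t, t_0, u_0, t_E):
    """Point-source point-lens (Paczynski) magnification curve.

    t_0 : time of closest approach, u_0 : impact parameter in Einstein
    radii, t_E : Einstein radius crossing time (same units as t).
    """
    u = np.sqrt(u_0**2 + ((t - t_0) / t_E) ** 2)   # lens-source separation
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Example: a u_0 = 0.1 event peaks at a magnification of about 10.
t = np.linspace(2459000.0, 2459100.0, 5)
mag = pspl_magnification(t, t_0=2459050.0, u_0=0.1, t_E=20.0)
print(mag)
```

Far from t_0 the separation u grows and the magnification falls back to 1, i.e. the unlensed baseline flux.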
Modern surveys of gravitational microlensing events have progressed to detecting thousands per year. Surveys are capable of probing Galactic structure, stellar evolution, lens populations, black hole physics, and the nature of dark matter. One of the key avenues for doing this is studying the microlensing Einstein radius crossing time distribution ($t_E$). However, systematics in individual light curves as well as over-simplistic modeling can lead to biased results. To address this, we developed a model to simultaneously handle the microlensing parallax due to Earth's motion, systematic instrumental effects, and unlensed stellar variability with a Gaussian Process model. We used light curves for nearly 10,000 OGLE-III and IV Milky Way bulge microlensing events and fit each with our model. We also developed a forward model approach to infer the timescale distribution by forward modeling from the data rather than using point estimates from individual events. We find that modeling the variability in the baseline removes a source of significant bias in individual events, and previous analyses over-estimated the number of long timescale ($t_E>100$ days) events due to their over-simplistic models ignoring parallax effects and stellar variability. We use our fits to identify hundreds of events that are likely black holes.
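The baseline-variability idea can be sketched with a minimal Gaussian Process regression in plain NumPy. The kernel choice, amplitudes, and the sinusoidal "stellar variability" below are illustrative assumptions, not the paper's model; the point is that a smooth GP posterior mean separates slow unlensed variability from per-point noise.

```python
import numpy as np

rng = np.random.default_rng(2)

def sq_exp_kernel(t1, t2, amp=0.05, scale=30.0):
    """Squared-exponential covariance: smooth, slowly varying baseline."""
    return amp**2 * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / scale) ** 2)

# Hypothetical light curve: variable baseline plus white noise (no lensing signal).
t = np.linspace(0.0, 200.0, 120)
truth = 0.05 * np.sin(2 * np.pi * t / 80.0)     # unlensed stellar variability
noise = 0.02
flux = truth + noise * rng.normal(size=t.size)

# GP regression: posterior mean of the baseline given the noisy data.
K = sq_exp_kernel(t, t) + noise**2 * np.eye(t.size)
baseline = sq_exp_kernel(t, t) @ np.linalg.solve(K, flux)

rms = np.sqrt(np.mean((baseline - truth) ** 2))
print(rms)   # smaller than the per-point noise level
```

Subtracting such a baseline before (or jointly with) fitting the microlensing model is what removes the bias toward spuriously long $t_E$ described in the abstract.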
The light received by source stars in microlensing events may be significantly polarized if both an efficient photon scattering mechanism is active in the source stellar atmosphere and a differential magnification is therein induced by the lensing system. The best candidate events for observing polarization are highly magnified events with source stars belonging to the class of cool, giant stars in which the stellar light is polarized by photon scattering on dust grains contained in their envelopes. The presence in the stellar atmosphere of an internal cavity devoid of dust produces polarization profiles with a two-peak structure. Hence, the time interval between the peaks gives an important observable quantity directly related to the size of the internal cavity and to the model parameters of the lens system. We show that during a microlensing event the expected polarization variability can solve an ambiguity that arises in some cases, related to the binary or planetary lensing interpretation of the perturbations observed near the maximum of the event light curve. We consider a specific event case for which the parameter values corresponding to the two solutions are given. Then, assuming a polarization model for the source star, we compute the two expected polarization profiles. The position of the two peaks appearing in the polarization curves and the characteristic time interval between them allow us to distinguish between the binary and planetary lens solutions.
M. Lafarga, I. Ribas, C. Lovis (2020)
For years, the standard procedure to measure radial velocities (RVs) of spectral observations consisted of cross-correlating the spectra with a binary mask, that is, a simple stellar template that contains information on the position and strength of stellar absorption lines. The cross-correlation function (CCF) profiles also provide several indicators of stellar activity. We present a methodology to first build weighted binary masks and, second, to compute the CCF of spectral observations with these masks, from which we derive radial velocities and activity indicators. These methods are implemented in a python code that is publicly available. To build the masks, we selected a large number of sharp absorption lines based on the profile of the minima present in high signal-to-noise ratio (S/N) spectrum templates built from observations of reference stars. We computed the CCFs of observed spectra and derived RVs and the following three standard activity indicators: full-width-at-half-maximum as well as contrast and bisector inverse slope. We applied our methodology to CARMENES high-resolution spectra and obtained RV and activity indicator time series of more than 300 M dwarf stars observed for the main CARMENES survey. Compared with the standard CARMENES template matching pipeline, in general we obtain more precise RVs in the cases where the template used in the standard pipeline did not have enough S/N. We also show the behaviour of the three activity indicators for the active star YZ CMi and estimate the absolute RV of the M dwarfs analysed using the CCF RVs.
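A weighted-binary-mask CCF can be sketched in a few lines. The line list, depths, and synthetic spectrum below are invented for illustration (the publicly available code mentioned in the abstract is the real implementation): each mask entry samples the spectrum at its line position shifted by a trial RV, and the deepest point of the resulting profile gives the measured velocity.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

# Hypothetical mask: rest wavelengths [Angstrom] and weights (line depths).
line_centers = np.array([5000.0, 5003.0, 5007.5, 5012.0])
weights = np.array([0.8, 0.5, 1.0, 0.6])

# Synthetic spectrum: unit continuum with Gaussian absorption lines,
# Doppler-shifted by a "true" radial velocity of +5 km/s.
rv_true = 5.0
wave = np.linspace(4995.0, 5017.0, 4000)
flux = np.ones_like(wave)
for c, w in zip(line_centers, weights):
    shifted = c * (1 + rv_true / C_KMS)
    flux -= w * 0.3 * np.exp(-0.5 * ((wave - shifted) / 0.05) ** 2)

def ccf(v):
    """Weighted-mask CCF value at trial velocity v [km/s]."""
    pos = line_centers * (1 + v / C_KMS)           # mask lines shifted by v
    return np.sum(weights * np.interp(pos, wave, flux))

v_grid = np.arange(-20.0, 20.0, 0.1)
profile = np.array([ccf(v) for v in v_grid])
rv_measured = v_grid[np.argmin(profile)]           # deepest point of the CCF
print(rv_measured)                                 # recovers the input RV
```

The same CCF profile is what the activity indicators are read from: its width gives the FWHM, its depth the contrast, and the asymmetry of its bisector the bisector inverse slope.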
