
A differentiable N-body code for transit timing and dynamical modeling. I. Algorithm and derivatives

Posted by Eric Agol
Publication date: 2021
Research field: Physics
Paper language: English





When fitting N-body models to astronomical data - including transit times, radial velocity, and astrometric positions at observed times - the derivatives of the model outputs with respect to the initial conditions can help with model optimization and posterior sampling. Here we describe a general-purpose symplectic integrator for arbitrary orbital architectures, including those with close encounters, which we have recast to maintain numerical stability and precision for small step sizes. We compute the derivatives of the N-body coordinates and velocities as a function of time with respect to the initial conditions and masses by propagating the Jacobian along with the N-body integration. For the first time we obtain the derivatives of the transit times with respect to the initial conditions and masses using the chain rule, which is quicker and more accurate than using finite differences or automatic differentiation. We implement this algorithm in an open source package, NbodyGradient.jl, written in the Julia language, which has been used in the optimization and error analysis of transit-timing variations in the TRAPPIST-1 system. We present tests of the accuracy and precision of the code, and show that it compares favorably in speed to other integrators which are written in C.
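The chain-rule construction described above can be illustrated in a few lines. The sketch below is not the NbodyGradient.jl API; it is a minimal Python/SciPy stand-in on a planar two-body problem, in which the Jacobian of the coordinates and velocities with respect to the initial conditions and the mass parameter is propagated alongside the integration, and a transit time (a zero of the sky-plane coordinate) is then differentiated with respect to those parameters via the chain rule. All variable names and values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, gm):
    # State layout: q = [x, y, vx, vy], then the 4x4 Jacobian dq/dq0
    # (row-major), then dq/dGM, stacked into one 24-vector.
    q = z[:4]
    J = z[4:20].reshape(4, 4)
    dq_dgm = z[20:24]
    x, y, vx, vy = q
    r2 = x * x + y * y
    r3 = r2 ** 1.5
    ax, ay = -gm * x / r3, -gm * y / r3
    # df/dq for dq/dt = f(q): positions feed velocities; accelerations
    # depend on positions only.
    dadr = -gm * (np.eye(2) / r3 - 3.0 * np.outer([x, y], [x, y]) / r2 ** 2.5)
    A = np.zeros((4, 4))
    A[0, 2] = A[1, 3] = 1.0
    A[2:, :2] = dadr
    dqdt = np.array([vx, vy, ax, ay])
    dJdt = A @ J                              # variational (tangent) equations
    df_dgm = np.array([0.0, 0.0, -x / r3, -y / r3])
    dsdt = A @ dq_dgm + df_dgm                # sensitivity to the mass parameter
    return np.concatenate([dqdt, dJdt.ravel(), dsdt])

def transit(t, z, gm):
    return z[0]                               # sky-plane coordinate crosses zero
transit.direction = 1.0

gm = 4.0 * np.pi**2                           # AU^3/yr^2 for a solar-mass star
q0 = np.array([1.0, 0.0, 0.0, 2.0 * np.pi])   # circular 1-au orbit
z0 = np.concatenate([q0, np.eye(4).ravel(), np.zeros(4)])
sol = solve_ivp(rhs, (0.0, 2.0), z0, args=(gm,), events=transit,
                dense_output=True, rtol=1e-10, atol=1e-12)

t_star = sol.t_events[0][0]                   # first transit time
z_star = sol.sol(t_star)
vx_star = z_star[2]                           # dx/dt at the transit
J_star = z_star[4:20].reshape(4, 4)
# Implicit differentiation of x(t*, p) = 0:  dt*/dp = -(dx/dp) / (dx/dt),
# with dx/dp read off the propagated Jacobian -- the chain-rule step.
dtstar_dq0 = -J_star[0, :] / vx_star          # w.r.t. the four initial conditions
dtstar_dgm = -z_star[20] / vx_star            # w.r.t. GM (a mass derivative)
print(t_star, dtstar_dq0, dtstar_dgm)
```

The same bookkeeping generalizes to N bodies: the Jacobian simply grows to cover all coordinates, velocities, and masses, and each transit contributes one chain-rule evaluation at its own crossing time.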




Read also

Hans J. Deeg (2016)
Context: Transit or eclipse timing variations have proven to be a valuable tool in exoplanet research. However, no simple way to estimate the potential precision of such timing measures has been presented yet, nor are guidelines available regarding the relation between timing errors and sampling rate. Aims: A 'timing error estimator' (TEE) equation is presented that requires only basic transit parameters as input. With the TEE, it is straightforward to estimate timing precisions both for actual data and for future instruments, such as the TESS and PLATO space missions. Methods: A derivation of the timing error based on a trapezoidal transit shape is given. We also verify the TEE on realistically modeled transits using Monte Carlo simulations and determine its validity range, exploring in particular the interplay between ingress/egress times and sampling rates. Results: The simulations show that the TEE gives timing errors very close to the correct value, as long as the temporal sampling is faster than the transit ingress/egress durations and transits with very low S/N are avoided. Conclusions: The TEE is a useful tool to estimate eclipse or transit timing errors in actual and future data sets. In combination with an equation to estimate period errors (Deeg 2015), predictions for the ephemeris precision of long-coverage observations are possible as well. The tests of the TEE's validity range also led to implications for instrumental design: temporal sampling has to be faster than the transit ingress or egress durations, or a loss in timing precision will occur. An application to the TESS mission shows that transits close to its detection limit will have timing uncertainties that exceed 1 hour within a few months after their acquisition. Prompt follow-up observations will be needed to avoid a loss of their ephemeris.
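As a hedged illustration of the Monte Carlo side of such a check (not the TEE equation itself, which is given in the paper), the sketch below generates noisy trapezoidal transits and measures the empirical scatter of the fitted mid-transit time; the transit shape, cadence, and noise level are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def trapezoid(t, t0, depth, T, tau):
    """Trapezoidal transit: total duration T, ingress/egress duration tau."""
    x = np.abs(t - t0)
    f = np.ones_like(t)
    f[x <= T / 2 - tau] = 1.0 - depth
    ramp = (x > T / 2 - tau) & (x < T / 2)
    f[ramp] = 1.0 - depth * (T / 2 - x[ramp]) / tau
    return f

rng = np.random.default_rng(1)
cadence = 2.0 / (60 * 24)                     # 2-minute sampling, in days
sigma = 1e-3                                  # per-point photometric noise
T, tau, depth = 0.12, 0.015, 0.005            # placeholder transit shape (days)
t = np.arange(-0.15, 0.15, cadence)

t0_fit = []
for _ in range(500):
    flux = trapezoid(t, 0.0, depth, T, tau) + rng.normal(0.0, sigma, t.size)
    chi2 = lambda t0: np.sum((flux - trapezoid(t, t0, depth, T, tau)) ** 2)
    res = minimize_scalar(chi2, bounds=(-0.02, 0.02), method="bounded")
    t0_fit.append(res.x)

# The empirical scatter of the fitted mid-times is what a timing-error
# estimator such as the TEE is meant to predict from the transit parameters.
print("empirical timing scatter [days]:", np.std(t0_fit))
```

Repeating the experiment with a coarser cadence than the ingress/egress duration shows the degradation in timing precision that the paper warns about.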
Stellar photometric variability and instrumental effects, like cosmic ray hits, data discontinuities, data leaks, instrument ageing, etc., cause difficulties in the characterization of exoplanets and have an impact on the accuracy and precision of the modelling and detectability of transits, occultations, and phase curves. This paper attempts to improve the transit, occultation, and phase-curve modelling in the presence of strong stellar variability and instrumental noise. We invoke the wavelet formulation to reach this goal. We explore the capabilities of the software package Transit and Light Curve Modeller (TLCM). It is able to perform a joint radial-velocity and light-curve fit, or a light-curve fit only. It models the transit, occultation, beaming, ellipsoidal, and reflection effects in the light curves (including the gravity-darkening effect). The red noise, stellar variability, and instrumental effects are modelled via wavelets. The wavelet fit is constrained by prescribing that the final white-noise level must be equal to the average of the uncertainties of the photometric data points. This helps to avoid overfitting and regularizes the noise model. The approach was tested by injecting synthetic light curves into Kepler's short-cadence data and then modelling them. The method performs well above a certain signal-to-noise (S/N) ratio. In general an S/N of 10 is needed to get good results, but some parameters require a larger S/N, while others can be retrieved at lower S/N. We give limits in terms of S/N for every studied system parameter that are needed for accurate parameter retrieval. The wavelet approach is able to manage and remove the impacts of data discontinuities, cosmic ray events, long-term stellar variability and instrument ageing, short-term stellar variability and pulsation, and flares, among others. (...)
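The constraint described above, that the residual left after the wavelet noise model must match the white-noise level of the photometric uncertainties, can be mimicked with a simple threshold search. The sketch below is not TLCM; it uses the PyWavelets package, and the signal, noise level, and wavelet choice are placeholders chosen purely to illustrate the idea.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
n = 2048
t = np.linspace(0.0, 10.0, n)
sigma_phot = 5e-4                                   # placeholder per-point error
stellar = 2e-3 * np.sin(2 * np.pi * t / 3.3)        # stand-in for variability
flux = 1.0 + stellar + rng.normal(0.0, sigma_phot, n)

def residual_rms(threshold, data, wavelet="db4"):
    """Soft-threshold the detail coefficients and return the residual scatter."""
    coeffs = pywt.wavedec(data, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                            for c in coeffs[1:]]
    trend = pywt.waverec(coeffs, wavelet)[: data.size]
    return np.std(data - trend), trend

# Bisect the threshold so that the residual scatter equals the mean
# photometric uncertainty -- a stand-in for the white-noise constraint.
lo, hi = 0.0, 10.0 * np.std(flux)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    rms, trend = residual_rms(mid, flux)
    lo, hi = (mid, hi) if rms < sigma_phot else (lo, mid)
print("residual rms:", rms, "target:", sigma_phot)
```

A looser threshold lets the wavelet model absorb the noise (overfitting); the constraint pins the noise model so that only the correlated structure is removed.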
We present a novel, iterative method using an empirical Bayesian approach for modeling the limb-darkened WASP-121b transit from the TESS light curve. Our method is motivated by the need to improve $R_{p}/R_{\ast}$ estimates for exoplanet atmosphere modeling, and is particularly effective with the quadratic limb-darkening (LD) law, requiring no prior central value from stellar atmospheric models. With the non-linear LD law, the method has all the advantages of not needing atmospheric models but does not converge. The iterative method gives a different $R_{p}/R_{\ast}$ for WASP-121b at a significance level of 1$\sigma$ when compared with existing non-iterative methods. To assess the origins and implications of this difference, we generate and analyze light curves with known values of the limb-darkening coefficients (LDCs). We find that non-iterative modeling with LDC priors from stellar atmospheric models results in an inconsistent $R_{p}/R_{\ast}$ at the 1.5$\sigma$ level when the known LDC values are those previously found by the iterative method when modeling real data. In contrast, the LDC values from the iterative modeling yield the correct value of $R_{p}/R_{\ast}$ to within 0.25$\sigma$. For more general cases with different known inputs, Monte Carlo simulations show that the iterative method obtains unbiased LDCs and the correct $R_{p}/R_{\ast}$ to within a significance level of 0.3$\sigma$. Biased LDC priors can cause biased LDC posteriors and lead to a bias in $R_{p}/R_{\ast}$ of up to 0.82% (2.5$\sigma$) for the quadratic law and 0.32% (1.0$\sigma$) for the non-linear law. Our improvement in $R_{p}/R_{\ast}$ estimation is important when analyzing exoplanet atmospheres.
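A hedged sketch of the iterative idea follows. It is not the paper's code: it uses the community batman transit model and a simple Gaussian prior on the quadratic LDCs that is re-centred on the previous estimate at each iteration; the orbital parameters, noise level, and fixed iteration count are all placeholders.

```python
import numpy as np
import batman
from scipy.optimize import minimize

def model_flux(t, rp, u1, u2):
    """Quadratic limb-darkened transit via the batman package (placeholder orbit)."""
    p = batman.TransitParams()
    p.t0, p.per, p.rp = 0.0, 1.27, rp
    p.a, p.inc, p.ecc, p.w = 3.8, 87.5, 0.0, 90.0
    p.limb_dark, p.u = "quadratic", [u1, u2]
    return batman.TransitModel(p, t).light_curve(p)

rng = np.random.default_rng(2)
t = np.linspace(-0.08, 0.08, 600)
sigma = 4e-4
flux = model_flux(t, 0.12, 0.35, 0.20) + rng.normal(0.0, sigma, t.size)

theta = np.array([0.11, 0.3, 0.3])                  # initial (rp, u1, u2)
prior_mu, prior_sig = theta[1:].copy(), np.array([0.5, 0.5])   # broad LDC prior
for it in range(5):
    def neg_log_post(p):
        chi2 = np.sum((flux - model_flux(t, *p)) ** 2) / sigma ** 2
        prior = np.sum(((p[1:] - prior_mu) / prior_sig) ** 2)
        return 0.5 * (chi2 + prior)
    res = minimize(neg_log_post, theta, method="Nelder-Mead")
    theta = res.x
    prior_mu = theta[1:].copy()                     # re-centre the LDC prior
    print(f"iteration {it}: Rp/R* = {theta[0]:.5f}, u = {theta[1:].round(3)}")
```

The point of the loop is that no stellar-atmosphere prior central value for the LDCs is ever imposed: the prior is pulled from the data itself, and Rp/R* is read off once the iterations stabilize.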
We present an auto-differentiable spectral modeling framework for exoplanets and brown dwarfs. This model enables fully Bayesian inference on high-dispersion data, fitting an ab initio line-by-line spectral computation to the observed spectrum by combining it with Hamiltonian Monte Carlo in recent probabilistic programming languages. An open-source code, exojax, developed in this study, was written in Python using the GPU/TPU-compatible package for automatic differentiation and accelerated linear algebra, JAX (Bradbury et al. 2018). We validated the model by comparing it with existing opacity calculators and a radiative transfer code and found reasonable agreement in the outputs. As a demonstration, we analyzed the high-dispersion spectrum of a nearby brown dwarf, Luhman 16 A, and found that a model including water, carbon monoxide, and $\mathrm{H_2/He}$ collision-induced absorption fit the observed spectrum well ($R=10^5$ and $2.28$--$2.30\,\mu\mathrm{m}$). As a result, we found that $T_0 = 1295 \pm 14\,\mathrm{K}$ at 1 bar and $\mathrm{C/O} = 0.62 \pm 0.01$, which is slightly higher than the solar value. This work demonstrates the potential of full Bayesian analysis of brown dwarfs and exoplanets as observed by high-dispersion spectrographs, and also of directly imaged exoplanets as observed by high-dispersion coronagraphy.
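The pattern described above, a spectral model written in JAX so that a gradient-based sampler can differentiate through it, can be shown with a toy model. The sketch below is not exojax; it fits a single Gaussian absorption line with NumPyro's NUTS sampler, and the wavelengths, line parameters, and noise values are placeholders.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def toy_spectrum(wav, depth, center, width, cont):
    """Continuum times a single Gaussian absorption line (fully differentiable)."""
    return cont * (1.0 - depth * jnp.exp(-0.5 * ((wav - center) / width) ** 2))

def model(wav, flux=None, err=1e-2):
    depth = numpyro.sample("depth", dist.Uniform(0.0, 1.0))
    center = numpyro.sample("center", dist.Uniform(22950.0, 22970.0))  # angstrom
    width = numpyro.sample("width", dist.Uniform(0.05, 2.0))
    cont = numpyro.sample("cont", dist.Uniform(0.8, 1.2))
    mu = toy_spectrum(wav, depth, center, width, cont)
    numpyro.sample("obs", dist.Normal(mu, err), obs=flux)

# synthetic data to fit
wav = jnp.linspace(22950.0, 22970.0, 400)
truth = toy_spectrum(wav, 0.3, 22960.0, 0.4, 1.0)
flux = truth + 1e-2 * random.normal(random.PRNGKey(0), wav.shape)

# NUTS/HMC uses JAX gradients of the whole forward model
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(1), wav, flux=flux)
mcmc.print_summary()
```

In the real application the toy line profile is replaced by a line-by-line opacity and radiative-transfer calculation, but the sampling machinery around it is the same.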
We introduce here our new approach to modeling the evolution of particle clouds ejected off the surfaces of small bodies (asteroids and comets). Following the evolution of ejected particles requires dealing with various time and spatial scales in an efficient, accurate, and modular way. In order to improve the computational efficiency and accuracy of such calculations, we created an N-body modeling package as an extension to the increasingly popular orbital-dynamics N-body integrator Rebound. Our code is currently a stand-alone variant of the Rebound code and is aimed at advancing a comprehensive understanding of individual particle trajectories, external forcing, and interactions, at a scale that is otherwise overlooked by other modeling approaches. The package we developed -- Rebound Ejecta Dynamics (RED) -- is a Python-based implementation with no additional dependencies. It incorporates several major mechanisms that affect the evolution of particles in low-gravity environments and enables a more straightforward simulation of combined effects. We include variable size and velocity distributions, solar radiation pressure, ellipsoidal gravitational potential, binary or triple asteroid systems, and particle-particle interactions. In this paper, we present a sample of the RED package's capabilities. These are applied to a small asteroid binary system (characterized following the Didymos/Dimorphos system, which is the target of NASA's Double Asteroid Redirection Test mission).
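A hedged sketch of the kind of extension described above follows. It is not the RED package: it sets up a small binary plus massless ejecta in REBOUND and adds a constant radiation-pressure acceleration through REBOUND's additional_forces hook; the masses, launch speeds, beta value, and fixed solar direction are all placeholders.

```python
import numpy as np
import rebound

sim = rebound.Simulation()
sim.G = 6.674e-11                              # SI units throughout
sim.integrator = "ias15"
sim.add(m=5.3e11)                              # primary (Didymos-like mass, kg)
sim.add(m=4.9e9, a=1190.0)                     # secondary on a ~1.2 km orbit
# a handful of slow ejecta launched off the primary (placeholder speeds)
rng = np.random.default_rng(3)
for _ in range(20):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    v = rng.uniform(0.2, 0.4)                  # m/s
    sim.add(m=0.0, x=400.0 * np.cos(theta), y=400.0 * np.sin(theta),
            vx=v * np.cos(theta), vy=v * np.sin(theta))

GM_SUN, AU = 1.327e20, 1.496e11
BETA = 1e-3                                    # radiation-pressure efficiency (placeholder)
SUN_DIR = (1.0, 0.0, 0.0)                      # fixed anti-solar direction (simplification)

def radiation_pressure(reb_sim):
    """Constant push on the massless ejecta, added to REBOUND's accelerations."""
    ps = reb_sim.contents.particles
    a_rad = BETA * GM_SUN / AU**2              # magnitude at 1 au
    for i in range(2, reb_sim.contents.N):     # skip the two massive bodies
        ps[i].ax += a_rad * SUN_DIR[0]
        ps[i].ay += a_rad * SUN_DIR[1]
        ps[i].az += a_rad * SUN_DIR[2]

sim.additional_forces = radiation_pressure
sim.integrate(86400.0)                         # one day, in seconds
for i in range(2, 5):
    print(i, sim.particles[i].x, sim.particles[i].y)
```

Ellipsoidal potentials, particle size distributions, and particle-particle interactions would slot into the same additional-forces pattern, which is what makes the extension modular.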
