
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction

Published by Jeffrey Smith
Publication date: 2012
Paper language: English





With the unprecedented photometric precision of the Kepler Spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends and other instrumental signatures, obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem for standard cotrending methods such as SYSREM and TFA. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of great importance to the astrophysical community. We present a Bayesian Maximum A Posteriori (MAP) approach in which a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of reasonable robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflict simple least-squares (LS) fitting. A numerical and empirical approach is taken in which the Bayesian prior PDFs are generated from fits to the light curve distributions themselves.
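The MAP step lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of the idea, assuming a Gaussian prior on the cotrending coefficients (with mean and covariance taken from the ensemble of robust fits to quiet, highly correlated stars) and Gaussian white noise; under those assumptions the posterior maximum has a closed form equivalent to regularized least squares. All names are illustrative, and the actual PDC pipeline builds its priors empirically from the light-curve distributions rather than assuming Gaussianity.

```python
import numpy as np

def map_cotrend(flux, cbvs, prior_mean, prior_cov, noise_var):
    """MAP estimate of cotrending basis vector (CBV) coefficients.

    flux       : (n_cadences,) target light curve, mean-removed
    cbvs       : (n_cbv, n_cadences) cotrending basis vectors
    prior_mean : (n_cbv,) prior coefficient mean from the robust ensemble fits
    prior_cov  : (n_cbv, n_cbv) prior coefficient covariance
    noise_var  : scalar white-noise variance of the light curve
    """
    U = np.asarray(cbvs)
    prior_prec = np.linalg.inv(prior_cov)
    # Gaussian likelihood x Gaussian prior: maximizing the posterior PDF
    # reduces to solving this regularized normal equation.
    A = U @ U.T / noise_var + prior_prec
    b = U @ flux / noise_var + prior_prec @ prior_mean
    theta_map = np.linalg.solve(A, b)
    return flux - theta_map @ U, theta_map  # corrected flux, MAP coefficients
```

With an infinitely broad prior this reduces to the ordinary LS fit the abstract warns about; a tight prior pulls the fit toward the ensemble behaviour, which is what suppresses overfitting of intrinsic stellar variability.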




Read also

Solar flares are large-scale releases of energy in the solar atmosphere, characterised by rapid changes in the hydrodynamic properties of plasma from the photosphere to the corona. Solar physicists have typically attempted to understand these complex events using a combination of theoretical models and observational data. From a statistical perspective, there are many challenges associated with making accurate and statistically significant comparisons between theory and observations, due primarily to the large number of free parameters associated with physical models. This class of ill-posed statistical problem is ideally suited to Bayesian methods. In this paper, the solar flare studied by Raftery et al. (2009) is reanalysed using a Bayesian framework. This enables us to study the evolution of the flare's temperature, emission measure and energy loss in a statistically self-consistent manner. The Bayesian model selection techniques imply that no decision can be made as to whether conductive or non-thermal beam heating plays the more important role in heating the flare plasma during the impulsive phase of this event.
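The model selection described above is conventionally done by comparing marginal likelihoods (evidences). The toy sketch below, not taken from the paper, shows the mechanics for a hypothetical one-parameter model; a log Bayes factor near zero is exactly the "no decision" outcome reported.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def log_evidence(log_likelihood, prior, theta_grid):
    """Marginal likelihood ln Z = ln \u222b L(theta) p(theta) dtheta by 1-D quadrature."""
    log_l = np.array([log_likelihood(t) for t in theta_grid])
    integrand = np.exp(log_l - log_l.max()) * prior.pdf(theta_grid)
    return log_l.max() + np.log(trapezoid(integrand, theta_grid))

# Toy data standing in for a flare observable; the two competing "heating
# models" differ only in their prior here, purely for illustration.
data = np.random.default_rng(0).normal(10.0, 1.0, 50)
grid = np.linspace(0.0, 20.0, 2001)
log_like = lambda mu: stats.norm(mu, 1.0).logpdf(data).sum()
ln_z_a = log_evidence(log_like, stats.norm(10.0, 2.0), grid)
ln_z_b = log_evidence(log_like, stats.norm(12.0, 2.0), grid)
print(f"ln Bayes factor A vs B: {ln_z_a - ln_z_b:.2f}")  # |ln K| < ~1: inconclusive
```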
In this paper, a Bayesian semiparametric copula approach is used to model the underlying multivariate distribution $F_{true}$. First, the Dirichlet process is constructed on the unknown marginal distributions of $F_{true}$. Then a Gaussian copula model is utilized to capture the dependence structure of $F_{true}$. As a result, a Bayesian multivariate normality test is developed by combining the relative belief ratio and the energy distance. Several interesting theoretical results of the approach are derived. Finally, through several simulated examples and a real data set, the proposed approach shows excellent performance.
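For the copula layer only, the following illustrative sketch estimates a Gaussian-copula correlation matrix using rank-based pseudo-observations as a crude stand-in for the Dirichlet-process marginals; the relative belief ratio and energy-distance test from the paper are beyond a few lines of code.

```python
import numpy as np
from scipy import stats

def gaussian_copula_corr(x):
    """Gaussian-copula correlation of an (n, d) sample.

    Margins are handled nonparametrically via empirical CDFs
    (rank pseudo-observations), then mapped to normal scores.
    """
    n, _ = x.shape
    u = stats.rankdata(x, axis=0) / (n + 1.0)  # pseudo-observations in (0, 1)
    z = stats.norm.ppf(u)                      # normal scores
    return np.corrcoef(z, rowvar=False)
```

Under the null of multivariate normality the normal scores should look jointly Gaussian with this correlation matrix, which is the structure such a test probes.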
JWST holds great promise in characterizing atmospheres of transiting exoplanets, potentially providing insights into Earth-sized planets within the habitable zones of M dwarf host stars if photon-limited performance can be achieved. Here, we discuss the systematic error sources that are expected to be present in grism time series observations with the NIRCam instrument. We find that pointing jitter and high gain antenna moves on top of the detectors' subpixel crosshatch patterns will produce relatively small variations (less than 6 parts per million, ppm). The time-dependent aperture losses due to thermal instabilities in the optics can also be kept below 2 ppm. To keep these noise sources low, it is important to employ a sufficiently large (more than 1.1 arcseconds) extraction aperture. Persistence due to charge trapping will have a minor (less than 3 ppm) effect on time series 20 minutes into an exposure and is expected to play a much smaller role than it does for the HST WFC3 detectors. We expect temperature fluctuations to contribute less than 3 ppm. In total, our estimated noise floor from known systematic error sources is only 9 ppm per visit. We do, however, urge caution, as unknown systematic error sources could be present in flight and will only be measurable on astrophysical sources such as quiescent stars. We find that reciprocity failure may introduce a perennial instrument offset at the 40 ppm level, so corrections may be needed when stitching together a multi-instrument, multi-observatory spectrum over wide wavelength ranges.
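If the itemized terms above are treated as independent, an error budget of this kind combines in quadrature. The toy tally below uses only the upper bounds quoted in the abstract, so it lands below the paper's full 9 ppm floor, which presumably includes terms not itemized here.

```python
import math

# Upper bounds quoted in the abstract (ppm), treated as independent terms.
terms_ppm = {
    "pointing jitter + crosshatch": 6.0,
    "thermal aperture losses": 2.0,
    "persistence": 3.0,
    "temperature fluctuations": 3.0,
}
floor = math.sqrt(sum(v * v for v in terms_ppm.values()))
print(f"quadrature sum of quoted bounds: {floor:.1f} ppm (paper's total: 9 ppm)")
```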
A new Bayesian software package for the analysis of pulsar timing data is presented in the form of TempoNest, which allows for the robust determination of the non-linear pulsar timing solution simultaneously with a range of additional stochastic parameters. This includes both red spin noise and dispersion measure variations, using either power-law descriptions of the noise or a model-independent method that parameterises the power at individual frequencies in the signal. We use TempoNest to show that at noise levels representative of current datasets in the European Pulsar Timing Array (EPTA) and International Pulsar Timing Array (IPTA), the linear timing model can underestimate the uncertainties of the timing solution by up to an order of magnitude. We also show how to perform Bayesian model selection between different sets of timing model and stochastic parameters, for example by demonstrating that in the pulsar B1937+21 both the dispersion measure variations and the spin noise in the data are optimally modelled by simple power laws. Finally, we show that not including the stochastic parameters simultaneously with the timing model can lead to unpredictable variation in the estimated uncertainties, compromising the robustness of the scientific results extracted from such analysis.
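A common parameterisation for the red-noise processes mentioned above is a power-law power spectral density referenced to a frequency of 1/yr. The sketch below uses the convention widespread in pulsar-timing analyses; TempoNest's exact normalisation should be checked against its documentation.

```python
import numpy as np

YEAR_SECONDS = 365.25 * 86400.0

def powerlaw_psd(freq_hz, log10_amp, gamma):
    """One-sided power-law PSD, P(f) = A^2 / (12 pi^2) (f / f_yr)^-gamma yr^3,
    a convention widely used for red spin noise in pulsar timing (units: s^3).

    freq_hz   : frequencies in Hz
    log10_amp : log10 of the dimensionless amplitude at f = 1/yr
    gamma     : spectral index (larger gamma = steeper, redder noise)
    """
    amp = 10.0 ** log10_amp
    f_yr = 1.0 / YEAR_SECONDS
    return amp**2 / (12.0 * np.pi**2) * (freq_hz / f_yr) ** (-gamma) * YEAR_SECONDS**3
```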
S. Faridani (2017)
The short-spacing problem describes the inherent inability of radio-interferometric arrays to measure the integrated flux and structure of diffuse emission associated with extended sources. New interferometric arrays, such as the SKA, require solutions to efficiently combine interferometer and single-dish data. We present a new and open-source approach for merging single-dish and cleaned interferometric data sets that requires a minimum of data manipulation while offering rigid flux determination and full high angular resolution. Our approach combines single-dish and cleaned interferometric data in the image domain. This approach is tested for both Galactic and extragalactic HI data sets. Furthermore, a quantitative comparison of our results to commonly used methods is provided. Additionally, for the interferometric data sets of NGC4214 and NGC5055, we study the impact of different imaging parameters as well as their influence on the combination for NGC4214. The approach does not require the raw data (visibilities) or any additional special information such as antenna patterns. This is advantageous especially in light of upcoming radio surveys with heterogeneous antenna designs.
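The image-domain combination admits a compact sketch. The version below is a minimal rendering under simplifying assumptions (both maps already share a pixel grid and brightness units, the single-dish beam is Gaussian, and the interferometric beam is negligible by comparison); the published method treats units and beams more carefully.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shortspacing_combine(int_map, sd_map, sd_fwhm_pix, alpha=1.0):
    """Add the diffuse flux missed by the interferometer back into its map.

    int_map     : cleaned interferometric image
    sd_map      : single-dish image regridded onto the same pixel grid
    sd_fwhm_pix : single-dish beam FWHM in pixels (Gaussian assumed)
    alpha       : unit / beam-area scaling between the two maps
    """
    sigma = sd_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # What the interferometer already recovered on single-dish scales:
    int_at_sd_res = gaussian_filter(int_map, sigma)
    # Missing short-spacing component, added back at full resolution:
    return int_map + alpha * (sd_map - int_at_sd_res)
```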