Photometry of stars from the K2 extension of NASA's Kepler mission is afflicted by systematic effects caused by small (few-pixel) drifts in the telescope pointing and other spacecraft issues. We present a method for searching K2 light curves for evidence of exoplanets by simultaneously fitting for these systematics and the transit signals of interest. This method is more computationally expensive than standard search algorithms, but we demonstrate that it can be efficiently implemented and used to discover transit signals. We apply this method to the full Campaign 1 dataset and report a list of 36 planet candidates transiting 31 stars, along with an analysis of the pipeline performance and detection efficiency based on artificial signal injections and recoveries. For all planet candidates, we present posterior distributions on the properties of each system based strictly on the transit observables.
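To make the simultaneous fit concrete, here is a minimal sketch of one step of such a search, assuming a linear systematics model (e.g., eigen light curves built from many other stars) and a box-shaped transit; this illustrates the joint-fitting idea, not the authors' actual pipeline, and all names are mine:

    import numpy as np

    def transit_depth(time, flux, basis, period, t0, duration):
        """Simultaneously fit a linear systematics model and a box transit.

        basis : (n_samples, n_basis) systematics regressors, e.g. eigen
                light curves from many other stars (assumed given).
        Returns the best-fit depth and its uncertainty, assuming white noise.
        """
        phase = (time - t0 + 0.5 * period) % period - 0.5 * period
        box = -1.0 * (np.abs(phase) < 0.5 * duration)   # unit-depth transit
        A = np.column_stack([basis, box])               # joint design matrix
        coeffs, _, _, _ = np.linalg.lstsq(A, flux, rcond=None)
        resid = flux - A @ coeffs
        cov = np.var(resid) * np.linalg.inv(A.T @ A)    # rough parameter covariance
        return coeffs[-1], np.sqrt(cov[-1, -1])

Scanning (period, t0, duration) over a grid and keeping the largest depth-to-uncertainty ratio yields a detection statistic in which any signal absorbed by the systematics model is correctly penalized, which is the point of fitting both at once.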
New spectroscopic surveys offer the promise of consistent stellar parameters and abundances (stellar labels) for hundreds of thousands of stars in the Milky Way; this poses a formidable spectral modeling challenge. In many cases, there is a subset of reference objects for which the stellar labels are known with high(er) fidelity. We take advantage of this with The Cannon, a new data-driven approach for determining stellar labels from spectroscopic data. The Cannon learns from the known labels of reference stars how the continuum-normalized spectra depend on these labels by fitting a flexible model at each wavelength; then, The Cannon uses this model to derive labels for the remaining survey stars. We illustrate The Cannon by training the model on only 542 stars in 19 clusters as reference objects, with $T_{\rm eff}$, $\log g$, and [Fe/H] as the labels, and then applying it to the spectra of 56,000 stars from APOGEE DR10. The Cannon is very accurate: its stellar labels compare well to the stars for which APOGEE pipeline (ASPCAP) labels are provided in DR10, with rms differences that are basically identical to the stated ASPCAP uncertainties. Beyond the reference labels, The Cannon makes no use of stellar models nor any line lists, but needs a set of reference objects that span label space. The Cannon performs well at lower signal-to-noise, as it delivers comparably good labels even at one ninth the APOGEE observing time. We discuss the limitations of The Cannon and its future potential, particularly its promise for bringing different spectroscopic surveys onto a consistent scale of stellar labels.
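The two steps can be sketched in a few lines, under simplifying assumptions (a quadratic-in-labels model, no per-wavelength intrinsic scatter term, already-normalized spectra; function names are mine):

    import numpy as np
    from scipy.optimize import minimize

    def design(labels):
        """Quadratic label vectorizer: [1, labels, label cross terms]."""
        l = np.atleast_2d(labels)
        quad = np.einsum('ni,nj->nij', l, l)
        iu = np.triu_indices(l.shape[1])
        return np.hstack([np.ones((l.shape[0], 1)), l, quad[:, iu[0], iu[1]]])

    def train(train_labels, train_flux):
        """Training step: least-squares fit of the model at every pixel."""
        A = design(train_labels)                    # (n_star, n_coeff)
        theta, _, _, _ = np.linalg.lstsq(A, train_flux, rcond=None)
        return theta                                # (n_coeff, n_pix)

    def infer_labels(theta, flux, ivar, x0):
        """Test step: optimize the labels so the model matches one spectrum."""
        chi2 = lambda l: np.sum(ivar * (design(l)[0] @ theta - flux) ** 2)
        return minimize(chi2, x0, method='Nelder-Mead').x

The training step is a linear least-squares problem at each wavelength, which is what makes the approach cheap; label inference is then a small nonlinear optimization per survey star.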
Rob Fergus (2014)
High dynamic-range imagers aim to block out or null light from a very bright primary star to make it possible to detect and measure far fainter companions; in real systems a small fraction of the primary light is scattered, diffracted, and unocculted. We introduce S4, a flexible data-driven model for the unocculted (and highly speckled) light in the P1640 spectroscopic coronagraph. The model uses Principal Components Analysis (PCA) to capture the spatial structure and wavelength dependence of the speckles but not the signal produced by any companion. Consequently, the residual typically includes the companion signal. The companion can thus be found by filtering this error signal with a fixed companion model. The approach is sensitive to companions that are of order a percent of the brightness of the speckles, or up to $10^{-7}$ times the brightness of the primary star. This outperforms existing methods by a factor of 2-3 and is close to the shot-noise physical limit.
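A stripped-down, spatial-only version of the idea (the published model also captures the wavelength dependence of the speckles and guards against absorbing companion-like structure into the basis; names and details here are mine):

    import numpy as np

    def speckle_residual(cube, n_components):
        """Project each frame onto the leading PCA components of the stack
        and subtract; the companion, being spatially localized and behaving
        differently from the speckles, survives mostly in the residual.
        cube : (n_frames, n_pixels)."""
        mean = cube.mean(axis=0)
        U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
        basis = Vt[:n_components]                   # speckle eigenimages
        coeffs = (cube - mean) @ basis.T
        return cube - mean - coeffs @ basis         # residual per frame

    def matched_filter(resid, template):
        """Score each residual frame against a fixed companion model."""
        t = template / np.linalg.norm(template)
        return resid @ t                            # one score per frame

Because the companion occupies few pixels and projects weakly onto the leading components, it is largely preserved in the residual, where the matched filter can recover it.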
No true extrasolar Earth analog is known. Hundreds of planets have been found around Sun-like stars that are either Earth-sized but on shorter periods, or else on year-long orbits but somewhat larger. Under strong assumptions, exoplanet catalogs have been used to make an extrapolated estimate of the rate at which Sun-like stars host Earth analogs. These studies are complicated by the fact that every catalog is censored by non-trivial selection effects and detection efficiencies, and every property (period, radius, etc.) is measured noisily. Here we present a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets, taking into account survey completeness and, for the first time, observational uncertainties. We are able to make fewer assumptions about the distribution than previous studies; we only require that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process). By applying our method to synthetic catalogs, we demonstrate that it produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency. We apply the method to an existing catalog of small planet candidates around G dwarf stars (Petigura et al. 2013). We confirm a previous result that the radius distribution changes slope near Earth's radius. We find that the rate density of Earth analogs is about 0.02 (per star per natural logarithmic bin in period and radius) with large uncertainty. This number is much smaller than previous estimates made with the same data but stronger assumptions.
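Schematically, the hierarchical inference maximizes the likelihood of an inhomogeneous Poisson process, with per-candidate measurement uncertainty handled by importance sampling over posterior samples (notation mine):

    $\ln p(\mathrm{catalog} \mid \theta) \approx \sum_k \ln\left[\frac{1}{N_k}\sum_{n=1}^{N_k} \frac{Q(w_k^{(n)})\,\Gamma_\theta(w_k^{(n)})}{\pi(w_k^{(n)})}\right] - \int Q(w)\,\Gamma_\theta(w)\,\mathrm{d}w$

where $\Gamma_\theta(w)$ is the occurrence rate density in $w = (\ln P, \ln R)$, $Q(w)$ is the survey completeness, and the $w_k^{(n)}$ are posterior samples for candidate $k$ drawn under an interim prior $\pi$; the Gaussian process enters as a smoothness prior on $\ln\Gamma_\theta$.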
We present a new method for constraining the Milky Way halo gravitational potential by simultaneously fitting multiple tidal streams. This method requires full three-dimensional positions and velocities for all stars to be fit, but does not require identification of any specific stream or determination of stream membership for any star. We exploit the principle that the action distribution of stream stars is most clustered when the potential used to calculate the actions is closest to the true potential. Clustering is quantified with the Kullback-Leibler divergence (KLD), which also provides conditional uncertainties for our parameter estimates. We show, for toy Gaia-like data in a spherical isochrone potential, that maximizing the KLD of the action distribution relative to a smoother distribution recovers the true values of the potential parameters. The precision depends on the observational errors and the number of streams in the sample; using K III giants as tracers, we measure the enclosed mass at the average radius of the sample stars accurate to 3% and precise to 20-40%. Recovery of the scale radius is precise to 25%, and is biased 50% high by the small galactocentric distance range of stars in our mock sample (1-25 kpc, or about three scale radii, with mean 6.5 kpc). About 15 streams, with at least 100 stars per stream, are needed to obtain upper and lower bounds on the enclosed mass and scale radius when observational errors are taken into account; 20-25 streams are required to stabilize the size of the confidence interval. If radial velocities are provided for stars out to 100 kpc (10 scale radii), all parameters can be determined with 10% accuracy and 20% precision (1.3% accuracy in the case of the enclosed mass), underlining the need for ground-based spectroscopic follow-up to complete the radial velocity catalog for faint halo stars observed by Gaia.
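A toy version of the inner loop, using galpy for the isochrone actions and kernel density estimates for the clustering measure (the smoother comparison distribution here is simply a broader KDE, which differs from the paper's construction; coordinates are in galpy's natural units):

    import numpy as np
    from scipy.stats import gaussian_kde
    from galpy.potential import IsochronePotential
    from galpy.actionAngle import actionAngleIsochrone

    def action_kld(M, b, R, vR, vT, z, vz):
        """Clustering of the action distribution under a trial isochrone
        potential with amplitude M and scale radius b; larger is better."""
        aA = actionAngleIsochrone(ip=IsochronePotential(amp=M, b=b))
        jr, lz, jz = aA(R, vR, vT, z, vz)
        J = np.vstack([jr, lz, jz])
        p = gaussian_kde(J)                  # clustered density estimate
        q = gaussian_kde(J, bw_method=1.0)   # heavily smoothed comparison
        return np.mean(np.log(p(J)) - np.log(q(J)))   # Monte Carlo KLD

Maximizing this over (M, b) recovers the potential parameters; the shape of the KLD surface around the maximum is what supplies the conditional uncertainties mentioned above.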
The fully marginalized likelihood, or Bayesian evidence, is of great importance in probabilistic data analysis, because it is involved in calculating the posterior probability of a model or re-weighting a mixture of models conditioned on data. It is, however, extremely challenging to compute. This paper presents a geometric-path Monte Carlo method, inspired by multi-canonical Monte Carlo, to evaluate the fully marginalized likelihood. We show that the algorithm is very fast and easy to implement and produces a justified uncertainty estimate on the fully marginalized likelihood. The algorithm performs efficiently on a trial problem and on multi-companion model fitting for radial velocity data. For the trial problem, the algorithm returns the correct fully marginalized likelihood, and the estimated uncertainty is consistent with the standard deviation of results from multiple runs. We apply the algorithm to the problem of fitting radial velocity data from HIP 88048 ($\nu$ Oph) and Gliese 581. We evaluate the fully marginalized likelihood of 1-, 2-, 3-, and 4-companion models given data from HIP 88048 and various choices of prior distributions. We consider prior distributions with three different minimum radial velocity amplitudes $K_{\mathrm{min}}$. Under all three priors, the 2-companion model has the largest marginalized likelihood, but the detailed values depend strongly on $K_{\mathrm{min}}$. We also evaluate the fully marginalized likelihood of 3-, 4-, 5-, and 6-planet models given data from Gliese 581 and find that the fully marginalized likelihood of the 5-planet model is too close to that of the 6-planet model for us to confidently decide between them.
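For flavor, here is plain thermodynamic integration along the geometric path $p_\beta(\theta) \propto p(\theta)\,L(\theta)^\beta$; the paper's multi-canonical-inspired algorithm is more sophisticated (and delivers a principled uncertainty estimate), but it targets the same quantity:

    import numpy as np
    import emcee

    def ln_evidence(ln_prior, ln_like, ndim, n_beta=16, nwalkers=32, nsteps=2000):
        """Thermodynamic integration: ln Z = integral_0^1 <ln L>_beta dbeta,
        where <.>_beta averages over p_beta ~ prior * likelihood**beta
        (the prior is assumed normalized, so Z(0) = 1)."""
        betas = np.linspace(0.0, 1.0, n_beta)
        means = np.empty(n_beta)
        for i, beta in enumerate(betas):
            lnpost = lambda x, b=beta: ln_prior(x) + b * ln_like(x)
            sampler = emcee.EnsembleSampler(nwalkers, ndim, lnpost)
            p0 = 0.1 * np.random.randn(nwalkers, ndim)  # naive init; adapt to the prior
            sampler.run_mcmc(p0, nsteps)
            chain = sampler.get_chain(discard=nsteps // 2, flat=True)
            means[i] = np.mean([ln_like(x) for x in chain])
        db = np.diff(betas)
        return np.sum(0.5 * (means[1:] + means[:-1]) * db)  # trapezoid rule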
David W. Hogg (2013)
Kepler's immense photometric precision to date was maintained through satellite stability and precise pointing. In this white paper, we argue that image modeling--fitting the Kepler-downlinked raw pixel data--can vastly improve the precision of Kepler in pointing-degraded two-wheel mode. We argue that a non-trivial modeling effort may permit continuance of photometry at 10-ppm-level precision. We demonstrate some baby steps towards precise models in both data-driven (flexible) and physics-driven (interpretably parameterized) modes. We demonstrate that the expected drift or jitter in positions in the two-wheel era will help with constraining calibration parameters. In particular, we show that we can infer the device flat-field at higher than pixel resolution; that is, we can infer pixel-to-pixel variations in intra-pixel sensitivity. These results are relevant to almost any scientific goal for the repurposed mission; image modeling ought to be a part of any two-wheel repurpose for the satellite. We make other recommendations for Kepler operations, but fundamentally advocate that the project stick with its core mission of finding and characterizing Earth analogs. [abridged]
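As a cartoon of why jitter helps self-calibration (a toy alternating-least-squares scheme, not the white paper's actual models; scene_model and the per-frame shifts are assumed given), each frame is modeled as amplitude x flat-field x shifted scene, and the pointing drift moves stars across pixel boundaries so the flat-field and the scene become separately identifiable:

    import numpy as np

    def self_calibrate(frames, scene_model, shifts, n_iter=20):
        """Toy alternating least squares for frames ~ amp_t * flat_i * scene_ti.

        frames : (n_t, n_pix) raw pixel values
        scene_model(shift) -> (n_pix,) predicted unit scene at that pointing
        shifts : per-frame pointing offsets (the jitter that breaks degeneracy)
        """
        flat = np.ones(frames.shape[1])
        for _ in range(n_iter):
            scenes = np.array([scene_model(dx) for dx in shifts])
            fs = flat * scenes                              # (n_t, n_pix)
            amps = np.sum(frames * fs, axis=1) / np.sum(fs ** 2, axis=1)
            model = amps[:, None] * scenes                  # model sans flat
            flat = np.sum(frames * model, axis=0) / np.sum(model ** 2, axis=0)
            flat /= flat.mean()   # fix the flat/amplitude scale degeneracy
        return flat

Without the shifts, any pixel-to-pixel sensitivity pattern could be absorbed into the scene; the drift is what makes the flat-field measurable from science data alone.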
We introduce a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters compared to $\sim N^2$ for a traditional algorithm in an N-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation and API. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the MIT License.
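Typical usage looks like this (a toy 10-dimensional Gaussian target; the accessor names follow the current emcee release, which postdates the paper):

    import numpy as np
    import emcee

    def log_prob(x):
        return -0.5 * np.sum(x ** 2)        # toy target: 10-D standard normal

    ndim, nwalkers = 10, 32
    p0 = np.random.randn(nwalkers, ndim)    # one starting point per walker
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 5000)
    samples = sampler.get_chain(discard=1000, flat=True)
    print(sampler.get_autocorr_time())      # the paper's performance metric

The only tuning knobs are the number of walkers and (optionally) the stretch-move scale parameter, in contrast to the full proposal covariance a Metropolis-Hastings sampler would need.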
We present the SDSS-XDQSO quasar targeting catalog for efficient flux-based quasar target selection down to the faint limit of the Sloan Digital Sky Survey (SDSS) catalog, even at medium redshifts (2.5 <~ z <~ 3) where the stellar contamination is significant. We build models of the distributions of stars and quasars in flux space down to the flux limit by applying the extreme-deconvolution method to estimate the underlying density. We convolve this density with the flux uncertainties when evaluating the probability that an object is a quasar. This approach results in a targeting algorithm that is more principled, more efficient, and faster than other similar methods. We apply the algorithm to derive low-redshift (z < 2.2), medium-redshift (2.2 <= z <= 3.5), and high-redshift (z > 3.5) quasar probabilities for all 160,904,060 point sources with dereddened i-band magnitude between 17.75 and 22.45 mag in the 14,555 deg^2 of imaging from SDSS Data Release 8. The catalog can be used to define a uniformly selected and efficient low- or medium-redshift quasar survey, such as that needed for the SDSS-III's Baryon Oscillation Spectroscopic Survey project. We show that the XDQSO technique performs as well as the current best photometric quasar-selection technique at low redshift, and outperforms all other flux-based methods for selecting the medium-redshift quasars of our primary interest. We make code to reproduce the XDQSO quasar target selection publicly available.
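The key evaluation step can be sketched as follows (assuming Gaussian-mixture parameters already fit by extreme deconvolution; the real catalog also folds in redshift-binned number counts, which are collapsed here into a single prior):

    import numpy as np
    from scipy.stats import multivariate_normal

    def xd_density(flux, flux_cov, weights, means, covs):
        """Evaluate an extreme-deconvolution mixture at one observed flux
        vector, convolving each Gaussian component with the measurement
        covariance (the 'convolve with the flux uncertainties' step)."""
        return sum(w * multivariate_normal.pdf(flux, mean=m, cov=c + flux_cov)
                   for w, m, c in zip(weights, means, covs))

    def p_quasar(flux, flux_cov, qso_mix, star_mix, prior_qso):
        """Posterior probability that one point source is a quasar."""
        pq = prior_qso * xd_density(flux, flux_cov, *qso_mix)
        ps = (1.0 - prior_qso) * xd_density(flux, flux_cov, *star_mix)
        return pq / (pq + ps)

Because the mixtures describe the noise-free flux distributions, the per-object errors enter only through this component-wise covariance addition, which is what makes the selection well behaved near the survey's faint limit.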
We develop a technique to investigate the possibility that some of the recently discovered ultra-faint dwarf satellites of the Milky Way might be cusp caustics rather than gravitationally self-bound systems. Such cusps can form when a stream of stars folds, creating a region where the projected 2-D surface density is enhanced. In this work, we construct a Poisson maximum likelihood test to compare the cusp and exponential models of any substructure on an equal footing. We apply the test to the Hercules dwarf (d ~ 113 kpc, M_V ~ -6.2, e ~ 0.67). The flattened exponential model is strongly favored over the cusp model in the case of Hercules, ruling out at high confidence that Hercules is a cusp catastrophe. This test can be applied to any of the Milky Way dwarfs, and more generally to the entire stellar halo population, to search for the cusp catastrophes that might be expected in an accreted stellar halo.
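The test statistic is the ratio of unbinned Poisson likelihoods of the observed star positions under the two density models (a minimal sketch; the model-fitting step and any background term are omitted):

    import numpy as np

    def poisson_lnlike(xy, rate_fn, expected_total):
        """Unbinned inhomogeneous-Poisson log-likelihood of star positions.

        xy             : (n, 2) observed positions
        rate_fn(x, y)  : model surface density (stars per unit area)
        expected_total : integral of rate_fn over the field of view
        """
        return np.sum(np.log(rate_fn(xy[:, 0], xy[:, 1]))) - expected_total

    # Model comparison (rate functions hypothetical; their fits are omitted):
    # dlnL = (poisson_lnlike(xy, exponential_rate, N_exp)
    #         - poisson_lnlike(xy, cusp_rate, N_cusp))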