
The Importance of Telescope Training in Data Interpretation

Published by David Whelan
Publication date: 2019
Research field: Physics
Paper language: English
By D. G. Whelan





In this State of the Profession Consideration, we will discuss the state of hands-on observing within the profession, including: information about professional observing trends; student telescope training, beginning at the undergraduate and graduate levels, as a key to ensuring a base level of technical understanding among astronomers; the role that amateurs can take moving forward; the impact of telescope training on using survey data effectively; and the need for modest investments in new, standard instrumentation at mid-size aperture telescope facilities to ensure their usefulness for the next decade.




Read also

Partial measurements of relative position are a relatively common event during the observation of visual binary stars. However, these observations are typically discarded when estimating the orbit of a visual pair. In this article we present a novel framework to characterize the orbits from a Bayesian standpoint, including partial observations of relative position as an input for the estimation of orbital parameters. Our aim is to formally incorporate the information contained in those partial measurements in a systematic way into the final inference. In the statistical literature, an imputation is defined as the replacement of a missing quantity with a plausible value. To compute posterior distributions of orbital parameters with partial observations, we propose a technique based on Markov chain Monte Carlo with multiple imputation. We present the methodology and test the algorithm with both synthetic and real observations, studying the effect of incorporating partial measurements in the parameter estimation. Our results suggest that the inclusion of partial measurements into the characterization of visual binaries may lead to a reduction in the uncertainty associated with each orbital element, in terms of a decrease in dispersion measures (such as the interquartile range) of the posterior distribution of relevant orbital parameters. The extent to which the uncertainty decreases after the incorporation of new data (either complete or partial) depends on how informative those newly-incorporated measurements are. Quantifying the information contained in each measurement remains an open issue.
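
The multiple-imputation idea above can be sketched in a few lines. The toy below is not the authors' orbit model: it estimates the mean of a bivariate Gaussian from observations whose second coordinate is sometimes missing, alternating an imputation step (draw plausible values for the missing coordinates given the current parameters) with a Metropolis step on the parameters. The noise level, the flat prior, and the proposal scale are all illustrative assumptions.

```python
# Toy Metropolis-within-Gibbs sampler with a multiple-imputation step.
# NOT the paper's orbit model: we estimate the mean of a bivariate
# Gaussian where ~30% of the observations are missing their second
# coordinate, illustrating how partial measurements can still inform
# the posterior instead of being discarded.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                        # known, isotropic noise (assumption)
true_mu = np.array([2.0, -1.0])

n = 50
y = true_mu + sigma * rng.standard_normal((n, 2))
missing = rng.random(n) < 0.3      # flag partial observations
y[missing, 1] = np.nan             # second coordinate unobserved

def log_lik(mu, data):
    # Gaussian log-likelihood of the completed data set (flat prior).
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

mu = np.zeros(2)
chain = []
for _ in range(5000):
    # Imputation: draw the missing coordinates from their conditional
    # given the current parameters (here simply N(mu[1], sigma^2)).
    y_full = y.copy()
    y_full[missing, 1] = mu[1] + sigma * rng.standard_normal(missing.sum())

    # Metropolis update of the parameters given the completed data.
    prop = mu + 0.1 * rng.standard_normal(2)
    if np.log(rng.random()) < log_lik(prop, y_full) - log_lik(mu, y_full):
        mu = prop
    chain.append(mu.copy())

chain = np.array(chain[1000:])     # discard burn-in
print("posterior mean estimate:", chain.mean(axis=0))
```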
The problem of astrometry is revisited from the perspective of analyzing the attainability of well-known performance limits (the Cramer-Rao bound) for the estimation of the relative position of light-emitting (usually point-like) sources on a CCD-like detector using commonly adopted estimators such as the weighted least squares and the maximum likelihood. Novel technical results are presented to determine the performance of an estimator that corresponds to the solution of an optimization problem in the context of astrometry. Using these results we are able to place stringent bounds on the bias and the variance of the estimators in closed form as a function of the data. We confirm these results through comparisons to numerical simulations under a broad range of realistic observing conditions. The maximum likelihood and the weighted least-squares estimators are analyzed. We confirm the sub-optimality of the weighted least-squares scheme in the medium to high signal-to-noise regime, as found in an earlier study for the (unweighted) least-squares method. We find that the maximum likelihood estimator achieves optimal performance limits across a wide range of relevant observational conditions. Furthermore, from our results, we provide concrete insights for adopting an adaptive weighted least-squares estimator that can be regarded as a computationally efficient alternative to the optimal maximum likelihood solution. We provide, for the first time, closed-form analytical expressions that bound the bias and the variance of the weighted least-squares and maximum likelihood implicit estimators for astrometry using a Poisson-driven detector. These expressions can be used to formally assess the precision attainable by these estimators in comparison with the minimum variance bound.
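
As a concrete illustration of the comparison above, the sketch below simulates a one-dimensional Poisson-count detector with a Gaussian PSF, computes the Cramer-Rao bound from the Fisher information, and estimates the source position by maximum likelihood over a grid. The pixel count, PSF width, flux, and background are illustrative assumptions, not the paper's configuration.

```python
# 1-D astrometry toy: maximum-likelihood position on a Poisson-count
# pixel array versus the Cramer-Rao bound. Pixel grid, PSF width,
# flux, and background are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
pixels = np.arange(32)                     # pixel centers
sigma_psf, flux, bkg = 1.5, 2000.0, 10.0   # source and sky (assumed)
x_true = 15.3

def rates(xc):
    # Expected counts per pixel: Gaussian PSF plus flat background.
    psf = np.exp(-0.5 * ((pixels - xc) / sigma_psf) ** 2)
    return flux * psf / psf.sum() + bkg

# Cramer-Rao bound from the Poisson Fisher information,
# I(x) = sum_i (d lambda_i/dx)^2 / lambda_i, via a central difference.
eps = 1e-4
dlam = (rates(x_true + eps) - rates(x_true - eps)) / (2 * eps)
crb = 1.0 / np.sum(dlam**2 / rates(x_true))

# Maximum likelihood by a fine grid search over candidate positions.
grid = np.linspace(x_true - 2, x_true + 2, 801)
lam_grid = np.array([rates(x) for x in grid])
log_lam = np.log(lam_grid)
estimates = []
for _ in range(2000):
    counts = rng.poisson(rates(x_true))
    loglik = counts @ log_lam.T - lam_grid.sum(axis=1)
    estimates.append(grid[np.argmax(loglik)])

print(f"CRB variance: {crb:.5f}   empirical ML variance: {np.var(estimates):.5f}")
```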
We characterize the performance of the widely-used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not offer a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, in the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramer-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator for the astrometric position that can precisely reach the Cramer-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector where intra-pixel response changes are neglected, and where flat-fielding is achieved with very high accuracy.
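
A companion sketch for this abstract: the same kind of toy Poisson pixel model, but with the position recovered by the unweighted least-squares criterion, so its empirical mean-square error can be checked against the Cramer-Rao bound in a bright-source (high signal-to-noise) setting. All numerical values are again assumptions for illustration.

```python
# Companion toy: unweighted least-squares position on a Poisson pixel
# model, with a bright source so the high-SNR regime can be probed.
import numpy as np

rng = np.random.default_rng(2)
pixels = np.arange(32)
sigma_psf, flux, bkg = 1.5, 20000.0, 10.0   # bright source: high SNR
x_true = 15.3

def rates(xc):
    psf = np.exp(-0.5 * ((pixels - xc) / sigma_psf) ** 2)
    return flux * psf / psf.sum() + bkg

# Cramer-Rao bound, as in the previous sketch.
eps = 1e-4
dlam = (rates(x_true + eps) - rates(x_true - eps)) / (2 * eps)
crb = 1.0 / np.sum(dlam**2 / rates(x_true))

grid = np.linspace(x_true - 1, x_true + 1, 2001)
lam_grid = np.array([rates(x) for x in grid])
sq_err = []
for _ in range(2000):
    counts = rng.poisson(rates(x_true))
    # Least squares: minimize the unweighted sum of squared residuals.
    ss = ((counts - lam_grid) ** 2).sum(axis=1)
    sq_err.append((grid[np.argmin(ss)] - x_true) ** 2)

print(f"CRB: {crb:.6f}   LS empirical MSE: {np.mean(sq_err):.6f}")
```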
Model fitting is possibly the most extended problem in science. Classical approaches include the use of least-squares fitting procedures and maximum likelihood methods to estimate the value of the parameters in the model. However, in recent years, Bayesian inference tools have gained traction. Usually, Markov chain Monte Carlo methods are applied to inference problems, but they present some disadvantages, particularly when comparing different models fitted to the same dataset. Other Bayesian methods can deal with this issue in a natural and effective way. We have implemented an importance sampling algorithm adapted to Bayesian inference problems in which the power of the noise in the observations is not known a priori. The main advantage of importance sampling is that the model evidence can be derived directly from the so-called importance weights, while MCMC methods demand considerable postprocessing. The use of our adaptive target, adaptive importance sampling (ATAIS) method is shown by inferring, on the one hand, the parameters of a simulated flaring event which includes a damped oscillation and, on the other hand, real data from the Kepler mission. ATAIS includes a novel automatic adaptation of the target distribution. It automatically estimates the variance of the noise in the model. ATAIS admits parallelisation, which decreases the computational run-times notably. We compare our method against a nested sampling method within a model selection problem.
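
The key property named above, that the model evidence falls out of the importance weights, can be seen in a minimal (non-adaptive) importance sampler on a toy Gaussian model; ATAIS itself additionally adapts the proposal and estimates the noise variance, which this sketch does not attempt. The prior, the proposal, and the known noise level are assumptions.

```python
# Minimal (non-adaptive) importance sampler on a toy Gaussian model,
# showing the evidence read off directly from the importance weights.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = rng.normal(1.0, 0.5, size=20)        # toy observed data
sigma = 0.5                              # noise level (known here)

prior = norm(0.0, 2.0)                   # prior on the mean theta
proposal = norm(np.mean(y), 0.5)         # proposal near the data

theta = proposal.rvs(size=20000, random_state=rng)
log_lik = norm.logpdf(y[:, None], theta, sigma).sum(axis=0)
# Importance weight: prior * likelihood / proposal, kept in log space.
log_w = log_lik + prior.logpdf(theta) - proposal.logpdf(theta)

# Evidence Z = E_proposal[w]; the posterior mean is a weighted average.
w = np.exp(log_w - log_w.max())
log_evidence = log_w.max() + np.log(w.mean())
post_mean = np.sum(w * theta) / w.sum()
print(f"log evidence ~ {log_evidence:.3f}, posterior mean ~ {post_mean:.3f}")
```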
POLAR is a compact space-borne detector initially designed to measure the polarization of hard X-rays emitted from Gamma-Ray Bursts in the energy range 50-500 keV. This instrument was launched successfully onboard the Chinese space laboratory Tiangong-2 (TG-2) on 2016 September 15. After it was switched on a few days later, tens of gigabytes of raw detection data were produced in-orbit by POLAR and transferred to the ground every day. Before the launch date, a full pipeline and related software were designed and developed for the purpose of quickly pre-processing all the raw data from POLAR, which include both science data and engineering data, and then generating the high-level scientific data products that are suitable for later science analysis. This pipeline has been successfully applied by the POLAR Science Data Center at the Institute of High Energy Physics (IHEP) since POLAR was launched and switched on. A detailed introduction to the pipeline and some of the core relevant algorithms are presented in this paper.
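
Purely as an architectural sketch of the kind of staged pre-processing described, the skeleton below routes raw packets into science and engineering streams, decodes each, and assembles a high-level product. Every name, tag, and the packet format are hypothetical; none of this reflects the actual POLAR software.

```python
# Hypothetical skeleton of a staged pre-processing pipeline: raw
# packets are routed into science and engineering streams, decoded,
# and combined into a high-level product. All names are invented.

def split_streams(raw_packets):
    # Route packets by a made-up one-byte type tag.
    science = [p for p in raw_packets if p[0] == 0x01]
    engineering = [p for p in raw_packets if p[0] == 0x02]
    return science, engineering

def decode(packets):
    # Placeholder decoding: drop the tag byte, keep the payload.
    return [p[1:] for p in packets]

def make_products(science_events, housekeeping):
    # Assemble a (mock) high-level product from both streams.
    return {"n_events": len(science_events),
            "n_housekeeping": len(housekeeping)}

raw = [bytes([0x01, 7]), bytes([0x02, 3]), bytes([0x01, 9])]
sci, eng = split_streams(raw)
print(make_products(decode(sci), decode(eng)))
```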