
VARTOOLS: A Program for Analyzing Astronomical Time-Series Data

Added by Joel Hartman
Publication date: 2016
Fields: Physics
Language: English





This paper describes the VARTOOLS program, which is an open-source command-line utility, written in C, for analyzing astronomical time-series data, especially light curves. The program provides a general-purpose set of tools for processing light curves, including signal identification, filtering, light curve manipulation, time conversions, and modeling and simulating light curves.
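As an illustration of the kind of signal-identification step VARTOOLS provides from the command line (period searches such as Lomb-Scargle or BLS), here is a minimal sketch of a Lomb-Scargle period search on a synthetic light curve, using astropy rather than VARTOOLS itself; the data and values are invented for illustration.

```python
# Illustrative sketch (not part of VARTOOLS): find a periodic signal in a light
# curve with a Lomb-Scargle periodogram, the same class of period-search step
# that VARTOOLS exposes as a command-line operation.
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic light curve: times in days, magnitudes carrying a 2.5-day sinusoid.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 30.0, 400))
mag = 12.0 + 0.05 * np.sin(2.0 * np.pi * t / 2.5) + rng.normal(0.0, 0.01, t.size)
mag_err = np.full_like(t, 0.01)

# Periodogram over a frequency grid chosen automatically from the time baseline.
frequency, power = LombScargle(t, mag, mag_err).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"Best-fit period: {best_period:.3f} d")
```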



Related research


We present an interactive IDL program for viewing and analyzing astronomical spectra in the context of modern imaging surveys. SpecPro's interactive design lets the user simultaneously view spectroscopic, photometric, and imaging data, allowing for rapid object classification and redshift determination. The spectroscopic redshift can be determined with automated cross-correlation against a variety of spectral templates or by overlaying common emission and absorption features on the 1-D and 2-D spectra. Stamp images as well as the spectral energy distribution (SED) of a source can be displayed with the interface, with the positions of prominent photometric features indicated on the SED plot. Results can be saved to file from within the interface. In this paper we discuss key program features and provide an overview of the required data formats.
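The cross-correlation redshift step that SpecPro automates can be sketched in a few lines; the following is a rough, brute-force rendering of the idea (not SpecPro's IDL code), and the function name xcorr_redshift is hypothetical.

```python
# Brute-force template cross-correlation over a grid of trial redshifts:
# redshift the rest-frame template, resample it onto the observed wavelength
# grid, and keep the trial redshift with the highest normalized correlation.
import numpy as np

def xcorr_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux, z_grid):
    """Return the trial redshift that best correlates with the observed spectrum."""
    obs = (obs_flux - obs_flux.mean()) / obs_flux.std()
    scores = []
    for z in z_grid:
        shifted = np.interp(obs_wave, tmpl_wave * (1.0 + z), tmpl_flux,
                            left=np.nan, right=np.nan)
        good = np.isfinite(shifted)
        if good.sum() < 10:          # too little overlap at this trial redshift
            scores.append(-np.inf)
            continue
        tmpl = (shifted[good] - shifted[good].mean()) / shifted[good].std()
        scores.append(np.dot(obs[good], tmpl) / good.sum())
    return z_grid[int(np.argmax(scores))]
```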
exoplanet is a toolkit for probabilistic modeling of astronomical time series data, with a focus on observations of exoplanets, using PyMC3 (Salvatier et al., 2016). PyMC3 is a flexible and high-performance model-building language and inference engine that scales well to problems with a large number of parameters. exoplanet extends PyMC3's modeling language to support many of the custom functions and probability distributions required when fitting exoplanet datasets or other astronomical time series. While it has been used for other applications, such as the study of stellar variability, the primary purpose of exoplanet is the characterization of exoplanets or multiple star systems using time-series photometry, astrometry, and/or radial velocity. In particular, the typical use case would be to use one or more of these datasets to place constraints on the physical and orbital parameters of the system, such as planet mass or orbital period, while simultaneously taking into account the effects of stellar variability.
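As a hedged sketch of the style of model that exoplanet builds on top of PyMC3 (written directly in PyMC3, not with the exoplanet package's own API), the following fits a circular one-planet radial-velocity signal to synthetic data; the priors and values are illustrative only.

```python
# Minimal PyMC3 model for a sinusoidal radial-velocity signal with unknown
# semi-amplitude K, period P, and phase, sampled with MCMC.
import numpy as np
import pymc3 as pm

# Synthetic radial velocities (m/s) for a circular one-planet orbit.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, 40))
rv_err = np.full_like(t, 2.0)
rv = 8.0 * np.sin(2.0 * np.pi * t / 11.3 + 0.7) + rng.normal(0.0, rv_err)

with pm.Model():
    K = pm.HalfNormal("K", sd=20.0)               # semi-amplitude (m/s)
    P = pm.Uniform("P", lower=1.0, upper=50.0)    # orbital period (days)
    phi = pm.Uniform("phi", lower=0.0, upper=2.0 * np.pi)
    model_rv = K * pm.math.sin(2.0 * np.pi / P * t + phi)
    pm.Normal("obs", mu=model_rv, sd=rv_err, observed=rv)
    trace = pm.sample(1000, tune=1000, cores=1)
```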
Ce Yu, Kun Li, Shanjiang Tang (2020)
Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time domain astronomy. Due to the rapid growth of data volume, traditional manual methods are becoming infeasible for continuously analyzing accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series datasets. To support the high-performance parallel processing of large-scale datasets, AstroCatR uses an extract-transform-load (ETL) preprocessing module to create sky zone files and balance the workload. The matching module uses an overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. At the same time, the module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3X faster than methods using relational database management systems at matching massive catalogues.
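The zone-based matching idea can be illustrated with a toy in-memory version (AstroCatR itself is a standalone tool; the names and the flat-sky distance below are simplifications for illustration):

```python
# Toy sketch of zone-based positional cross-matching: bucket reference objects
# into declination zones held in memory, then match each new detection to the
# nearest reference object within a small radius, checking neighbouring zones.
import math
from collections import defaultdict

ZONE_HEIGHT_DEG = 0.1            # height of each declination zone
MATCH_RADIUS_DEG = 1.0 / 3600.0  # 1 arcsec match radius

def build_zone_table(reference):
    """reference: iterable of (obj_id, ra_deg, dec_deg) -> dict of zone -> members."""
    table = defaultdict(list)
    for obj_id, ra, dec in reference:
        table[int(dec // ZONE_HEIGHT_DEG)].append((obj_id, ra, dec))
    return table

def match(ra, dec, table):
    """Return the id of the closest reference object within the radius, or None."""
    best, best_d = None, MATCH_RADIUS_DEG
    zone = int(dec // ZONE_HEIGHT_DEG)
    for z in (zone - 1, zone, zone + 1):
        for obj_id, rra, rdec in table.get(z, ()):
            # Flat-sky angular separation, adequate at arcsecond scales.
            d = math.hypot((ra - rra) * math.cos(math.radians(dec)), dec - rdec)
            if d <= best_d:
                best, best_d = obj_id, d
    return best
```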
Peter Erwin (2014)
I describe a new, open-source astronomical image-fitting program called Imfit, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design which allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include the usual suspects for galaxy decompositions (Sersic, exponential, Gaussian), along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components which perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-S/N galaxy images using chi^2 minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
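The two fit statistics discussed above can be written compactly. The sketch below is a generic NumPy rendering (not Imfit's C++ implementation) of chi^2 with per-pixel sigmas alongside the Poisson-based Cash statistic, the form whose use avoids the low-count bias described in the abstract.

```python
# chi^2 with per-pixel Gaussian sigmas versus the Poisson maximum-likelihood
# (Cash) statistic C = 2 * sum(model - data * ln(model)), written for flattened
# image arrays; the model values must be strictly positive for the Cash form.
import numpy as np

def chi2_stat(data, model, sigma):
    """Standard chi^2 with user-supplied (or data/model-estimated) sigmas."""
    return np.sum(((data - model) / sigma) ** 2)

def cash_stat(data, model):
    """Poisson-based Cash statistic, up to a model-independent constant."""
    return 2.0 * np.sum(model - data * np.log(model))
```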
We present a new framework to detect various types of variable objects within massive astronomical time-series data. Assuming that the dominant population of objects is non-variable, we find outliers from this population by using a non-parametric Bayesian clustering algorithm based on an infinite Gaussian Mixture Model (GMM) and the Dirichlet process. The algorithm extracts information from a given dataset, which is described by six variability indices. The GMM uses those variability indices to recover clusters that are described by six-dimensional multivariate Gaussian distributions, allowing our approach to consider the sampling pattern of time-series data, systematic biases, the number of data points for each light curve, and photometric quality. Using the Northern Sky Variability Survey data, we test our approach and show that the infinite GMM is useful at detecting variable objects while providing statistical inference that suppresses false detections. The proposed approach will be effective in the exploration of future surveys such as Gaia, Pan-STARRS, and LSST, which will produce massive time-series data.
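scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior is a convenient stand-in for the infinite GMM described above; the sketch below (with random placeholder values in place of real variability indices) fits the six-dimensional index space and flags low-likelihood objects as variability candidates.

```python
# Dirichlet-process Gaussian mixture as an outlier detector: fit the dominant
# (non-variable) population in the space of six variability indices, then flag
# objects with unusually low likelihood under the fitted mixture.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# X: (n_light_curves, 6) array of variability indices (placeholder data here).
X = np.random.default_rng(1).normal(size=(1000, 6))

gmm = BayesianGaussianMixture(
    n_components=10,                                    # upper bound on clusters
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(X)

log_density = gmm.score_samples(X)                      # per-object log-likelihood
outliers = np.where(log_density < np.percentile(log_density, 1.0))[0]
print(f"Flagged {outliers.size} candidate variable objects")
```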
