
gPhoton: The GALEX Photon Data Archive

Added by Chase Million
Publication date: 2016
Fields: Physics
Research language: English





gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project's stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes (MAST). This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front-end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. As a brief example of the short time domain science capabilities enabled by gPhoton, we show new flares from the known M dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.
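The core idea behind generating light curves at user-defined temporal scales is to count time-tagged photon events in bins of a chosen width. The sketch below is a toy illustration of that concept only, not gPhoton's actual API; the real pipeline also applies relative-response, flat-field, and aperture corrections that are omitted here.

```python
import numpy as np

def bin_photons(times, stepsz=30.0):
    """Bin time-tagged photon event times (seconds) into a light curve.

    Illustrative only: counts events in bins of width `stepsz` and
    converts to a raw count rate, ignoring dead time and response
    corrections that a real calibration pipeline would apply.
    """
    t0, t1 = times.min(), times.max()
    edges = np.arange(t0, t1 + stepsz, stepsz)   # user-defined bin edges
    counts, _ = np.histogram(times, bins=edges)  # events per bin
    rates = counts / stepsz                      # counts per second
    centers = 0.5 * (edges[:-1] + edges[1:])     # bin midpoints
    return centers, rates
```

For example, 200 uniformly spaced events over 100 seconds binned at `stepsz=10.0` yield ten bins with a constant rate of 2 counts per second.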





The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest optical telescope in China. Over the last four years, the LAMOST telescope has published four editions of data (the pilot data release and data releases 1, 2, and 3). To archive and release these data (raw data, catalogs, spectra, etc.), we have set up a data cycle management system covering data transfer, archiving, and backup. And through the evolution of four softwa
The Parkes pulsar data archive currently provides access to 144044 data files obtained from observations carried out at the Parkes observatory since the year 1991. Around 10^5 files are from surveys of the sky, the remainder are observations of 775 individual pulsars and their corresponding calibration signals. Survey observations are included from the Parkes 70cm and the Swinburne Intermediate Latitude surveys. Individual pulsar observations are included from young pulsar timing projects, the Parkes Pulsar Timing Array and from the PULSE@Parkes outreach program. The data files and access methods are compatible with Virtual Observatory protocols. This paper describes the data currently stored in the archive and presents ways in which these data can be searched and downloaded.
The Parkes Pulsar Data Archive currently provides access to 165,755 data files obtained from observations carried out at the Parkes Observatory since the year 1991. Data files and access methods are compliant with the Virtual Observatory protocol. This paper provides a tutorial on how to make use of the Parkes Pulsar Data Archive and provides example queries using on-line interfaces.
Rob Seaman (2014)
From the moment astronomical observations are made the resulting data products begin to grow stale. Even if perfect binary copies are preserved through repeated timely migration to more robust storage media, data standards evolve and new tools are created that require different kinds of data or metadata. The expectations of the astronomical community change even if the data do not. We discuss data engineering to mitigate the ensuing risks with examples from a recent project to refactor seven million archival images to new standards of nomenclature, metadata, format, and compression.
Context: The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. [abridged]
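The pre-computed level-of-detail approach described above can be sketched in a few lines: rather than shipping a billion points to the client, the server aggregates them into a fixed-size density grid that approximates the scatter plot at a given zoom level. This is a minimal illustration of the aggregation idea, not the Gaia service's implementation; the grid size and any subsampling strategy are assumptions for the example.

```python
import numpy as np

def precompute_density(x, y, nbins=256):
    """Aggregate a large point set into a fixed-size 2D density grid.

    Serving this grid instead of the raw points keeps the payload
    size constant regardless of how many sources fall in the view,
    which is the essence of serving visual detail on demand.
    """
    grid, xedges, yedges = np.histogram2d(x, y, bins=nbins)
    return grid, xedges, yedges
```

Finer zoom levels can be served from grids pre-computed at higher `nbins`, or from subsampled point lists once the on-screen count is small enough to render directly.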
