
Sky in Google Earth: The Next Frontier in Astronomical Data Discovery and Visualization

Posted by Ryan Scranton
Publication date: 2007
Research field: Physics
Paper language: English





Astronomy began as a visual science, first through careful observations of the sky using either an eyepiece or the naked eye, then on to the preservation of those images with photographic media and finally the digital encoding of that information via CCDs. This last step has enabled astronomy to move into a fully automated era -- where data is recorded, analyzed and interpreted often without any direct visual inspection. Sky in Google Earth completes that circle by providing an intuitive visual interface to some of the largest astronomical imaging surveys covering the full sky. By streaming imagery, catalogs, time domain data, and ancillary information directly to a user, Sky can provide the general public as well as professional and amateur astronomers alike with a wealth of information for use in education and research. We provide here a brief introduction to Sky in Google Earth, focusing on its extensible environment, how it may be integrated into the research process and how it can bring astronomical research to a broader community. With an open interface available on Linux, Mac OS X and Windows, applications developed within Sky are accessible not just within the Google framework but through any visual browser that supports the Keyhole Markup Language. We present Sky as the embodiment of a virtual telescope.
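As a concrete illustration of the open, KML-based interface described above, the short Python sketch below writes a single Sky-mode placemark that any KML-aware browser should be able to load. It is not taken from the paper: the helper name skyplacemark, the example object (approximate M51 coordinates) and the assumed Sky-mode conventions (a hint="target=sky" attribute on the kml element, longitude = RA - 180 degrees, latitude = declination) are illustrative assumptions.

# Minimal sketch (not from the paper): generate a Sky-mode KML placemark.
# The RA/Dec values are illustrative (roughly M51); the coordinate mapping
# assumed here is longitude = RA - 180 deg, latitude = Dec.

def skyplacemark(name, ra_deg, dec_deg, description=""):
    """Return a KML string for a single object in Sky mode."""
    lon = ra_deg - 180.0  # assumed Sky-mode mapping of RA (0-360) onto -180..180
    lat = dec_deg         # declination maps directly to latitude
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" hint="target=sky">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point>
      <coordinates>{lon:.6f},{lat:.6f},0</coordinates>
    </Point>
  </Placemark>
</kml>"""

if __name__ == "__main__":
    # Write a .kml file that can be opened in Sky or any KML-capable viewer.
    with open("m51.kml", "w") as fh:
        fh.write(skyplacemark("M51 (Whirlpool Galaxy)",
                              ra_deg=202.4696, dec_deg=47.1952,
                              description="Example placemark streamed as KML"))

Because the payload is plain KML, the same file could equally be served over HTTP and streamed to users, which is the distribution model the abstract describes.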




Read also

C. Bordiu (2020)
We report the outcomes of a survey that explores the current practices, needs and expectations of the astrophysics community, concerning four research aspects: open science practices, data access and management, data visualization, and data analysis. The survey, involving 329 professionals from several research institutions, pinpoints significant gaps in matters such as results reproducibility, availability of visual analytics tools and adoption of Machine Learning techniques for data analysis. This research is conducted in the context of the H2020 NEANIAS project.
We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7--10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10--100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array pathfinder radiotelescopes.
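For readers who want to see what such global image statistics look like in code, here is a minimal sketch, not the authors' 96-GPU framework: it computes the mean, standard deviation and a histogram with CuPy, whose NumPy-compatible API runs the same reductions on a single GPU. The cube size, dtype and bin count are placeholders.

# Illustrative sketch only: global statistics of a 3-D image cube on one GPU.
import cupy as cp

def global_stats(cube):
    """Mean, standard deviation and a 256-bin histogram of a 3-D image."""
    mean = cube.mean()
    std = cube.std()
    hist, edges = cp.histogram(cube, bins=256)
    return float(mean), float(std), hist, edges

if __name__ == "__main__":
    # Toy cube standing in for a (much larger) survey image.
    cube = cp.random.normal(size=(256, 256, 256)).astype(cp.float32)
    mean, std, hist, edges = global_stats(cube)
    print(f"mean={mean:.4f} std={std:.4f} histogram bins={hist.size}")

Scaling this to terabyte images is exactly where the paper's multi-GPU cluster and distributed reductions come in; the single-device version above only shows the per-node building block.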
Wide-angle surveys have been an engine for new discoveries throughout the modern history of astronomy, and have been among the most highly cited and scientifically productive observing facilities in recent years. This trend is likely to continue over the next decade, as many of the most important questions in astrophysics are best tackled with massive surveys, often in synergy with each other and in tandem with the more traditional observatories. We argue that these surveys are most productive and have the greatest impact when the data from the surveys are made public in a timely manner. The rise of the survey astronomer is a substantial change in the demographics of our field; one of the most important challenges of the next decade is to find ways to recognize the intellectual contributions of those who work on the infrastructure of surveys (hardware, software, survey planning and operations, and databases/data distribution), and to make career paths to allow them to thrive.
A. H. Hassan, C. J. Fluke (2011)
Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the Petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with the goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality in a software-as-a-service manner will reduce the total cost of ownership, provide an easy-to-use tool for the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
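A hedged sketch of the chunk-and-reduce pattern such a framework relies on: each worker computes a partial minimum, maximum and histogram over a chunk that fits in memory, and the partial results are then combined. Python's multiprocessing is used here purely for illustration in place of the paper's GPU/CPU cluster; the bin range, chunk sizes and worker count are invented.

# Minimal sketch, not the paper's framework: map-reduce style statistics
# over chunks that never have to fit in one process's memory at once.
import numpy as np
from multiprocessing import Pool

BINS = np.linspace(-5.0, 5.0, 257)  # fixed bin edges so partial histograms can be summed

def chunk_stats(chunk):
    """Partial reduction for one chunk: (min, max, histogram)."""
    hist, _ = np.histogram(chunk, bins=BINS)
    return chunk.min(), chunk.max(), hist

def combine(results):
    """Reduce the per-chunk results into global statistics."""
    mins, maxs, hists = zip(*results)
    return min(mins), max(maxs), np.sum(hists, axis=0)

if __name__ == "__main__":
    # Stand-in for chunks streamed from a terabyte-scale data set.
    chunks = [np.random.normal(size=1_000_000).astype(np.float32) for _ in range(8)]
    with Pool(4) as pool:
        gmin, gmax, ghist = combine(pool.map(chunk_stats, chunks))
    print(f"min={gmin:.3f} max={gmax:.3f} total counts={int(ghist.sum())}")

The same partial-then-global reduction carries over whether the workers are CPU processes, GPUs or remote service nodes, which is why it suits the batched "analysis as a service" model the abstract proposes.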
We present variability analysis of data from the Northern Sky Variability Survey (NSVS). Using a clustering method that defines variable candidates as outliers from large clusters, we cluster 16,189,040 light curves, having data points at more than 15 epochs, as variable and non-variable candidates in 638 NSVS fields. Variable candidates are selected depending on how strongly they are separated from the largest cluster and how rarely they are grouped together in the eight-dimensional space spanned by variability indices. All NSVS light curves are also cross-correlated with Infrared Astronomical Satellite, AKARI, Two Micron All Sky Survey, Sloan Digital Sky Survey (SDSS), and Galaxy Evolution Explorer objects, as well as known objects in the SIMBAD database. The variability analysis and cross-correlation results are provided in a public online database which can be used to select interesting objects for further investigation. Adopting conservative selection criteria for variable candidates, we find about 1.8 million light curves that are possible variable candidates in the NSVS data, corresponding to about 10% of our entire NSVS sample. Multi-wavelength colors help us find specific types of variability among the variable candidates. Moreover, we also use morphological classification from other surveys such as SDSS to suppress spurious cases caused by blending objects or extended sources due to the low angular resolution of the NSVS.
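To make the outlier-from-the-largest-cluster idea concrete, the sketch below clusters toy variability indices and flags everything outside the dominant cluster as a variable candidate. It is not the NSVS pipeline: scikit-learn's DBSCAN stands in for the paper's clustering method, only two invented features are used instead of eight variability indices, and the thresholds and sample sizes are arbitrary.

# Hedged sketch: flag light curves far from the dominant (non-variable) cluster.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Toy feature table: [scatter about the median, amplitude] per light curve.
quiet = rng.normal(loc=[0.02, 0.05], scale=0.005, size=(1000, 2))
variable = rng.normal(loc=[0.15, 0.60], scale=0.05, size=(30, 2))
features = np.vstack([quiet, variable])

X = StandardScaler().fit_transform(features)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

# The largest cluster is taken to be the non-variable locus; everything else
# (smaller clusters and DBSCAN noise, label -1) becomes a variable candidate.
largest = np.bincount(labels[labels >= 0]).argmax()
candidates = np.flatnonzero(labels != largest)
print(f"{candidates.size} variable candidates out of {len(features)} light curves")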