
Creating A Galactic Plane Atlas With Amazon Web Services

Added by Bruce Berriman
Publication date: 2013
Language: English





This paper describes by example how astronomers can use cloud-computing resources offered by Amazon Web Services (AWS) to create new datasets at scale. From existing surveys, we have created an atlas of the Galactic Plane at 16 wavelengths from 1 μm to 24 μm, with pixels co-registered at a spatial sampling of 1 arcsec. We explain how open-source tools support the management and operation of a virtual cluster on AWS platforms to process data at scale, and describe the technical issues users will need to consider, such as optimization of resources, resource costs, and management of virtual machine instances.
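The trade-offs the abstract names (resource optimization, resource costs, instance management) can be sketched with a toy cost model. This is an illustration only: the function names and the rates used below are hypothetical placeholders, not figures from the paper or current AWS pricing.

```python
import math

def instances_needed(total_core_hours, wall_clock_hours, cores_per_instance):
    """Smallest cluster size that finishes the workload within the wall-clock budget."""
    return math.ceil(total_core_hours / (wall_clock_hours * cores_per_instance))

def cluster_cost(n_instances, hours, hourly_rate, gb_out=0.0, egress_rate_per_gb=0.09):
    """Rough compute-plus-data-egress estimate; all rates are placeholders."""
    return n_instances * hours * hourly_rate + gb_out * egress_rate_per_gb

# Example: 1000 core-hours of image processing, a 10-hour deadline,
# 8-core virtual machines, and 100 GB of results downloaded afterwards.
n = instances_needed(1000, 10, 8)            # 13 instances
cost = cluster_cost(n, 10, 0.40, gb_out=100)  # 61.0 (in the placeholder currency)
```

The same workload can instead be run on fewer, larger instances or over a longer wall-clock window; the paper's point is that such choices change the bill substantially, so they are worth estimating before processing at scale.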




Related Research

Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in running physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.
Data from the Herschel Space Observatory is freely available to the public, but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports searches based only on bounding circles. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using a Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site (http://herschel.vo.elte.hu) and also as a set of REST web service functions.
We describe our custom processing of the entire Wide-field Infrared Survey Explorer (WISE) 12 micron imaging data set, and present a high-resolution, full-sky map of diffuse Galactic dust emission that is free of compact sources and other contaminating artifacts. The principal distinctions between our resulting co-added images and the WISE Atlas stacks are our removal of compact sources, including their associated electronic and optical artifacts, and our preservation of spatial modes larger than 1.5 degrees. We provide access to the resulting full-sky map via a set of 430 12.5 degree by 12.5 degree mosaics. These stacks have been smoothed to 15 arcsec resolution and are accompanied by corresponding coverage maps, artifact images, and bit-masks for point sources, resolved compact sources, and other defects. When combined appropriately with other mid-infrared and far-infrared data sets, we expect our WISE 12 micron co-adds to form the basis for a full-sky dust extinction map with angular resolution several times better than Schlegel et al. (1998).
Q. Remy, L. Tibaldo, F. Acero (2021)
Observations with the current generation of very-high-energy gamma-ray telescopes have revealed an astonishing variety of particle accelerators in the Milky Way, such as supernova remnants, pulsar wind nebulae, and binary systems. The upcoming Cherenkov Telescope Array (CTA) will be the first instrument to enable a survey of the entire Galactic plane in the energy range from a few tens of GeV to 300 TeV with unprecedented sensitivity and improved angular resolution. In this contribution we revisit the scientific motivations for the survey, proposed as a Key Science Project for CTA. We highlight recent progress, including improved physically-motivated models for Galactic source populations and interstellar emission, advances in the optimization of the survey strategy, and the development of pipelines to derive source catalogues tested on simulated data. Based on this, we provide a new forecast of the properties of the sources that CTA will detect and discuss the expected scientific return from the study of gamma-ray source populations.
Automating various Web activities by replacing human-to-human interactions has become indispensable due to the enormous growth of the Internet. However, bots, also known as Web-bots, which have malicious intent and pretend to be humans, pose a severe threat to various services on the Internet that implicitly assume a human interaction. Accordingly, before allowing access to such services, Web service providers use various Human Interaction Proofs (HIPs) to verify that the user is a human and not a bot. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a class of HIP tests based on Artificial Intelligence. These tests are easy for humans to pass and hard for bots to simulate. Several Web services use CAPTCHAs as a defensive mechanism against automated Web-bots. In this paper, we review the existing CAPTCHA schemes that have been proposed or are being used to protect various Web services. We classify them into groups and compare them with each other in terms of security and usability. We present the general methods used to generate and break text-based and image-based CAPTCHAs. Further, we discuss various security and usability issues in CAPTCHA design and provide guidelines for improving their robustness and usability.
