
Data storage, processing and visualisation for the ATCA

Posted by: Tara Murphy
Publication date: 2006
Research field: Physics
Paper language: English





We present three Virtual Observatory tools developed at the ATNF for the storage, processing and visualisation of ATCA data. These are the Australia Telescope Online Archive, a prototype data reduction pipeline, and the Remote Visualisation System. These tools were developed in the context of the Virtual Observatory and were intended to be both useful for astronomers and technology demonstrators. We discuss the design and implementation of these tools, as well as issues that should be considered when developing similar systems for future telescopes.
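As a loose illustration of the kind of Virtual Observatory access such an online archive enables, the sketch below issues a standard VO Simple Cone Search request over HTTP. The endpoint URL is a placeholder rather than the actual Australia Telescope Online Archive interface, and the RA/DEC/SR parameters simply follow the generic cone-search convention; treat everything here as an assumption, not the ATNF tools' real API.

from urllib.parse import urlencode
from urllib.request import urlopen

SERVICE_URL = "https://example.org/atoa/conesearch"  # hypothetical endpoint, not the real ATOA URL

def cone_search(ra_deg, dec_deg, radius_deg):
    # Standard VO Simple Cone Search parameters: RA, DEC, SR (all in degrees).
    query = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    with urlopen(f"{SERVICE_URL}?{query}") as response:
        return response.read().decode("utf-8")   # a VOTable (XML) document

votable_xml = cone_search(187.7, -57.5, 0.25)
print(votable_xml[:200])   # inspect the start of the returned table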


Read also

313 - J. Urban, J. Pipek, M. Hron (2014)
We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (Compass DataBase), integrates different data sources, as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open-source technologies wherever possible, CDB is vendor and platform independent and can be easily scaled and distributed. The data is stored and retrieved directly using a standard NAS (Network Attached Storage), hence independent of any particular technology; the description of the data (the metadata) is recorded in a relational database. The database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed, as the work is off-loaded to the clients. Both the NAS and the database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisition systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is part of CDB. Based on dependency rules, the server executes prescribed post-processing tasks, in parallel where possible.
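To make the storage pattern described above concrete, here is a minimal sketch in Python (the language CDB is written in): the signal itself is written as a file on network storage while a row describing it goes into a relational database. The table layout, column names and paths are illustrative guesses, not the actual CDB schema.

import os
import sqlite3
import numpy as np

NAS_ROOT = "/mnt/nas/compass"              # placeholder NAS mount point
DB_PATH = os.path.join(NAS_ROOT, "metadata.db")

def store_signal(shot, name, revision, data):
    # The data file lives on the NAS, independent of any database technology.
    file_path = os.path.join(NAS_ROOT, str(shot), f"{name}_rev{revision}.npy")
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    np.save(file_path, data)

    # Only the metadata (name, revision, dimensionality, location) goes into the relational DB.
    with sqlite3.connect(DB_PATH) as db:
        db.execute("CREATE TABLE IF NOT EXISTS signals "
                   "(shot INTEGER, name TEXT, revision INTEGER, ndim INTEGER, path TEXT)")
        db.execute("INSERT INTO signals VALUES (?, ?, ?, ?, ?)",
                   (shot, name, revision, data.ndim, file_path))

# Example: store revision 1 of a one-dimensional trace for an arbitrary shot number.
store_signal(4073, "Ip", 1, np.zeros(2048))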
Compressed videos constitute 70% of Internet traffic, and video upload growth rates far outpace compute and storage improvement trends. Past work in leveraging perceptual cues like saliency, i.e., regions where viewers focus their perceptual attention, reduces compressed video size while maintaining perceptual quality, but requires significant changes to video codecs and ignores the data management of this perceptual information. In this paper, we propose Vignette, a compression technique and storage manager for perception-based video compression. Vignette complements off-the-shelf compression software and hardware codec implementations. Vignette's compression technique uses a neural network to predict saliency information used during transcoding, and its storage manager integrates perceptual information into the video storage system to support a perceptual compression feedback loop. Vignette's saliency-based optimizations reduce storage by up to 95% with minimal quality loss, and Vignette videos lead to power savings of 50% on mobile phones during video playback. Our results demonstrate the benefit of embedding information about the human visual system into the architecture of video storage systems.
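The core idea, spending bits where viewers look, can be sketched without any codec integration. The toy example below reduces a saliency map to one value per 16x16 tile and maps it to a quantisation-parameter offset, so less salient tiles are compressed more aggressively. This only illustrates the principle; it is not Vignette's actual transcoding path, and all names and values are made up.

import numpy as np

def tile_qp_offsets(saliency, tile=16, max_offset=10):
    # Average the saliency map over non-overlapping tiles, normalise to [0, 1],
    # then give the least salient tiles the largest (coarsest) QP offset.
    h, w = saliency.shape
    cropped = saliency[: h - h % tile, : w - w % tile]
    per_tile = cropped.reshape(cropped.shape[0] // tile, tile,
                               cropped.shape[1] // tile, tile).mean(axis=(1, 3))
    per_tile = (per_tile - per_tile.min()) / (per_tile.max() - per_tile.min() + 1e-9)
    return np.round((1.0 - per_tile) * max_offset).astype(int)

# Example: a synthetic saliency map peaking at the centre of a 720p frame.
y, x = np.mgrid[0:720, 0:1280]
saliency = np.exp(-((x - 640) ** 2 + (y - 360) ** 2) / (2 * 200.0 ** 2))
print(tile_qp_offsets(saliency).shape)   # (45, 80) offsets, one per 16x16 tile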
Fast pixelated detectors incorporating direct electron detection (DED) technology are increasingly being regarded as universal detectors for scanning transmission electron microscopy (STEM), capable of imaging under multiple modes of operation. However, several issues remain around the post-acquisition processing and visualisation of the often very large multidimensional STEM datasets they produce. We discuss these issues and present open-source software libraries that enable efficient processing and visualisation of such datasets. Throughout, we provide examples of the analysis methodologies presented, utilising data from a 256$\times$256 pixel Medipix3 hybrid DED detector, with a particular focus on the STEM characterisation of the structural properties of materials. These include the techniques of virtual detector imaging; higher-order Laue zone analysis; nanobeam electron diffraction; and scanning precession electron diffraction. In the latter, we demonstrate nanoscale lattice parameter mapping with a fractional precision $\le 6\times10^{-4}$ (0.06%).
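Of the techniques listed, virtual detector imaging is simple enough to sketch directly: for every scan position of a four-dimensional STEM dataset, the counts falling inside a chosen detector mask are summed to give one pixel of the output image. The array shapes and radii below are illustrative; the open-source libraries referred to in the abstract provide their own optimised implementations.

import numpy as np

def virtual_annular_image(data4d, r_inner, r_outer):
    # data4d has axes (scan_y, scan_x, det_y, det_x); the mask selects an
    # annulus on the detector, mimicking an annular dark-field detector.
    det_y, det_x = data4d.shape[2:]
    yy, xx = np.mgrid[0:det_y, 0:det_x]
    radius = np.hypot(yy - det_y / 2, xx - det_x / 2)
    mask = (radius >= r_inner) & (radius < r_outer)
    return data4d[:, :, mask].sum(axis=-1)

# Example: random counts for a 16x16 scan recorded on a 256x256 pixel detector.
data4d = np.random.poisson(1.0, size=(16, 16, 256, 256)).astype(np.uint16)
adf_image = virtual_annular_image(data4d, r_inner=40, r_outer=120)
print(adf_image.shape)   # (16, 16)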
Context: The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. [abridged]
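The "ADQL queries from visually defined regions" feature can be illustrated in a few lines: the corners of a rectangle drawn on a scatter plot are turned into a WHERE clause that could be submitted to the Gaia archive. The table and column names below follow the public Gaia DR1 TGAS naming but are meant as an illustration, not as the service's actual query generator.

def adql_from_box(table, xcol, ycol, x0, x1, y0, y1):
    # Build an ADQL query selecting the sources inside a plotted rectangle.
    return (f"SELECT source_id, {xcol}, {ycol} FROM {table} "
            f"WHERE {xcol} BETWEEN {min(x0, x1)} AND {max(x0, x1)} "
            f"AND {ycol} BETWEEN {min(y0, y1)} AND {max(y0, y1)}")

# Example: a box selected on a parallax versus proper-motion scatter plot.
print(adql_from_box("gaiadr1.tgas_source", "parallax", "pmra", 2.0, 10.0, -30.0, 30.0))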
85 - T. Weber, R. Georgii, P. Boni (2019)
Due to the instrument's non-trivial resolution function, measurements on triple-axis spectrometers require extra care from the experimenter in order to obtain optimal results and to avoid unwanted spurious artefacts. We present a free and open-source software system that aims to ease many of the tasks encountered during the planning, execution, and data treatment of experiments performed on neutron triple-axis spectrometers. The software is currently in use and has been successfully tested at the MLZ, but can be configured to work with other triple-axis instruments and instrument control systems.