
ALMA service data analysis and level 2 quality assurance with CASA

Added by: Dirk Petry
Publication date: 2014
Fields: Physics
Language: English
Authors: Dirk Petry





The Atacama Large mm and sub-mm Array (ALMA) radio observatory is one of the world's largest astronomical projects. After the very successful conclusion of the first observation cycles, Early Science Cycles 0 and 1, the ALMA project can report many successes and lessons learned. The science data, taken interleaved with commissioning tests for the still continuing addition of new capabilities, have already resulted in numerous publications in high-profile journals. The increasing data volume and complexity are challenging but under control. The radio-astronomical data analysis package Common Astronomy Software Applications (CASA) has played a crucial role in this effort. This article describes the implementation of the ALMA data quality assurance system, in particular level 2, which is based on CASA, and the lessons learned.
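The level 2 quality assurance (QA2) stage described here amounts to a full calibration and imaging pass through CASA whose products are then inspected before delivery to the investigator. Below is only a minimal sketch of such a pass using standard CASA tasks; the dataset name, field selections, reference antenna and imaging parameters are placeholders, not values from the article.

# Illustrative QA2-style calibration and imaging run (modular CASA, casatasks).
# All file names, field IDs and parameter values are placeholders.
from casatasks import importasdm, listobs, flagdata, setjy, bandpass, gaincal, applycal, tclean

importasdm(asdm='uid___A002_Xexample', vis='sci.ms')      # raw ASDM -> measurement set
listobs(vis='sci.ms', listfile='sci.listobs.txt')         # observation summary for the QA report

flagdata(vis='sci.ms', mode='manual', autocorr=True)      # remove autocorrelations
flagdata(vis='sci.ms', mode='shadow')                     # remove shadowed antennas

setjy(vis='sci.ms', field='0')                            # flux scale from the amplitude calibrator
bandpass(vis='sci.ms', caltable='bp.cal', field='0',
         solint='inf', refant='DA41')                     # bandpass solution
gaincal(vis='sci.ms', caltable='ph.cal', field='0,1',
        solint='int', calmode='p', refant='DA41',
        gaintable=['bp.cal'])                             # phase gains
applycal(vis='sci.ms', field='', gaintable=['bp.cal', 'ph.cal'])

tclean(vis='sci.ms', imagename='target_cont', field='2',
       specmode='mfs', deconvolver='hogbom',
       imsize=[512, 512], cell='0.2arcsec',
       niter=1000, threshold='1mJy')                      # verification image for QA2

The real QA2 scripts contain considerably more logic (Tsys and WVR corrections, per-spectral-window flagging, and the generation of diagnostic plots for the QA report), so this skeleton only indicates the overall flow.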



Related research

The MAJORANA DEMONSTRATOR is an experiment constructed to search for neutrinoless double-beta decays in germanium-76 and to demonstrate the feasibility of deploying a large-scale experiment in a phased and modular fashion. It consists of two modular arrays of natural and $^{76}$Ge-enriched germanium detectors totalling 44.1 kg, located at the 4850-foot level of the Sanford Underground Research Facility in Lead, South Dakota, USA. Any neutrinoless double-beta decay search requires a thorough understanding of the background and the signal energy spectra. The various techniques employed to ensure the integrity of the measured spectra are discussed. Data collection is monitored with a thorough set of checks, and subsequent careful analysis is performed to qualify the data for higher level physics analysis. Instrumental background events are tagged for removal, and problematic channels are removed from consideration as necessary.
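As a purely illustrative sketch of the qualification step described above (the event fields and bad-channel list below are hypothetical, not taken from the experiment):

# Toy sketch of run/channel qualification; event fields and bad-channel list are invented.
BAD_CHANNELS = {1102, 1205}          # channels removed from consideration (example values)

def qualify(events):
    """Keep events from good channels that are not tagged as instrumental background."""
    good = []
    for ev in events:
        if ev["channel"] in BAD_CHANNELS:
            continue                  # problematic channel: excluded from higher-level analysis
        if ev["is_pulser"] or ev["is_noise_burst"]:
            continue                  # instrumental background: tagged for removal
        good.append(ev)
    return good

events = [
    {"channel": 1101, "is_pulser": False, "is_noise_burst": False, "energy_keV": 2039.0},
    {"channel": 1102, "is_pulser": False, "is_noise_burst": False, "energy_keV": 1460.8},
    {"channel": 1101, "is_pulser": True,  "is_noise_burst": False, "energy_keV": 500.0},
]
print(len(qualify(events)))          # -> 1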
Supervised training of an automated medical image analysis system often requires a large amount of expert annotations that are hard to collect. Moreover, the proportions of data available across different classes may be highly imbalanced for rare diseases. To mitigate these issues, we investigate a novel data augmentation pipeline that selectively adds new synthetic images generated by conditional Adversarial Networks (cGANs), rather than directly extending the training set with synthetic images. The selection mechanisms that we introduce to the synthetic augmentation pipeline are motivated by the observation that, although cGAN-generated images can be visually appealing, they are not guaranteed to contain essential features for classification performance improvement. By selecting synthetic images based on the confidence of their assigned labels and their feature similarity to real labeled images, our framework provides quality assurance to synthetic augmentation by ensuring that adding the selected synthetic images to the training set will improve performance. We evaluate our model on a medical histopathology dataset, and two natural image classification benchmarks, CIFAR10 and SVHN. Results on these datasets show significant and consistent improvements in classification performance (with 6.8%, 3.9%, 1.6% higher accuracy, respectively) by leveraging cGAN generated images with selective augmentation.
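A minimal sketch of such a selection rule is given below; the confidence and cosine-similarity thresholds, and the use of the maximum similarity to any real labeled image, are illustrative assumptions rather than the paper's exact criteria.

# Select cGAN-generated images by (1) classifier confidence on the assigned label and
# (2) feature similarity to real labeled images. Thresholds are illustrative.
import numpy as np

def select_synthetic(confidence, feats_syn, feats_real, conf_thr=0.9, sim_thr=0.7):
    """Return indices of synthetic samples to add to the training set."""
    fs = feats_syn / np.linalg.norm(feats_syn, axis=1, keepdims=True)
    fr = feats_real / np.linalg.norm(feats_real, axis=1, keepdims=True)
    max_sim = (fs @ fr.T).max(axis=1)            # best cosine match to any real labeled image
    keep = (confidence >= conf_thr) & (max_sim >= sim_thr)
    return np.where(keep)[0]

# Example with random stand-ins for classifier confidences and CNN feature vectors.
rng = np.random.default_rng(0)
idx = select_synthetic(rng.uniform(size=100),
                       rng.normal(size=(100, 64)),
                       rng.normal(size=(50, 64)))
print(idx.shape)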
M. Janssen, C. Goddi, H. Falcke (2019)
Currently, HOPS and AIPS are the primary choices for the time-consuming process of (millimeter) Very Long Baseline Interferometry (VLBI) data calibration. However, for a full end-to-end pipeline, they either lack the ability to perform easily scriptable incremental calibration or do not provide full control over the workflow with the ability to manipulate and edit calibration solutions directly. The Common Astronomy Software Application (CASA) offers all these abilities, together with a secure development future and an intuitive Python interface, which is very attractive for young radio astronomers. Inspired by the recent addition of a global fringe-fitter, the capability to convert FITS-IDI files to measurement sets, and amplitude calibration routines based on ANTAB metadata, we have developed the CASA-based Radboud PIpeline for the Calibration of high Angular Resolution Data (rPICARD). The pipeline will be able to handle data from multiple arrays: EHT, GMVA, VLBA and the EVN in the first release. Polarization and phase-referencing calibration are supported and a spectral line mode will be added in the future. The large bandwidths of future radio observatories call for scalable reduction software. Within CASA, a message passing interface (MPI) implementation is used for parallelization, reducing the total time needed for processing. The most significant gain is obtained for the time-consuming fringe-fitting task, where each scan can be processed in parallel.
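A rough sketch of the CASA building blocks such a pipeline rests on is shown below; the file names, reference antenna and solution parameters are placeholders, and rPICARD itself wraps these calls in considerably more logic.

# Illustrative CASA calls underlying a VLBI calibration pipeline; all values are placeholders.
from casatasks import importfitsidi, fringefit, applycal

importfitsidi(fitsidifile=['eht_example.idifits'], vis='eht.ms')   # FITS-IDI -> measurement set

fringefit(vis='eht.ms', caltable='fringe.cal',
          solint='inf', refant='ALMA', minsnr=5)                   # global fringe fit
applycal(vis='eht.ms', gaintable=['fringe.cal'])

# Parallelization: CASA's MPI support lets scans be fringe-fitted concurrently, e.g.
#   mpicasa -n 8 casa --nogui -c pipeline_script.py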
D.L. Zhang, M. Gao, X.L. Sun (2021)
The Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) satellite consists of two small satellites. Each GECAM payload contains 25 gamma-ray detectors (GRD) and 8 charged particle detectors (CPD). The GRD is the main detector, which can detect gamma-rays and particles and localize Gamma-Ray Bursts (GRBs), while the CPD is used to help the GRD discriminate between gamma-ray bursts and charged particle bursts. The GRD makes use of a lanthanum bromide (LaBr3) crystal read out by SiPMs. As all available SiPM devices are commercial grade, quality assurance tests needed to be performed in accordance with the aerospace specifications. In this paper, we present the results of these quality assurance tests, in particular a detailed mechanism analysis of devices that failed during the development of GECAM. The paper also summarizes the application experience of commercial-grade SiPM devices in aerospace payloads and provides suggestions for forthcoming SiPM space applications.
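Purely as an illustration of what parametric screening against specification limits can look like (the parameters, limits and device records below are invented and are not the GECAM acceptance criteria):

# Toy screening of commercial-grade SiPM devices against example acceptance limits.
LIMITS = {"dark_current_uA": 2.0, "breakdown_shift_V": 0.1}   # hypothetical spec limits

def screen(devices):
    """Split devices into accepted and rejected lots based on post-test measurements."""
    accepted, rejected = [], []
    for dev in devices:
        ok = (dev["dark_current_uA"] <= LIMITS["dark_current_uA"]
              and abs(dev["breakdown_shift_V"]) <= LIMITS["breakdown_shift_V"])
        (accepted if ok else rejected).append(dev["id"])
    return accepted, rejected

devices = [
    {"id": "SiPM-001", "dark_current_uA": 1.2, "breakdown_shift_V": 0.03},
    {"id": "SiPM-002", "dark_current_uA": 3.5, "breakdown_shift_V": 0.02},  # fails dark current
]
print(screen(devices))   # -> (['SiPM-001'], ['SiPM-002'])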
Many astronomy data centres still work on filesystems. Industry has moved on; current practice in computing infrastructure is to achieve Big Data scalability using object stores rather than POSIX file systems. This presents us with opportunities for portability and reuse of software underlying processing and archive systems but it also causes problems for legacy implementations in current data centers.
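To make the contrast concrete, here is a small sketch of the access-pattern change; an S3-compatible object store accessed with boto3 is just one possible choice, and the endpoint, bucket and key are invented.

# POSIX vs object-store access to the same archive product; names are placeholders.
import boto3

# Legacy: data product addressed as a path on a shared POSIX filesystem.
with open('/archive/obs/uid_example/product.fits', 'rb') as f:
    data_posix = f.read()

# Object store: the same product addressed by bucket and key over HTTP.
s3 = boto3.client('s3', endpoint_url='https://objectstore.example.org')
obj = s3.get_object(Bucket='archive', Key='obs/uid_example/product.fits')
data_object = obj['Body'].read()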
