
A Co-analysis Framework for Exploring Multivariate Scientific Data

Added by Xiangyang He
Publication date: 2019
Language: English





In complex multivariate data sets, different features usually exhibit diverse associations with different variables, and different variables are associated within different regions. Exploring the associations between variables and voxels locally therefore becomes necessary to better understand the underlying phenomena. In this paper, we propose a co-analysis framework based on biclusters, which are pairs of subsets of variables and voxels with close scalar-value relationships, to guide the process of visually exploring multivariate data. We first automatically extract all meaningful biclusters, each of which contains only voxels with a similar scalar-value pattern over a subset of variables. These biclusters are organized according to their variable sets, and biclusters in each variable set are further grouped by a similarity metric to reduce redundancy and support diversity during visual exploration. Biclusters are visually represented in coordinated views to facilitate interactive exploration of multivariate data based on the similarity between biclusters and the correlation of scalar values with different variables. Experiments on several representative multivariate scientific data sets demonstrate the effectiveness of our framework in exploring local relationships among variables, biclusters and scalar values in the data.
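As a rough illustration of the bicluster notion described above, the following sketch (not the authors' implementation; the quantization scheme, thresholds, and exhaustive subset enumeration are simplifying assumptions) extracts pattern-based biclusters from a voxel-by-variable matrix and provides the voxel-set overlap that could be used to group redundant biclusters within a variable set.

```python
# A minimal sketch, not the authors' implementation: "similar scalar-value
# pattern" is approximated by quantizing each variable into a few bins and
# treating voxels with identical bin labels over a variable subset as one
# bicluster. Thresholds and the brute-force subset enumeration are assumptions.
from itertools import combinations
import numpy as np

def extract_biclusters(data, n_bins=4, min_vars=2, min_voxels=10):
    """data: (n_voxels, n_vars) array. Returns {variable subset: [voxel index sets]}."""
    n_voxels, n_vars = data.shape
    bins = np.empty_like(data, dtype=int)
    for v in range(n_vars):
        # Quantile-based edges make the bin labels comparable across variables.
        edges = np.quantile(data[:, v], np.linspace(0, 1, n_bins + 1)[1:-1])
        bins[:, v] = np.digitize(data[:, v], edges)

    biclusters = {}
    for size in range(min_vars, n_vars + 1):           # feasible for few variables only
        for var_subset in combinations(range(n_vars), size):
            patterns = {}
            for voxel, row in enumerate(bins[:, var_subset]):
                patterns.setdefault(tuple(row), []).append(voxel)
            groups = [set(g) for g in patterns.values() if len(g) >= min_voxels]
            if groups:
                biclusters[var_subset] = groups
    return biclusters

def voxel_set_similarity(a, b):
    """Jaccard overlap of two biclusters' voxel sets; biclusters in the same
    variable set whose overlap exceeds a threshold could be merged to reduce
    redundancy during exploration."""
    return len(a & b) / len(a | b)
```

In practice, enumerating all variable subsets is only feasible for a handful of variables; the paper's extraction step would replace this brute-force loop.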



Related research

The Italian AGILE space mission, with its Gamma-Ray Imaging Detector (GRID) instrument sensitive in the 30 MeV-50 GeV gamma-ray energy band, has been operating since 2007. Agilepy is an open-source Python package to analyse AGILE/GRID data. The package is built on top of the command-line version of the AGILE Science Tools, developed by the AGILE Team, publicly available and released by ASI/SSDC. The primary purpose of the package is to provide an easy-to-use, high-level interface for analysing AGILE/GRID data by simplifying the configuration of the tasks and ensuring straightforward access to the data. The current features are the generation and display of sky maps and light curves, access to gamma-ray source catalogues, spectral-model and position fitting, and wavelet analysis. Agilepy also includes an interface tool providing the time evolution of the AGILE off-axis viewing angle for a chosen sky region. The Flare Advocate team also uses the tool to analyse the data during the daily monitoring of the gamma-ray sky. Agilepy (and its dependencies) can be easily installed using Anaconda.
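For readers unfamiliar with the package, the following sketch outlines what an Agilepy session of the kind described above might look like. The class and method names follow the package's documented high-level interface as best as can be recalled here, but the exact signatures, argument names, and values are assumptions and should be checked against the Agilepy documentation.

```python
# A sketch of an Agilepy-style session, assuming the high-level AGAnalysis
# interface. Method and argument names below are written from memory and
# should be verified against the Agilepy documentation; the source, time
# window and coordinates are placeholder values.
from agilepy.api.AGAnalysis import AGAnalysis

# Generate a YAML configuration describing the target and time window.
AGAnalysis.getConfiguration(
    confFilePath="./agilepy_conf.yaml",
    userName="example-user",
    sourceName="VELA",
    tmin=58884.0, tmax=58886.0, timetype="MJD",
    glon=263.55, glat=-2.78,
    outputDir="./agilepy_output",
    verboselvl=1,
)

ag = AGAnalysis("./agilepy_conf.yaml")        # high-level analysis object
ag.generateMaps()                             # counts / exposure / gas sky maps
ag.loadSourcesFromCatalog("2AGL")             # gamma-ray source catalogue
ag.mle()                                      # maximum-likelihood spectral/position fit
ag.lightCurveMLE("2AGLJ0835-4514", binsize=86400)  # daily-binned light curve
ag.destroy()                                  # clean up the analysis directory
```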
Deep neural networks have been playing an essential role in many computer vision tasks, including Visual Question Answering (VQA). Until recently, the study of their accuracy was the main focus of research, but there is now a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying noise levels. In VQA, adversarial attacks can target the image and/or the proposed main question, yet there is a lack of proper analysis of the latter. In this work, we propose a flexible framework that focuses on the language part of VQA and uses semantically relevant questions, dubbed basic questions, as controllable noise to evaluate the robustness of VQA models. We hypothesize that the level of noise is positively correlated with the similarity of a basic question to the main question. Hence, to apply noise to any given main question, we rank a pool of basic questions based on their similarity by casting this ranking task as a LASSO optimization problem. Then, we propose a novel robustness measure, R_score, and two large-scale basic question datasets (BQDs) in order to standardize robustness analysis for VQA models.
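The LASSO-based ranking step can be illustrated with a short sketch: the main-question embedding is approximated as a sparse non-negative combination of basic-question embeddings, and the learned coefficients serve as similarity scores. The embedding dimension, the alpha value, and the use of scikit-learn's Lasso are illustrative assumptions, not the authors' exact formulation.

```python
# A sketch of the LASSO-based ranking step, assuming question embeddings are
# already available. Embedding size, alpha, and the use of scikit-learn's
# Lasso are illustrative choices, not the authors' exact setup.
import numpy as np
from sklearn.linear_model import Lasso

def rank_basic_questions(main_q_vec, basic_q_vecs, alpha=0.01):
    """main_q_vec: (d,) embedding of the main question.
    basic_q_vecs: (n_basic, d) embeddings of the basic-question pool.
    Returns indices of basic questions from most to least similar, plus scores."""
    # Solve min_x ||A x - b||_2^2 + alpha * ||x||_1 with the basic-question
    # embeddings as columns of A; larger coefficients mean higher similarity.
    model = Lasso(alpha=alpha, positive=True, max_iter=10000)
    model.fit(basic_q_vecs.T, main_q_vec)
    scores = model.coef_
    return np.argsort(-scores), scores

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
order, scores = rank_basic_questions(rng.normal(size=128), rng.normal(size=(50, 128)))
```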
In the multi-messenger era, astronomical projects share information about transient phenomena by issuing science alerts to the scientific community through different communication networks. This coordination is mandatory to understand the nature of these physical phenomena. For this reason, astrophysical projects rely on real-time analysis software pipelines to identify transients (e.g. GRBs) as soon as possible and to speed up the reaction time to external alerts. These pipelines can share and receive science alerts through the Gamma-ray Coordinates Network. This work presents a framework designed to simplify the development of real-time scientific analysis pipelines. The framework provides the architecture and the required automatisms to develop a real-time analysis pipeline, allowing researchers to focus more on the scientific aspects. The framework has been successfully used to develop real-time pipelines for the scientific analysis of AGILE space mission data. It is planned to reuse this framework for the Super-GRAWITA and AFISS projects. A possible future use for the Cherenkov Telescope Array (CTA) project is under evaluation.
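As a loose sketch of the kind of loop such a framework automates, the following code (purely illustrative; none of these names belong to the actual AGILE framework) shows an alert queue fed by a listener thread and a worker that triggers analysis on relevant transients.

```python
# Purely illustrative skeleton of a real-time alert-driven pipeline; the
# listener, the alert schema and the dispatch logic are all assumptions.
import queue
import threading
import time

alerts = queue.Queue()

def receive_alerts():
    """Placeholder for a GCN/VOEvent listener; here it fakes a single alert."""
    time.sleep(1.0)
    alerts.put({"id": "GRB_000000", "type": "GRB", "ra": 10.0, "dec": -20.0})

def analyse(alert):
    """Placeholder for the science analysis triggered by one alert."""
    print(f"analysing {alert['id']} at RA={alert['ra']}, Dec={alert['dec']}")

def run_pipeline():
    threading.Thread(target=receive_alerts, daemon=True).start()
    while True:
        alert = alerts.get()              # block until a new science alert arrives
        if alert.get("type") == "GRB":    # react only to relevant transients
            analyse(alert)

if __name__ == "__main__":
    run_pipeline()                        # runs until interrupted
```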
In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient of most analyses. The multivariate classifiers themselves have also evolved significantly in recent years, and statisticians have found new ways to tune and combine classifiers to gain further performance. Integrated into the analysis framework ROOT, TMVA is a toolkit that hosts a large variety of multivariate classification algorithms. Training, testing, performance evaluation and application of all available classifiers are carried out simultaneously via user-friendly interfaces. With version 4, TMVA has been extended to multivariate regression of a real-valued target vector. Regression is invoked through the same user interfaces as classification. TMVA 4 also features more flexible data handling, allowing one to arbitrarily form combined MVA methods. A generalised boosting method is the first realisation benefiting from the new framework.
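A minimal sketch of invoking TMVA regression from Python is shown below. It uses the DataLoader interface of current ROOT releases (in TMVA 4 itself the variables and target were registered directly on the Factory, but the train/test/evaluate calls are the same); file names, tree names, and option strings are placeholders.

```python
# Minimal TMVA regression sketch via PyROOT; input file and tree names are
# placeholders, and option strings are examples rather than tuned settings.
import ROOT

out = ROOT.TFile("TMVAReg.root", "RECREATE")
factory = ROOT.TMVA.Factory("TMVARegression", out,
                            "!V:!Silent:AnalysisType=Regression")
loader = ROOT.TMVA.DataLoader("dataset")
loader.AddVariable("var1", "F")
loader.AddVariable("var2", "F")
loader.AddTarget("target")                         # real-valued regression target

infile = ROOT.TFile.Open("input.root")             # placeholder input file
loader.AddRegressionTree(infile.Get("TreeR"), 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""), "SplitMode=Random:NormMode=NumEvents")

# Book any number of MVA methods; they are trained, tested and compared in one pass.
factory.BookMethod(loader, ROOT.TMVA.Types.kBDT, "BDTG",
                   "!H:!V:NTrees=500:BoostType=Grad")
factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out.Close()
```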
Jinyang Liu, Sheng Di, Kai Zhao (2021)
Error-bounded lossy compression is becoming an indispensable technique for the success of today's scientific projects with vast volumes of data produced during simulations or instrument data acquisitions. Not only can it significantly reduce data size, but it can also control the compression errors based on user-specified error bounds. Autoencoder (AE) models have been widely used in image compression, but few AE-based compression approaches support error-bounding features, which are highly required by scientific applications. To address this issue, we explore using convolutional autoencoders to improve error-bounded lossy compression for scientific data, with the following three key contributions. (1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework in terms of the SZ model. (2) We optimize the compression quality for the main stages in our designed AE-based error-bounded compression framework, fine-tuning the block sizes and latent sizes and also optimizing the compression efficiency of latent vectors. (3) We evaluate our proposed solution using five real-world scientific datasets and compare it with six other related works. Experiments show that our solution exhibits very competitive compression quality among all the compressors in our tests. In absolute terms, it can obtain a much better compression quality (100% ~ 800% improvement in compression ratio with the same data distortion) compared with SZ2.1 and ZFP in cases with a high compression ratio.
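The error-bounding idea can be sketched independently of any particular network: the autoencoder provides a prediction, and the residuals are uniformly quantized with a step of twice the error bound, so the reconstruction never deviates from the original by more than the bound. The toy encoder/decoder below merely stands in for the trained convolutional autoencoder and is not the paper's SZ-integrated implementation.

```python
# Sketch of error-bounded residual quantization around an autoencoder
# prediction. encode/decode are toy stand-ins for a trained convolutional
# autoencoder; block size and the tiny "latent" are illustrative assumptions.
import numpy as np

def encode(block):
    """Stand-in for the convolutional encoder: returns a tiny 'latent'."""
    return np.array([block.mean()], dtype=np.float32)

def decode(latent, shape):
    """Stand-in for the convolutional decoder."""
    return np.full(shape, latent[0], dtype=np.float32)

def compress_block(block, eb):
    latent = encode(block)
    pred = decode(latent, block.shape)
    # Quantize residuals with step 2*eb so the reconstruction error is <= eb.
    codes = np.round((block - pred) / (2 * eb)).astype(np.int32)
    return latent, codes          # both would be entropy-coded in practice

def decompress_block(latent, codes, shape, eb):
    return decode(latent, shape) + codes * (2 * eb)

data = np.random.rand(16, 16).astype(np.float32)
latent, codes = compress_block(data, eb=1e-2)
recon = decompress_block(latent, codes, data.shape, eb=1e-2)
assert np.abs(recon - data).max() <= 1e-2 + 1e-6   # error bound holds
```

Fine-tuning the block size (16x16 here) and the latent size are exactly the knobs the abstract mentions; in a real codec the latent vectors and quantization codes would additionally be entropy-coded.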