
Applications of Machine Learning Algorithms In Processing Terahertz Spectroscopic Data

Posted by Youngmin Seo
Publication date: 2020
Research field: Physics
Paper language: English





We present the data reduction software and the distribution of Level 1 and Level 2 products of the Stratospheric Terahertz Observatory 2 (STO2). STO2, a balloon-borne terahertz telescope, surveyed star-forming regions and the Galactic plane and produced approximately 300,000 spectra. The data are largely similar to spectra typically produced by single-dish radio telescopes. However, a fraction of the data contained rapidly varying fringe/baseline features and drift noise, which could not be adequately corrected using conventional data reduction software. To process the entire science data set of the STO2 mission, we adopted a new method to find proper off-source spectra to reduce large-amplitude fringes, together with new algorithms including Asymmetric Least Squares (ALS), Independent Component Analysis (ICA), and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The STO2 data reduction software efficiently reduced the amplitude of the fringes from a few hundred K to about 10 K and produced baselines with amplitudes down to a few K. The Level 1 products typically have noise of a few K in [CII] spectra and ~1 K in [NII] spectra. Using a regridding algorithm with a Bessel-Gaussian kernel, we produced spectral maps of the star-forming regions and of the Galactic plane survey. Level 1 and 2 products are available to the astronomical community through the STO2 data server and the DataVerse. The software is also accessible to the public through GitHub. The detailed addresses are given in Section 4 of the paper, which covers data distribution.
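The abstract names Asymmetric Least Squares (ALS), ICA, and DBSCAN but does not reproduce the pipeline itself, so the following Python sketch is only a rough, non-authoritative illustration of an ALS-style baseline fit in the standard Eilers-Boelens formulation; the function name als_baseline and the values of lam and p are placeholders of my own, not parameters taken from the STO2 software.

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e5, p=0.01, niter=10):
        # Asymmetric Least Squares baseline: lam sets the smoothness of the
        # estimate, p the asymmetry; p << 0.5 keeps the baseline below
        # emission features such as [CII] and [NII] lines.
        L = len(y)
        D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
        w = np.ones(L)
        for _ in range(niter):
            W = sparse.diags(w)
            Z = (W + lam * D @ D.T).tocsc()
            z = spsolve(Z, w * y)                  # weighted, penalized fit
            w = p * (y > z) + (1 - p) * (y < z)    # down-weight points above
        return z

    # Usage on a single spectrum (1-D array of brightness temperatures):
    # corrected = spectrum - als_baseline(spectrum)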




Read also

The sensitivity of searches for astrophysical transients in data from the LIGO detectors is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate that accidental coincidence across multiple detectors is non-negligible. Furthermore, non-Gaussian noise artifacts typically dominate over the background contributed by stationary noise. These glitches can easily be confused with transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply Machine Learning Algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. The number of auxiliary-channel parameters describing these disturbances may also be extremely large, an area where MLAs are particularly well suited. We demonstrate the feasibility and applicability of three very different MLAs: Artificial Neural Networks, Support Vector Machines, and Random Forests. These classifiers identify and remove a substantial fraction of the glitches present in two very different data sets: four weeks of LIGO's fourth science run and one week of LIGO's sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth science run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar limiting performance, suggesting that most of the useful information currently contained in the auxiliary-channel parameters we extract is already being used.
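As a non-authoritative sketch of the kind of classifier described above (not the authors' actual pipeline), the snippet below trains a random forest on a synthetic stand-in for auxiliary-channel features; the feature matrix, labels, and hyperparameters are all placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Placeholder data: one row per candidate event, one column per
    # auxiliary-channel parameter (significance, frequency, duration, ...).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 50))
    y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # 1 = glitch, 0 = clean

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # A continuous score lets a likelihood-ratio-like threshold be chosen
    # when vetoing candidate events.
    scores = clf.predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, scores))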
Two-dimensional electronic spectroscopy has become one of the main experimental tools for analyzing the dynamics of excitonic energy transfer in large molecular complexes. Simplified theoretical models are usually employed to extract model parameters from the experimental spectral data. Here we show that computationally expensive but exact theoretical methods encoded into a neural network can be used to extract model parameters and infer structural information such as dipole orientation from two-dimensional electronic spectra (2DES) or, conversely, to produce 2DES from model parameters. We propose to use machine learning as a tool to predict unknown parameters in the models underlying recorded spectra and as a way to encode computationally expensive numerical methods into efficient prediction tools. We showcase the use of a trained neural network to efficiently compute disorder-averaged spectra and demonstrate that disorder averaging has non-trivial effects for polarization-controlled 2DES.
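A minimal sketch of the surrogate idea, assuming a generic multi-layer perceptron rather than the authors' actual network; the arrays params and spectra below are synthetic stand-ins for training data that would really come from the exact but expensive simulations.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy mapping from a few "model parameters" (couplings, dipole angles, ...)
    # to a discretized 64-point spectrum.
    rng = np.random.default_rng(1)
    params = rng.uniform(-1, 1, size=(2000, 4))
    spectra = np.sin(params @ rng.normal(size=(4, 64)))

    surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000)
    surrogate.fit(params, spectra)

    # Once trained, disorder averaging reduces to many cheap forward passes.
    draws = params[0] + 0.05 * rng.normal(size=(500, 4))   # disorder samples
    avg_spectrum = surrogate.predict(draws).mean(axis=0)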
Submodularity is a discrete-domain functional property that can be interpreted as mimicking the role of the well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners, as a variety of discrete optimization problems are encountered in a wide range of applications. Conventionally, two general approaches exist to solve discrete problems: $(i)$ relaxation into the continuous domain to obtain an approximate solution, or $(ii)$ development of a tailored algorithm that applies directly in the discrete domain. In both approaches, worst-case performance guarantees are often hard to establish. Furthermore, they are often complex and thus not practical for large-scale problems. In this paper, we show how certain scenarios lend themselves to exploiting submodularity so as to construct scalable solutions with provable worst-case performance guarantees. We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity, which enables efficient optimization. With a mixture of theory and practice, we present different flavors of submodularity accompanied by illustrative real-world case studies from modern SP and ML. In all cases, optimization algorithms are presented, along with hints on how optimality guarantees can be established.
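As a concrete, if simplified, instance of the efficiency-with-guarantees theme, the sketch below implements the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which attains a (1 - 1/e) approximation; the coverage function used here is a toy example of my own, not one of the paper's case studies.

    def greedy_max(ground_set, f, k):
        # Greedy maximization of a monotone submodular set function f
        # subject to |S| <= k; guarantees f(S) >= (1 - 1/e) * OPT.
        S = set()
        for _ in range(k):
            best, best_gain = None, 0.0
            for e in ground_set - S:
                gain = f(S | {e}) - f(S)      # marginal gain of adding e
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:                  # no element improves f
                break
            S.add(best)
        return S

    # Toy coverage function: how many items the chosen sets cover.
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
    f = lambda S: len(set().union(*(sets[s] for s in S))) if S else 0
    print(greedy_max(set(sets), f, 2))        # e.g. {'a', 'c'}, covering all 6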
We investigate star-galaxy classification for astronomical surveys in the context of four methods enabling the interpretation of black-box machine learning systems. The first is outputting and exploring the decision boundaries given by decision-tree-based methods, which enables the visualization of the classification categories. Secondly, we investigate how the Mutual Information based Transductive Feature Selection (MINT) algorithm can be used to perform feature pre-selection. If one would like to provide only a small number of input features to a machine learning classification algorithm, feature pre-selection provides a method to determine which of the many possible input properties should be selected. Third is the use of the tree-interpreter package to enable popular decision-tree-based ensemble methods to be opened, visualized, and understood. This is done by additional analysis of the tree-based model, determining not only which features are important to the model, but how important a feature is for a particular classification given its value. Lastly, we use decision boundaries from the model to revise an already existing method of classification, essentially asking the tree-based method where decision boundaries are best placed and defining a new classification method. We showcase these techniques by applying them to the problem of star-galaxy separation using data from the Sloan Digital Sky Survey (hereafter SDSS). We use the output of MINT and the ensemble methods to demonstrate how more complex decision boundaries improve star-galaxy classification accuracy over the standard SDSS frames approach (reducing misclassifications by up to $\approx 33\%$). We then show how tree-interpreter can be used to explore how relevant each photometric feature is when making a classification on an object-by-object basis.
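A rough illustration of the interpretation step rather than the paper's actual analysis: the snippet fits a random forest to placeholder photometric features, inspects global feature importances, and then queries per-object contributions through the treeinterpreter package's ti.predict interface (the toy features and labels are my own).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from treeinterpreter import treeinterpreter as ti

    # Placeholder photometric features (e.g. psf - model magnitudes per band);
    # a real analysis would use SDSS measurements instead of random numbers.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = star, 0 = galaxy (toy)

    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Global view: which features the forest relies on overall.
    print(rf.feature_importances_)

    # Per-object view: why one particular object got its classification.
    pred, bias, contributions = ti.predict(rf, X[:1])
    print(pred, bias, contributions[0])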
Machine learning is driving development across many fields in science and engineering. A simple and efficient programming language could accelerate applications of machine learning in various fields. Currently, the programming languages most commonly used to develop machine learning algorithms include Python, MATLAB, and C/C++. However, none of these languages balances efficiency and simplicity well. The Julia language is a fast, easy-to-use, and open-source programming language that was originally designed for high-performance computing and balances efficiency and simplicity well. This paper summarizes the related research work and developments in the application of the Julia language in machine learning. It first surveys the popular machine learning algorithms that have been developed in the Julia language. Then, it investigates applications of the machine learning algorithms implemented with the Julia language. Finally, it discusses the open issues and potential future directions that arise in the use of the Julia language in machine learning.