
Submodularity in Action: From Machine Learning to Signal Processing Applications

Posted by: Dr. Ehsan Tohidi
Publication date: 2020
Research field: Electronic engineering
Paper language: English





Submodularity is a discrete domain functional property that can be interpreted as mimicking the role of the well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit a strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely, efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners, as a variety of discrete optimization problems are encountered in a wide range of applications. Conventionally, two general approaches exist to solve discrete problems: $(i)$ relaxation into the continuous domain to obtain an approximate solution, or $(ii)$ development of a tailored algorithm that applies directly in the discrete domain. In both approaches, worst-case performance guarantees are often hard to establish. Furthermore, the resulting algorithms are often complex and thus not practical for large-scale problems. In this paper, we show how certain scenarios lend themselves to exploiting submodularity so as to construct scalable solutions with provable worst-case performance guarantees. We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity, which enables efficient optimization. With a mixture of theory and practice, we present different flavors of submodularity accompanied by illustrative real-world case studies from modern SP and ML. In all cases, optimization algorithms are presented, along with hints on how optimality guarantees can be established.
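
As a concrete illustration of the efficiency and provable bounds discussed above, here is a minimal sketch (not taken from the paper) of the classical greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which enjoys the well-known (1 - 1/e) approximation guarantee. The coverage objective and set names below are illustrative only.

```python
def greedy_max(f, ground_set, k):
    """Greedy maximization of a monotone submodular set function f
    under a cardinality constraint |S| <= k.
    For such f, the greedy solution satisfies f(S) >= (1 - 1/e) * f(OPT)."""
    S = set()
    for _ in range(k):
        # Marginal gain of each remaining element given the current selection
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Example: a coverage function, a classic monotone submodular objective.
universe_sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 7},
}

def coverage(S):
    return len(set().union(*(universe_sets[e] for e in S))) if S else 0

print(greedy_max(coverage, set(universe_sets), k=2))  # e.g. {'c', 'a'}
```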




Read also

In the quest to realize a comprehensive EEG signal processing framework, in this paper, we demonstrate a toolbox and graphic user interface, EEGsig, for the full process of EEG signals. Our goal is to provide a comprehensive, free, and open-source suite for EEG signal processing where users, especially physicians who do not have programming experience, can focus on their practical requirements to speed up medical projects. Developed in MATLAB, EEGsig aggregates all three EEG signal processing steps: preprocessing, feature extraction, and classification. In addition to a varied list of useful features, EEGsig implements three popular classification algorithms (K-NN, SVM, and ANN) to assess the performance of the features. Our experimental results demonstrate that our novel framework for EEG signal processing attained excellent classification results and feature extraction robustness under different machine learning classifier algorithms. Besides, in EEGsig, for selecting the best extracted feature, all EEG signal channels can be visible simultaneously; thus, the effect of each task on the signal can be seen. We believe that our user-centered MATLAB package is an encouraging platform for novice users while also offering the highest level of control to expert users.
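
As a rough sketch of the classification step described above (EEGsig itself is a MATLAB package, so this Python/scikit-learn snippet only mirrors the workflow; the feature matrix and labels here are synthetic stand-ins):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Stand-in feature matrix: rows = EEG epochs, columns = extracted features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic two-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The three classifiers EEGsig exposes: K-NN, SVM, and ANN
classifiers = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))   # test accuracy per classifier
```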
Based on the BioBricks standard, restriction synthesis is a novel catabolic iterative DNA synthesis method that utilizes endonucleases to synthesize a query sequence from a reference sequence. In this work, the reference sequence is built from shorter subsequences by classifying them as applicable or inapplicable for the synthesis method using three different machine learning methods: Support Vector Machines (SVMs), random forest, and Convolutional Neural Networks (CNNs). Before applying these methods to the data, a series of feature selection, curation, and reduction steps are applied to create an accurate and representative feature space. Following these preprocessing steps, three different pipelines are proposed to classify subsequences based on their nucleotide sequence and other relevant features corresponding to the restriction sites of over 200 endonucleases. The sensitivities using SVMs, random forest, and CNNs are 94.9%, 92.7%, and 91.4%, respectively. Moreover, each method scores lower in specificity, with SVMs, random forest, and CNNs achieving 77.4%, 85.7%, and 82.4%, respectively. In addition to analyzing these results, the misclassifications in SVMs and CNNs are investigated. Across these two models, different features with a derived nucleotide specificity visually contribute more to classification compared to other features. This observation is an important factor when considering new nucleotide sensitivity features for future studies.
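
For reference, the sensitivity and specificity figures reported above follow the standard confusion-matrix definitions; a small hedged sketch with hypothetical labels (not the paper's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # 1 = applicable subsequence
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])  # hypothetical classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate, as reported for SVM/RF/CNN
specificity = tn / (tn + fp)   # true-negative rate, lower for all three models
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```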
Deep learning, particularly convolutional neural networks (CNNs), has yielded rapid, significant improvements in computer vision and related domains. But conventional deep learning architectures perform poorly when data have an underlying graph structure, as in social, biological, and many other domains. This paper explores (1) how graph signal processing (GSP) can be used to extend CNN components to graphs in order to improve model performance; and (2) how to design the graph CNN architecture based on the topology or structure of the data graph.
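
A minimal sketch of the GSP building block behind such graph CNN components: a polynomial graph filter h(L) = Σ_k θ_k L^k, which mixes only k-hop neighborhoods and plays the role of a convolution on a graph. The adjacency matrix, signal, and coefficients below are illustrative, not from the paper.

```python
import numpy as np

# Tiny 4-node graph and its combinatorial Laplacian
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

theta = [0.5, -0.2, 0.05]                # filter coefficients (hypothetical values)
x = np.array([1.0, 0.0, 2.0, -1.0])      # graph signal, one value per node

# Apply y = theta_0 * x + theta_1 * L x + theta_2 * L^2 x
y = np.zeros_like(x)
Lk_x = x.copy()
for k, t in enumerate(theta):
    if k > 0:
        Lk_x = L @ Lk_x                  # L^k x computed iteratively
    y += t * Lk_x
print(y)
```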
Songyang Zhang, Qinwen Deng, 2021
This work introduces a tensor-based framework of graph signal processing over multilayer networks (M-GSP) to analyze high-dimensional signal interactions. Following Part I's introduction of fundamental definitions and spectrum properties of M-GSP, this second part focuses on more detailed discussions of the implementation and applications of M-GSP. Specifically, we define the concepts of stationary processes, convolution, bandlimited signals, and sampling theory over multilayer networks. We also develop fundamentals of filter design and derive approximate methods of spectrum estimation within the proposed framework. For practical applications, we further present several MLN-based methods for signal processing and data analysis. Our experimental results demonstrate significant performance improvement using our M-GSP framework over traditional signal processing solutions.
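
The multilayer, tensor-based machinery of M-GSP is beyond a short snippet, but the sampling idea it generalizes can be sketched on an ordinary single-layer graph: a K-bandlimited signal (one lying in the span of the first K Laplacian eigenvectors) can be recovered from a subset of vertices whenever the sampled rows of the low-frequency basis have full column rank. The graph and signal below are synthetic and only illustrate this simpler single-layer case.

```python
import numpy as np

# Build a small random graph and its Laplacian
rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: Laplacian eigenvectors (ascending frequency)
evals, U = np.linalg.eigh(L)

# A K-bandlimited signal lives in the span of the first K eigenvectors
K = 5
x = U[:, :K] @ rng.normal(size=K)

# Sample a few vertices and recover by least squares on the low-frequency basis
sample = rng.choice(n, size=8, replace=False)
coeffs, *_ = np.linalg.lstsq(U[sample, :K], x[sample], rcond=None)
x_rec = U[:, :K] @ coeffs

# Recovery is exact when U[sample, :K] has full column rank
print(np.allclose(x, x_rec))
```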
We present the data reduction software and the distribution of Level 1 and Level 2 products of the Stratospheric Terahertz Observatory 2 (STO2). STO2, a balloon-borne terahertz telescope, surveyed star-forming regions and the Galactic plane and produced approximately 300,000 spectra. The data are largely similar to spectra typically produced by single-dish radio telescopes. However, a fraction of the data contained rapidly varying fringe/baseline features and drift noise, which could not be adequately corrected using conventional data reduction software. To process the entire science data set of the STO2 mission, we have adopted a new method to find proper off-source spectra to reduce large-amplitude fringes, together with new algorithms including Asymmetric Least Squares (ALS), Independent Component Analysis (ICA), and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The STO2 data reduction software efficiently reduced the amplitude of fringes from a few hundred K down to about 10 K and resulted in baselines with amplitudes down to a few K. The Level 1 products typically have noise of a few K in [CII] spectra and ~1 K in [NII] spectra. Using a regridding algorithm employing a Bessel-Gaussian kernel, we made spectral maps of the star-forming regions and the Galactic plane survey. Level 1 and 2 products are available to the astronomical community through the STO2 data server and the DataVerse. The software is also accessible to the public through Github. The detailed addresses are given in Section 4 of the paper on data distribution.
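
The Asymmetric Least Squares (ALS) baseline-correction step named above follows the standard Eilers-Boelens recipe; a hedged sketch is shown below (the smoothing parameter lam and asymmetry p are illustrative values, not those used in the STO2 pipeline):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric Least Squares baseline estimation (Eilers & Boelens).
    lam controls smoothness of the baseline, p the asymmetry:
    points above the baseline (signal) get weight p, points below get 1 - p."""
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        Z = W + lam * D.dot(D.transpose())
        z = spsolve(Z.tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Usage: subtract the estimated baseline from a spectrum
spectrum = np.sin(np.linspace(0, 3, 500)) + 0.002 * np.arange(500)  # toy data
corrected = spectrum - als_baseline(spectrum)
```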