
SigViewer: Visualizing Multimodal Signals Stored in XDF (Extensible Data Format) Files

Published by: Yida Lin
Publication date: 2017
Research field: Informatics Engineering
Research language: English





Multimodal biosignal acquisition is facilitated by recently introduced software solutions such as the Lab Streaming Layer (LSL) and its associated data format XDF (Extensible Data Format). However, there are no stand-alone applications that can visualize multimodal time series stored in XDF files. We extended SigViewer, an open-source, cross-platform Qt C++ application, with the capability to load, resample, annotate, and visualize signals stored in XDF files. We successfully applied the tool for post-hoc visual verification of the accuracy of a system that predicts the phase of alpha oscillations in the electroencephalogram in real time.
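SigViewer itself is a Qt C++ application, but the layered structure of an XDF recording that it renders can be illustrated with a short Python sketch using the pyxdf library, a commonly used reader for XDF files. The file name "recording.xdf", and the choice to plot only the first channel of each numeric stream, are illustrative assumptions and not part of the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
import pyxdf

# "recording.xdf" is a hypothetical file name standing in for an LSL recording.
streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    name = stream["info"]["name"][0]
    srate = float(stream["info"]["nominal_srate"][0])
    n_samples = len(stream["time_stamps"])
    print(f"stream '{name}': {n_samples} samples, nominal rate {srate:g} Hz")

# Plot the first channel of every numeric stream against its LSL time stamps,
# roughly what a multimodal viewer draws for each signal.
numeric = [s for s in streams
           if np.issubdtype(np.asarray(s["time_series"]).dtype, np.number)]
if numeric:
    fig, axes = plt.subplots(len(numeric), 1, sharex=True, squeeze=False)
    for ax, s in zip(axes[:, 0], numeric):
        data = np.asarray(s["time_series"])
        ax.plot(s["time_stamps"], data[:, 0])
        ax.set_ylabel(s["info"]["name"][0])
    axes[-1, 0].set_xlabel("LSL time (s)")
    plt.show()
```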


Read also

A standard file format is proposed to store process and event information, primarily output from parton-level event generators for further use by general-purpose ones. The information content is identical with what was already defined by the Les Houches Accord five years ago, but then in terms of Fortran common blocks. This information is embedded in a minimal XML-style structure, for clarity and to simplify parsing.
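As a rough illustration of how such an XML-style event structure can be consumed, the sketch below parses a file with the tag names used by the published Les Houches Event File convention (LesHouchesEvents, init, event). The file name "events.lhe" and the decision to peek only at the first few events are assumptions for illustration, not details from the abstract.

```python
import xml.etree.ElementTree as ET

tree = ET.parse("events.lhe")      # hypothetical file name
root = tree.getroot()              # <LesHouchesEvents version="...">

# Run-level information lives in the <init> block as plain text lines.
init_lines = root.find("init").text.strip().splitlines()
print("run-level (init) lines:", len(init_lines))

for i, event in enumerate(root.iter("event")):
    lines = event.text.strip().splitlines()
    # First line carries the event-level counters; the first entry is the
    # number of particles, and each remaining line describes one particle.
    n_particles = int(lines[0].split()[0])
    print(f"event {i}: {n_particles} particles")
    if i >= 2:                     # only peek at the first few events
        break
```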
In order to allow different software applications, in constant evolution, to interact and exchange data, flexible file formats are needed. A file format specification for different types of content has been elaborated to allow communication of data for the software developed within the European Network of Excellence NANOQUANTA, focusing on first-principles calculations of materials and nanosystems. It might be used by other software as well, and is described here in detail. The format relies on the NetCDF binary input/output library, already used in many different scientific communities, that provides flexibility as well as portability across languages and platforms. Thanks to NetCDF, the content can be accessed by keywords, ensuring the file format is extensible and backward compatible.
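The keyword-based access that NetCDF provides can be illustrated with a minimal Python sketch using the netCDF4 library. The file name "crystal.nc" and the variable name "primitive_vectors" are illustrative assumptions, not necessarily the names defined by the NANOQUANTA specification.

```python
from netCDF4 import Dataset

with Dataset("crystal.nc", "r") as nc:       # hypothetical file
    print(list(nc.dimensions))               # dimensions are named, not positional
    print(list(nc.variables))                # every array is reachable by keyword
    # Readers simply ignore variables they do not know, which is what makes
    # a NetCDF-based format extensible and backward compatible.
    if "primitive_vectors" in nc.variables:
        cell = nc.variables["primitive_vectors"][:]
        print("unit cell vectors:\n", cell)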
The combination of low-rank tensor techniques and Fast Fourier Transform (FFT) based methods has proven effective in accelerating statistical operations such as Kriging, computing conditional covariance, geostatistical optimal design, and others. However, approximating a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. It is shown that the computational complexity of Kriging is thereby reduced to $\mathcal{O}(d\,r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions, the TT rank $r$ remains stable as $n$ and $d$ increase. The advantages of this approach over plain FFT are demonstrated on synthetic and real data examples.
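The sketch below illustrates only the FFT ingredient of such methods: multiplying a stationary covariance matrix by a vector in O(n log n) via circulant embedding on a regular 1-D grid. The tensor-train compression described in the abstract is not shown, and the grid size and squared-exponential covariance are illustrative choices.

```python
import numpy as np

n = 512                                    # mode size of the estimation grid
x = np.arange(n) / n

def cov(h):
    """Squared-exponential covariance; an illustrative choice."""
    return np.exp(-(h / 0.1) ** 2)

# First column of the symmetric Toeplitz covariance matrix on the regular grid,
# embedded in a circulant matrix of size 2n - 2 so that products reduce to FFTs.
c = cov(np.abs(x - x[0]))
circ = np.concatenate([c, c[-2:0:-1]])
eig = np.fft.fft(circ)                     # eigenvalues of the circulant embedding

def cov_matvec(v):
    """Return C @ v in O(n log n) without forming the dense n-by-n matrix C."""
    vpad = np.concatenate([v, np.zeros(len(circ) - n)])
    return np.real(np.fft.ifft(eig * np.fft.fft(vpad)))[:n]

v = np.random.default_rng(0).standard_normal(n)
dense = cov(np.abs(x[:, None] - x[None, :])) @ v
print(np.allclose(cov_matvec(v), dense))   # True: FFT product matches the dense one
```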
J.-C. Liou, F.-G. Tseng (2008)
Enhancement of the number and array density of nozzles within an inkjet head chip is one of the keys to raising printing speed and printing resolution. However, the traditional 2D architecture of driving circuits cannot meet the requirements for high scanning speed and a low number of data-access points when the nozzle count exceeds 1000. This paper proposes a novel architecture of high-selection-speed three-dimensional data registration for inkjet applications. With this configuration, the number of data-access points as well as the number of scanning lines can be greatly reduced for large-array inkjet printheads with more than 1000 nozzles. The IC (Integrated Circuit) architecture involves three-dimensional multiplexing with a gating transistor for each ink-firing resistor, where ink-firing resistors are triggered only by the selection of their associated gating transistors. Three signals, selection (S), address (A), and power supply (P), are employed together to activate a nozzle for droplet ejection. The smart printhead controller has been designed in a 0.35 um CMOS process with a total circuit area of 2500 x 500 um2, which is 80% of the circuit area required by a 2D configuration for 1000 nozzles. Experimental results demonstrate the functionality of the fabricated IC in operation and signal transmission, and its potential to control more than 1000 nozzles with only 31 data-access points and a 30% reduction in scanning time.
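The benefit of three-way (S, A, P) multiplexing can be illustrated with a small arithmetic sketch. The 10/10/11 split of the 31 access points below is an assumption used only to show that 31 lines can address more than 1000 nozzles; the abstract does not state the exact partition.

```python
def nozzles_2d(lines):
    # Row/column (2D) addressing: split the lines between rows and columns.
    half = lines // 2
    return half * (lines - half)

def nozzles_3d(s, a, p):
    # A nozzle fires only when its selection, address, and power lines are all
    # active, so the capacity is the product of the three group sizes.
    return s * a * p

print(nozzles_3d(10, 10, 11))   # 1100 nozzles from 10 + 10 + 11 = 31 lines
print(nozzles_2d(31))           # only 240 nozzles from the same 31 lines in 2D
# A 2D scheme would need roughly 64 lines (32 rows x 32 columns) for ~1000 nozzles.
```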
Multimodal sentiment analysis aims to recognize people's attitudes from multiple communication channels such as verbal content (i.e., text), voice, and facial expressions. It has become a vibrant and important research topic in natural language processing. Much research focuses on modeling the complex intra- and inter-modal interactions between different communication channels. However, current multimodal models with strong performance are often deep-learning-based techniques and work like black boxes. It is not clear how models utilize multimodal information for sentiment predictions. Despite recent advances in techniques for enhancing the explainability of machine learning models, they often target unimodal scenarios (e.g., images, sentences), and little research has been done on explaining multimodal models. In this paper, we present an interactive visual analytics system, M2Lens, to visualize and explain multimodal models for sentiment analysis. M2Lens provides explanations of intra- and inter-modal interactions at the global, subset, and local levels. Specifically, it summarizes the influence of three typical interaction types (i.e., dominance, complement, and conflict) on the model predictions. Moreover, M2Lens identifies frequent and influential multimodal features and supports the multi-faceted exploration of model behaviors from the language, acoustic, and visual modalities. Through two case studies and expert interviews, we demonstrate that our system can help users gain deep insights into multimodal models for sentiment analysis.