
STFT-LDA: An Algorithm to Facilitate the Visual Analysis of Building Seismic Responses

Added by Zhenge Zhao
Publication date: 2021
Language: English





Civil engineers use numerical simulations of a building's responses to seismic forces to understand the nature of building failures, the limitations of building codes, and how to refine the latter to prevent the former. Such simulations generate large ensembles of multivariate, multi-attribute time series. Comprehensive understanding of this data requires techniques that support the multivariate nature of the time series and can compare behaviors that are both periodic and non-periodic, across multiple time scales and across multiple time series. In this paper, we present a novel technique to extract such patterns from time series generated by simulations of seismic responses. The core of our approach is topic modeling, where topics correspond to interpretable and discriminative features of the earthquakes. We transform the raw time series data into a time series of topics, and use this visual summary to compare temporal patterns in earthquakes, query earthquakes via the topics across arbitrary time scales, and enable details on demand by linking the topic visualization with the original earthquake data. We show, through a surrogate task and an expert study, that this technique allows analysts to more easily identify recurring patterns in such time series. By integrating this technique into a prototype system, we show how it enables novel forms of visual interaction.
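The pipeline suggested by the name STFT-LDA — short-time Fourier transform features quantized into a "vocabulary" of spectral words, then a topic model over the resulting per-window documents — can be sketched roughly as below. This is a minimal illustration on a synthetic signal, not the authors' implementation; the quantization scheme, window size, and topic count are all assumptions.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic stand-in for a simulated seismic response:
# two sinusoids plus noise, sampled at 100 Hz for 20 s.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
signal = (np.sin(2 * np.pi * 2 * t)
          + 0.5 * np.sin(2 * np.pi * 8 * t)
          + 0.1 * rng.standard_normal(t.size))

# 1. STFT: magnitude spectrogram (frequency bins x time windows).
f, win_times, Z = stft(signal, fs=fs, nperseg=128)
S = np.abs(Z)

# 2. Quantize each window's spectrum into word counts: treat each
#    frequency bin as a "word" and its scaled magnitude as the count.
counts = np.rint(S / S.max() * 10).astype(int).T   # (windows, vocab)

# 3. Topic model: each window becomes a mixture over spectral "topics".
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_series = lda.fit_transform(counts)           # (windows, topics)

# Each row is a distribution over topics: the "time series of topics"
# that can be visualized and compared across earthquakes.
print(topic_series.shape)
```

The key design point is that the topic mixtures form a compact, interpretable summary that can be compared across time scales and across earthquake simulations, while the linked original signal remains available for details on demand.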



Related Research

Annotations in Visual Analytics (VA) have become a common means to support analysis by integrating additional information into the VA system. That additional information often depends on the current step of the visual analysis process. For example, the data preprocessing step involves data structuring operations, while the data exploration step focuses on user interaction and input. Describing suitable annotations to meet the goals of the different steps is challenging. To tackle this issue, we identify individual annotations for each step and outline their gathering and design properties for the visual analysis of heterogeneous clinical data. We integrate our annotation design into a visual analysis tool to show its applicability to data from the ophthalmic domain. In interviews and application sessions with experts, we assess its usefulness for the analysis of patients with different medications.
The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis. To reveal detailed information on algorithmic discrimination, DiscriLens identifies a collection of potentially discriminatory itemsets based on causal modeling and classification rule mining. By combining an extended Euler diagram with a matrix-based visualization, we develop a novel set visualization to facilitate the exploration and interpretation of discriminatory itemsets. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance in understanding and reducing algorithmic discrimination.
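DiscriLens's mining step is more involved (causal modeling combined with classification rule mining), but the core idea of flagging itemsets whose outcome rates diverge from the overall rate can be sketched as follows. The records, attributes, and threshold here are illustrative assumptions, not the tool's actual data or criteria.

```python
from itertools import combinations

# Toy decision records: attribute values plus a binary admission decision.
records = [
    {"gender": "F", "degree": "BSc", "admit": 1},
    {"gender": "F", "degree": "MSc", "admit": 0},
    {"gender": "M", "degree": "BSc", "admit": 1},
    {"gender": "M", "degree": "MSc", "admit": 1},
    {"gender": "F", "degree": "MSc", "admit": 0},
    {"gender": "M", "degree": "MSc", "admit": 1},
]

def rate(rows):
    """Fraction of positive decisions in a group of records."""
    return sum(r["admit"] for r in rows) / len(rows) if rows else 0.0

overall = rate(records)
attrs = ["gender", "degree"]

# Enumerate itemsets (attribute-value combinations) and flag those whose
# admission rate differs from the overall rate by more than a threshold.
flagged = []
for k in (1, 2):
    for combo in combinations(attrs, k):
        values = {tuple(r[a] for a in combo) for r in records}
        for v in values:
            subset = [r for r in records
                      if tuple(r[a] for a in combo) == v]
            diff = rate(subset) - overall
            if abs(diff) > 0.3:
                flagged.append((dict(zip(combo, v)), round(diff, 2)))

print(flagged)
```

In this toy data the itemset {gender: F, degree: MSc} has an admission rate 0.67 below the overall rate, the kind of divergence a set visualization like DiscriLens would surface for closer inspection.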
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (Congedo, 2011), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded by 16 electrodes in an experiment that took place in the GIPSA-lab, Grenoble, France, in 2012 (Van Veen, 2013 and Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.
As Internet of Things (IoT) technologies are increasingly being deployed, situations frequently arise where multiple stakeholders must reconcile preferences to control a shared resource. We perform a 5-month-long experiment dubbed smartSDH (carried out in an office space with 27 employees) where users report their preferences for the brightness of overhead lighting. smartSDH implements a modified Vickrey-Clarke-Groves (VCG) mechanism; assuming users are rational, it incentivizes truthful reporting, implements the socially desirable outcome, and compensates participants to ensure higher payoffs under smartSDH when compared with the default outside option (i.e., the option chosen in the absence of such a mechanism). smartSDH assesses the feasibility of the VCG mechanism in the context of smart building control, and we evaluated smartSDH's effect using metrics such as light level satisfaction, incentive satisfaction, and energy consumption. Although previous studies on the theoretical aspects of the mechanism indicate user satisfaction, our experiments indicate quite the contrary. We found that the participants were significantly less satisfied with light brightness and incentives determined by the VCG mechanism over time. These data suggest the need for more realistic behavioral models to design IoT technologies and highlight difficulties in estimating preferences from observable external factors such as atmospheric conditions.
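The paper's modification of VCG is not detailed in this abstract, but the textbook mechanism it builds on — choose the shared light level maximizing the sum of reported utilities, and charge each user the externality their presence imposes on the others — can be sketched as follows. The participants, levels, and utility numbers are illustrative assumptions.

```python
# Candidate brightness levels for the shared overhead lighting.
levels = [0, 50, 100]

# Reported utility of each participant for each level (illustrative).
reports = {
    "alice": {0: 0, 50: 8, 100: 4},
    "bob":   {0: 2, 50: 1, 100: 9},
    "carol": {0: 6, 50: 7, 100: 1},
}

def best(utilities_by_user, exclude=None):
    """Level maximizing total reported utility, optionally excluding a user."""
    def total(level):
        return sum(u[level] for name, u in utilities_by_user.items()
                   if name != exclude)
    return max(levels, key=total), max(total(l) for l in levels)

chosen, _ = best(reports)

# VCG payment for each user: the welfare the others could have achieved
# without that user, minus the welfare the others get at the chosen level.
payments = {}
for name in reports:
    _, welfare_without = best(reports, exclude=name)
    others_at_chosen = sum(u[chosen] for n, u in reports.items() if n != name)
    payments[name] = welfare_without - others_at_chosen

print(chosen, payments)
```

Here the mechanism picks level 50; bob pays nothing because his report does not change the outcome, while alice and carol pay exactly the utility their presence costs the others. Truthful reporting is a dominant strategy under this payment rule, which is the theoretical property the study tests against observed participant behavior.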
In SAE Level 3 automated driving, taking over control from automation raises significant safety concerns, because drivers out of the vehicle control loop have difficulty negotiating takeover transitions. Existing studies on takeover transitions have focused on drivers' behavioral responses to takeover requests (TORs). As a complement, this exploratory study aimed to examine drivers' psychophysiological responses to TORs as a result of varying non-driving-related tasks (NDRTs), traffic density, and TOR lead time. A total of 102 drivers were recruited, and each of them experienced 8 takeover events in a high-fidelity fixed-base driving simulator. Drivers' gaze behaviors, heart rate (HR) activities, galvanic skin responses (GSRs), and facial expressions were recorded and analyzed during two stages. First, during the automated driving stage, we found that drivers had lower heart rate variability, narrower horizontal gaze dispersion, and shorter eyes-on-road time under a high level of cognitive load relative to a low level of cognitive load. Second, during the takeover transition stage, a 4 s lead time led to reduced blink counts and larger maximum and mean GSR phasic activation compared to a 7 s lead time, whilst heavy traffic density resulted in stronger HR acceleration patterns than light traffic density. Our results showed that psychophysiological measures can indicate specific internal states of drivers, including their workload, emotions, attention, and situation awareness, in a continuous, non-invasive, and real-time manner. The findings provide additional support for the value of psychophysiological measures in automated driving and for future applications in driver monitoring systems and adaptive alert systems.
