The Internet-of-Things, complex sensor networks, and multi-agent cyber-physical systems are all examples of spatially distributed systems that continuously evolve in time. Such systems generate huge amounts of spatio-temporal data, and system designers are often interested in analyzing and discovering structure within the data. There has been considerable interest in learning causal and logical properties of temporal data using logics such as Signal Temporal Logic (STL); however, there is limited work on discovering such relations in spatio-temporal data. We propose the first set of algorithms for unsupervised learning of spatio-temporal data. Our method performs automatic feature extraction from the spatio-temporal data by projecting it onto the parameter space of a parametric spatio-temporal reach and escape logic (PSTREL). We propose an agglomerative hierarchical clustering technique that guarantees that each cluster satisfies a distinct STREL formula. We show that our method generates STREL formulas of bounded description complexity using a novel decision-tree approach which generalizes previous unsupervised learning techniques for Signal Temporal Logic. We demonstrate the effectiveness of our approach on case studies from diverse domains such as urban transportation, epidemiology, green infrastructure, and air quality monitoring.
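The clustering step can be illustrated with a small sketch (an assumption-laden illustration, not the authors' implementation): each spatio-temporal trace is projected to a point in the PSTREL parameter space by a placeholder `project_to_pstrel_parameters` function, and agglomerative hierarchical clustering is then run on those points; in the actual method, each resulting cluster would further be characterized by a distinct STREL formula via the decision-tree procedure.

```python
# Sketch only: hierarchical clustering of hypothetical PSTREL parameter-space
# features with scikit-learn. The projection step is a stand-in placeholder.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def project_to_pstrel_parameters(traces):
    """Hypothetical placeholder: map each spatio-temporal trace to the tightest
    valuation of a parametric STREL template (e.g., a distance threshold and a
    time horizon). Random features are returned purely for illustration."""
    rng = np.random.default_rng(0)
    return rng.random((len(traces), 2))   # two template parameters per trace

traces = [f"trace_{i}" for i in range(100)]        # stand-in for real sensor traces
features = project_to_pstrel_parameters(traces)    # one point per trace in parameter space

# Agglomerative hierarchical clustering in the parameter space; each cluster
# would then be summarized by its own (non-parametric) STREL formula.
labels = AgglomerativeClustering(n_clusters=3, linkage="complete").fit_predict(features)
print(np.bincount(labels))                         # cluster sizes
```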
To accommodate the unprecedented growth of commercial air traffic over the next ten years, the Next Generation Air Transportation System (NextGen) has been implemented in the USA; it records large-scale Air Traffic Management (ATM) data with the goal of making air travel safer, more efficient, and more economical. Accurate flight delay prediction plays a key role in collaborative decision making for air traffic scheduling and airspace resource management. There have been many attempts to apply data-driven methods such as machine learning to forecast flight delays using air traffic data of departures and arrivals. However, most of them omit the en-route spatial information of flights and the temporal correlation between serial flights, which results in inaccurate predictions. In this paper, we present a novel aviation delay prediction system based on stacked Long Short-Term Memory (LSTM) networks for commercial flights. The system learns from historical trajectories extracted from automatic dependent surveillance-broadcast (ADS-B) messages and uses the correlated geolocations to collect indispensable features such as climatic elements, air traffic, airspace, and human factors data along posterior routes. These features are integrated and then fed into our proposed regression model. The latent spatio-temporal patterns in the data are abstracted and learned by the LSTM architecture. Compared with previous schemes, our approach is demonstrated to be more robust and accurate for large hub airports.
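As a rough sketch of what a stacked-LSTM delay regressor could look like (layer sizes, feature dimension, and the `DelayLSTM` name are illustrative assumptions, not the authors' code): per-waypoint feature vectors are fed through stacked recurrent layers, and the last hidden state is mapped to a single delay estimate.

```python
# Minimal sketch, not the paper's implementation: a stacked-LSTM regressor over
# per-waypoint features (weather, traffic, airspace, human factors) -> delay.
import torch
import torch.nn as nn

class DelayLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64, layers=2):
        super().__init__()
        # "stacked" LSTM: num_layers > 1 stacks recurrent layers on top of each other
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regression head: predicted delay (minutes)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last time step's hidden state

model = DelayLSTM()
dummy = torch.randn(8, 20, 16)             # 8 flights, 20 waypoints, 16 features each
print(model(dummy).shape)                  # torch.Size([8, 1])
```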
One of the advantages of adopting a Model Based Development (MBD) process is that it enables testing and verification at early stages of development. However, it is often desirable not only to verify/falsify certain formal system specifications, but also to automatically explore the properties that the system satisfies. In this work, we present a framework that enables property exploration for Cyber-Physical Systems. Namely, given a parametric specification with multiple parameters, our solution can automatically infer the ranges of parameters for which the property does not hold on the system. In this paper, we consider parametric specifications in Metric Temporal Logic (MTL) or Signal Temporal Logic (STL). Using the robust semantics of MTL, the parameter mining problem can be converted into a Pareto optimization problem for which we can provide an approximate solution by utilizing stochastic optimization methods. We include algorithms for the exploration and visualization of multi-parametric specifications. The framework is demonstrated on an industrial-size, high-fidelity engine model as well as on examples from the related literature.
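To make the reduction from parameter mining to optimization concrete, here is a toy single-parameter sketch under simplifying assumptions (a hypothetical simulator and the specification "always speed <= theta"); the actual framework handles multi-parameter specifications, Pareto fronts, and stochastic optimizers rather than exhaustive sampling.

```python
# Toy sketch, not the framework itself: robust semantics turn parameter mining
# into optimization over simulations. Spec: G (speed <= theta).
import random

def simulate(seed, horizon=50):
    """Stand-in for the system under test: returns a speed trace."""
    rng = random.Random(seed)
    return [80 + rng.uniform(-5, 25) for _ in range(horizon)]

def robustness(trace, theta):
    """Robustness of 'always (speed <= theta)': min over time of (theta - speed)."""
    return min(theta - s for s in trace)

def mine_threshold(n_sims=200):
    """Approximate the boundary value of theta: the property is falsified
    (negative robustness) for any theta below the worst observed speed."""
    return max(max(simulate(i)) for i in range(n_sims))

theta_star = mine_threshold()
print(f"spec falsified (approximately) for theta < {theta_star:.2f}")
print("robustness of one trace at theta_star - 1:", robustness(simulate(0), theta_star - 1))
```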
Inspired by the observation that humans are able to process videos efficiently by paying attention only where and when it is needed, we propose an interpretable, easy-to-plug-in spatial-temporal attention mechanism for video action recognition. For spatial attention, we learn a saliency mask that allows the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a convolutional LSTM based attention mechanism to identify the most relevant frames of an input video. Further, we propose a set of regularizers to ensure that our attention mechanism attends to coherent regions in space and time. Our model not only improves video action recognition accuracy, but also localizes discriminative regions both spatially and temporally, despite being trained in a weakly supervised manner with only classification labels (no bounding box labels or temporal frame labels). We evaluate our approach on several public video action recognition datasets with ablation studies. Furthermore, we quantitatively and qualitatively evaluate our model's ability to localize discriminative regions spatially and critical frames temporally. Experimental results demonstrate the efficacy of our approach, showing superior or comparable accuracy to state-of-the-art methods while increasing model interpretability.
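A minimal sketch of the spatial-attention idea follows (a softmax saliency mask over CNN feature maps; the `SpatialAttention` module name and sizes are assumptions, not the paper's exact architecture).

```python
# Sketch only: a spatial attention block that learns a per-location saliency mask
# over a frame's feature map and reweights the features accordingly.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolution produces one saliency logit per spatial location
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):                        # feat: (B, C, H, W)
        logits = self.score(feat)                   # (B, 1, H, W)
        b, _, h, w = logits.shape
        mask = torch.softmax(logits.view(b, -1), dim=-1).view(b, 1, h, w)
        return feat * mask, mask                    # attended features + saliency mask

attn = SpatialAttention(channels=256)
features = torch.randn(4, 256, 14, 14)              # e.g., frame-level CNN features
attended, mask = attn(features)
print(attended.shape, mask.shape)
```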
Spatio-temporal forecasting is of great importance in a wide range of dynamical-systems applications, from atmospheric science to recent COVID-19 spread modeling. These applications rely on accurate predictions of spatio-temporal structured data reflecting real-world phenomena. A striking characteristic is that such dynamical systems are not only driven by physical laws but also influenced by localized factors in particular spatial and temporal regions. One of the major challenges is to infer the underlying causes that generate the perceived data stream and propagate the involved causal dynamics through the distributed observing units. Another challenge is that the success of machine learning based predictive models requires massive annotated data for model training. However, the acquisition of high-quality annotated data is manual and tedious, as it requires a considerable amount of human intervention, making it infeasible in fields that demand high levels of expertise. To tackle these challenges, we advocate a spatio-temporal physics-coupled neural network (ST-PCNN) model that learns the underlying physics of the dynamical system and couples the learned physics to assist the learning of the recurring dynamics. To deal with data-acquisition constraints, an active learning mechanism with Kriging is proposed for actively acquiring the most informative data for ST-PCNN training in a partially observable environment. Our experiments on both synthetic and real-world datasets show that the proposed ST-PCNN with active learning converges to near-optimal accuracy with substantially fewer training instances.
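The Kriging-based acquisition step can be sketched as follows, assuming a generic Gaussian-process surrogate and a synthetic field (an illustration of the active-learning idea, not the ST-PCNN implementation): fit the surrogate to the currently observed locations and query the candidate location with the largest predictive variance.

```python
# Sketch only: variance-based active learning with a Gaussian process (Kriging).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
pool = rng.uniform(0, 10, size=(200, 2))              # candidate observation locations (x, y)
labeled_idx = list(rng.choice(200, size=5, replace=False))
field = lambda p: np.sin(p[:, 0]) + np.cos(p[:, 1])   # hypothetical physical field to learn

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(
        pool[labeled_idx], field(pool[labeled_idx]))
    _, std = gp.predict(pool, return_std=True)
    std[labeled_idx] = -np.inf                         # never re-query an observed point
    labeled_idx.append(int(np.argmax(std)))            # acquire the most uncertain location

print("queried location indices:", labeled_idx)
```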
Deep learning models are modern tools for spatio-temporal graph (STG) forecasting. Despite their effectiveness, they require large-scale datasets to achieve good performance and are vulnerable to noise perturbations. To alleviate these limitations, an intuitive idea is to use the popular data augmentation and contrastive learning techniques. However, existing graph contrastive learning methods cannot be directly applied to STG forecasting for three reasons. First, we empirically discover that the forecasting task is unable to benefit from the pretrained representations derived from contrastive learning. Second, data augmentations that are used for combating noise are less explored for STG data. Third, the semantic similarity of samples has been overlooked. In this paper, we propose a Spatio-Temporal Graph Contrastive Learning framework (STGCL) to tackle these issues. Specifically, we improve performance by integrating the forecasting loss with an auxiliary contrastive loss rather than using a pretraining paradigm. We elaborate on four types of data augmentations that disturb data in terms of graph structure, time domain, and frequency domain. We also extend the classic contrastive loss through a rule-based strategy that filters out the most semantically similar negatives. Our framework is evaluated across three real-world datasets and four state-of-the-art models. The consistent improvements demonstrate that STGCL can be used as an off-the-shelf plug-in for existing deep models.
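As an illustration of combining a forecasting objective with an auxiliary contrastive term (a minimal sketch with dummy tensors; the weighting coefficient `lam` and the InfoNCE-style formulation are assumptions rather than STGCL's released code):

```python
# Sketch only: joint forecasting + contrastive objective on a batch of STG samples.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Contrastive loss: matching rows of z1/z2 are positives, other rows negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Dummy tensors standing in for a model's outputs on two augmented views
pred, truth = torch.randn(32, 12), torch.randn(32, 12)          # 12-step forecasts
z_view1, z_view2 = torch.randn(32, 64), torch.randn(32, 64)     # view representations

lam = 0.1                                                       # auxiliary-loss weight (assumed)
loss = F.mse_loss(pred, truth) + lam * info_nce(z_view1, z_view2)
print(float(loss))
```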