
Time Perception Machine: Temporal Point Processes for the When, Where and What of Activity Prediction

 Added by Yatao Zhong
 Publication date 2018
Research language: English





Numerous powerful point process models have been developed to understand temporal patterns in sequential data from fields such as health-care, electronic commerce, social networks, and natural disaster forecasting. In this paper, we develop novel models for learning the temporal distribution of human activities in streaming data (e.g., videos and person trajectories). We propose an integrated framework of neural networks and temporal point processes for predicting when the next activity will happen. Because point processes are limited to taking event frames as input, we propose a simple yet effective mechanism to extract features at frames of interest while also preserving the rich information in the remaining frames. We evaluate our model on two challenging datasets. The results show that our model outperforms traditional statistical point process approaches significantly, demonstrating its effectiveness in capturing the underlying temporal dynamics as well as the correlation within sequential activities. Furthermore, we also extend our model to a joint estimation framework for predicting the timing, spatial location, and category of the activity simultaneously, to answer the when, where, and what of activity prediction.
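The framework couples a neural network over frame features with a temporal point-process likelihood. As a rough, generic sketch (not the authors' exact architecture), an RMTPP-style conditional intensity λ(t) = exp(v·h + w·t + b) over the time since the last event admits a closed-form inter-event density; the parameter values below are toy assumptions:

```python
import numpy as np

def rmtpp_log_density(tau, h, v, w, b):
    """Log-density of the next inter-event gap tau under an RMTPP-style
    conditional intensity lambda(t) = exp(v.h + w*t + b), where h is a
    hidden state summarizing the activity history.
    log f(tau) = log lambda(tau) - integral_0^tau lambda(s) ds,
    and the integral has the closed form (exp(base + w*tau) - exp(base)) / w."""
    base = v @ h + b
    return base + w * tau - (np.exp(base + w * tau) - np.exp(base)) / w

def expected_next_time(h, v, w, b, horizon=50.0, n=200_000):
    """Predict the next event time by numerically integrating tau * f(tau)."""
    taus = np.linspace(0.0, horizon, n)
    dens = np.exp(rmtpp_log_density(taus, h, v, w, b))
    dx = taus[1] - taus[0]
    return float((taus * dens).sum() * dx)
```

Training would maximize the summed log-density over observed gaps; here the closed-form integral is what makes that likelihood cheap to evaluate.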


Read More

We review the current state of empirical knowledge of the total budget of baryonic matter in the Universe as observed since the epoch of reionization. Our summary focuses on three milestone redshifts since the reionization of H in the IGM (z = 3, 1, and 0), with emphasis on the endpoints. We review the observational techniques used to discover and characterize the phases of baryons. In the spirit of the meeting, the level is aimed at a diverse and non-expert audience, and additional attention is given to describing how space missions expected to launch within the next decade will impact this scientific field.
Explaining the decision of a multi-modal decision-maker requires determining the evidence from both modalities. Recent advances in XAI provide explanations for models trained on still images. However, when it comes to modeling multiple sensory modalities in a dynamic world, it remains underexplored how to demystify the dynamics of a complex multi-modal model. In this work, we take a crucial step forward and explore learnable explanations for audio-visual recognition. Specifically, we propose a novel space-time attention network that uncovers the synergistic dynamics of audio and visual data over both space and time. Our model is capable of predicting audio-visual video events, while justifying its decisions by localizing where the relevant visual cues appear and when the predicted sounds occur in videos. We benchmark our model on three audio-visual video event datasets, comparing extensively to multiple recent multi-modal representation learners and intrinsic explanation models. Experimental results demonstrate the clearly superior performance of our model over existing methods on audio-visual video event recognition. Moreover, we conduct an in-depth study to analyze the explainability of our model based on robustness analysis via perturbation tests and pointing games using human annotations.
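As a loose illustration of the two-stage idea (not the paper's network), audio features can act as queries that first attend over spatial locations within each frame ("where") and then over frames ("when"); the shapes and dot-product scoring below are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def space_time_attention(visual, audio):
    """visual: (T, HW, D) per-frame spatial features; audio: (T, D).
    Returns a pooled clip feature plus 'where' and 'when' attention maps."""
    T, HW, D = visual.shape
    scale = np.sqrt(D)
    # Where: the audio feature scores each spatial location within its frame.
    where = softmax(np.einsum('tnd,td->tn', visual, audio) / scale, axis=1)
    pooled = np.einsum('tn,tnd->td', where, visual)          # (T, D)
    # When: attend over frames of the spatially pooled features.
    when = softmax((pooled * audio).sum(-1) / scale)         # (T,)
    clip = when @ pooled                                     # (D,)
    return clip, where, when
```

The returned `where` and `when` maps are exactly the kind of by-product that lets such a model justify its predictions by localization.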
Yu Yao, Xizi Wang, Mingze Xu (2020)
Video anomaly detection (VAD) has been extensively studied. However, research on egocentric traffic videos with dynamic scenes lacks large-scale benchmark datasets as well as effective evaluation metrics. This paper proposes traffic anomaly detection with a when-where-what pipeline to detect, localize, and recognize anomalous events from egocentric videos. We introduce a new dataset called Detection of Traffic Anomaly (DoTA) containing 4,677 videos with temporal, spatial, and categorical annotations. A new spatial-temporal area under curve (STAUC) evaluation metric is proposed and used with DoTA. State-of-the-art methods are benchmarked for two VAD-related tasks. Experimental results show STAUC is an effective VAD metric. To our knowledge, DoTA is the largest traffic anomaly dataset to date and is the first supporting traffic anomaly studies across when-where-what perspectives. Our code and dataset can be found at: https://github.com/MoonBlvd/Detection-of-Traffic-Anomaly
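The full STAUC metric additionally credits spatial localization; its purely temporal ingredient reduces to an ordinary frame-level ROC AUC, which this generic rank-based sketch (not the DoTA code) computes:

```python
import numpy as np

def frame_auc(scores, labels):
    """Frame-level ROC AUC via the rank-sum (Mann-Whitney U) identity.
    scores: per-frame anomaly scores; labels: 1 = anomalous frame.
    Assumes untied scores for simplicity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels).astype(bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfect detector ranks every anomalous frame above every normal one and scores 1.0; chance-level scoring gives 0.5.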
In times marked by political turbulence and uncertainty, as well as increasing divisiveness and hyperpartisanship, governments need to use every tool at their disposal to understand and respond to the concerns of their citizens. We study issues raised by the UK public to the Government during 2015-2017 (surrounding the UK EU-membership referendum), mining public opinion from a dataset of 10,950 petitions (representing 30.5 million signatures). We extract the main issues with a ground-up natural language processing (NLP) method, latent Dirichlet allocation (LDA). We then investigate their temporal dynamics and geographic features. We show that whilst the popularity of some issues is stable across the two years, others are highly influenced by external events, such as the referendum in June 2016. We also study the relationship between petition issues and where their signatories are geographically located. We show that some issues receive support from across the whole country, while others are far more local. We then identify six distinct clusters of constituencies based on the issues for which their constituents sign petitions. Finally, we validate our approach by comparing the petition issues with the top issues reported in Ipsos MORI survey data. These results show the power of computationally analyzing petitions to understand not only what issues citizens are concerned about, but also when and from where.
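LDA itself can be sketched compactly. The snippet below is a minimal collapsed Gibbs sampler over toy integer-coded documents, illustrative of the method only and unrelated to the petitions pipeline:

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA. docs: list of lists of
    integer word ids. Returns (theta, phi): per-document topic mixtures
    and topic-word distributions."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))      # doc-topic counts
    nkw = np.zeros((n_topics, vocab_size))     # topic-word counts
    nk = np.zeros(n_topics)                    # topic totals
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for w, k in zip(doc, z[d]):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]  # remove current assignment, then resample
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + beta * vocab_size)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)
    return theta, phi
```

On a real corpus one would map each petition's words to ids via a vocabulary; `theta` then gives the per-petition issue mixture analyzed over time and geography.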
Qi Yang, Siheng Chen, Yiling Xu (2021)
Distortion quantification of point clouds plays a subtle yet vital role in a wide range of human and machine perception tasks. For human perception tasks, a distortion quantification can substitute for subjective experiments to guide 3D visualization; for machine perception tasks, it can work as a loss function to guide the training of deep neural networks for unsupervised learning tasks. To handle the variety of demands across applications, a distortion quantification needs to be distortion discriminable, differentiable, and of low computational complexity. Currently, however, there is no general distortion quantification that satisfies all three conditions. To fill this gap, this work proposes multiscale potential energy discrepancy (MPED), a distortion quantification measuring point cloud geometry and color difference. By evaluating at various neighborhood sizes, MPED achieves global-local tradeoffs, capturing distortion in a multiscale fashion. Extensive experimental studies validate MPED's superiority for both human and machine perception tasks.
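The paper defines MPED precisely; the toy sketch below only mirrors the multiscale idea, comparing a per-point k-nearest-neighbour distance statistic across several neighborhood sizes, and assumes point-to-point correspondence between the two clouds (which the real metric does not require):

```python
import numpy as np

def knn_energy(cloud, k):
    """Per-point mean distance to the k nearest neighbours (self excluded)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)

def multiscale_discrepancy(ref, dist, scales=(4, 8, 16)):
    """Toy multiscale geometry discrepancy between corresponding point
    clouds of shape (N, 3): average, over neighbourhood sizes, of the
    mean absolute difference in local k-NN energy."""
    return float(np.mean([np.abs(knn_energy(ref, k) - knn_energy(dist, k)).mean()
                          for k in scales]))
```

Because every term is built from differentiable distance computations, a statistic of this shape can also serve as a training loss, which is the machine-perception use case the abstract describes.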