
The Quo Vadis submission at Traffic4cast 2019

 Published by: Dan Oneata
 Publication date: 2019
 Research field: Informatics Engineering
 Paper language: English





We describe the submission of the Quo Vadis team to the Traffic4cast competition, which was organized as part of the NeurIPS 2019 series of challenges. Our system consists of a temporal regression module, implemented as $1\times1$ 2D convolutions, augmented with spatio-temporal biases. We have found that using biases is a straightforward and efficient way to include seasonal patterns and to improve the performance of the temporal regression model. Our implementation obtains a mean squared error of $9.47\times 10^{-3}$ on the test data, placing us in eighth place in the team ranking. We also present our attempts at incorporating spatial correlations into the model; however, contrary to our expectations, adding this type of auxiliary information did not benefit the main system. Our code is available at https://github.com/danoneata/traffic4cast.
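The abstract does not spell out the architecture in detail; below is a minimal PyTorch sketch of how a temporal regressor built from $1\times1$ 2D convolutions with additive spatio-temporal biases could look. The module name, tensor shapes, and the bias parameterization (a per-pixel spatial bias plus a per-time-of-day bias) are assumptions made for illustration only; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class TemporalRegression(nn.Module):
    """Per-pixel linear regression over the input frames (a 1x1 2D convolution),
    augmented with additive spatial and temporal biases.

    Shapes and the bias parameterization are illustrative assumptions,
    not the authors' exact implementation.
    """

    def __init__(self, n_in_frames=12, n_out_frames=3, n_channels=3,
                 n_time_slots=288, height=495, width=436):
        super().__init__()
        in_ch = n_in_frames * n_channels    # previous hour, frames stacked as channels
        out_ch = n_out_frames * n_channels  # 15-minute horizon to predict
        # Temporal regression: each output pixel is a linear combination
        # of the values at the same pixel across the input frames.
        self.regress = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # Spatial bias: one learned offset per output channel and pixel.
        self.spatial_bias = nn.Parameter(torch.zeros(out_ch, height, width))
        # Temporal bias: one learned offset per time-of-day slot and output channel.
        self.temporal_bias = nn.Parameter(torch.zeros(n_time_slots, out_ch))

    def forward(self, frames, time_slot):
        # frames:    (batch, n_in_frames * n_channels, height, width)
        # time_slot: (batch,) integer index of the 5-minute slot of the day
        out = self.regress(frames)
        out = out + self.spatial_bias                            # broadcast over batch
        out = out + self.temporal_bias[time_slot][..., None, None]
        return out
```

Under these assumed shapes, a batch of two samples of the previous hour would be a tensor of shape (2, 36, 495, 436) together with two time-of-day indices, and the module would return the (2, 9, 495, 436) predicted frames. The appeal of this formulation is that the seasonal structure lives entirely in the bias terms, while the $1\times1$ convolution keeps the regression purely per-pixel and cheap to train.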


Read also

Monika Blanke, 2017
We review the recent highlights of theoretical flavour physics, based on the theory summary talk given at FPCP2017. Over the past years, a number of intriguing anomalies have emerged in flavour violating $K$ and $B$ meson decays, constituting some of the most promising hints for the presence of physics beyond the Standard Model. We discuss the theory status of these anomalies and outline possible future directions to test the underlying New Physics.
The goal of the IARAI competition traffic4cast was to predict the city-wide traffic status within a 15-minute time window, based on information from the previous hour. The traffic status was given as multi-channel images (one pixel roughly corresponds to 100x100 meters), where one channel indicated the traffic volume, another one the average speed of vehicles, and a third one their rough heading. As part of our work on the competition, we evaluated many different network architectures, analyzed the statistical properties of the given data in detail, and thought about how to transform the problem to be able to take additional spatio-temporal context information into account, such as the street network, the positions of traffic lights, or the weather. This document summarizes our efforts that led to our best submission, gives some insights about which other approaches we evaluated, and explains why they did not work as well as expected.
We present the mini-proceedings of the workshops Hadronic contributions to the muon anomalous magnetic moment: strategies for improvements of the accuracy of the theoretical prediction and $(g-2)_\mu$: Quo vadis?, both held in Mainz from April 1$^{\rm st}$ to 5$^{\rm th}$ and from April 7$^{\rm th}$ to 10$^{\rm th}$, 2014, respectively.
This work describes the speaker verification system developed by the Human Language Technology Laboratory, National University of Singapore (HLT-NUS) for the 2019 NIST Multimedia Speaker Recognition Evaluation (SRE). Multimedia research has gained attention across a wide range of applications, and speaker recognition is no exception. In contrast to previous NIST SREs, the latest edition focuses on a multimedia track to recognize speakers with both audio and visual information. We developed separate systems for audio and visual inputs, followed by a score-level fusion of the systems from the two modalities to collectively use their information. The audio systems are based on x-vector speaker embeddings, whereas the face recognition systems are based on ResNet and InsightFace face embeddings. With post-evaluation studies and refinements, we obtain an equal error rate (EER) of 0.88% and an actual detection cost function (actDCF) of 0.026 on the evaluation set of the 2019 NIST multimedia SRE corpus.
This technical report presents an overview of our solution used in the submission to ActivityNet Challenge 2019 Task 1 (temporal action proposal generation) and Task 2 (temporal action localization/detection). Temporal action proposals indicate the temporal intervals containing the actions and play an important role in temporal action localization. Top-down and bottom-up methods are the two main categories used for proposal generation in the existing literature. In this paper, we devise a novel Multi-Granularity Fusion Network (MGFN) to combine the proposals generated from different frameworks for complementary filtering and confidence re-ranking. Specifically, we consider diversity comprehensively from multiple perspectives, e.g. the characteristic aspect, the data aspect, the model aspect and the result aspect. Our MGFN achieves state-of-the-art performance on the temporal action proposal task with a 69.85 AUC score and on the temporal action localization task with 38.90 mAP on the challenge testing set.
