
A Baseline for the Commands For Autonomous Vehicles Challenge

Added by Simon Vandenhende
Publication date: 2020
Language: English





The Commands For Autonomous Vehicles (C4AV) challenge requires participants to solve an object referral task in a real-world setting. More specifically, we consider a scenario where a passenger can pass free-form natural language commands to a self-driving car. This problem is particularly challenging, as the language is much less constrained compared to existing benchmarks, and object references are often implicit. The challenge is based on the recent Talk2Car dataset. This document provides a technical overview of a model that we released to help participants get started in the competition. The code can be found at https://github.com/talk2car/Talk2Car.
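To make the object referral task concrete, the following minimal sketch shows the kind of baseline the abstract describes: score pre-extracted region proposals against an embedding of the natural language command and pick the best-matching region. All class names, feature dimensions, and the matching scheme below are illustrative assumptions; the actual released model is in the linked repository.

# Illustrative sketch only -- NOT the released Talk2Car baseline. It assumes
# region features from an off-the-shelf detector and a pooled sentence
# embedding for the command, matched in a joint embedding space.
import torch
import torch.nn as nn

class ReferralBaseline(nn.Module):
    def __init__(self, region_dim=2048, text_dim=768, joint_dim=512):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, joint_dim)  # image side
        self.text_proj = nn.Linear(text_dim, joint_dim)      # language side

    def forward(self, region_feats, command_emb):
        # region_feats: (batch, num_regions, region_dim)
        # command_emb:  (batch, text_dim)
        r = self.region_proj(region_feats)                   # (B, N, D)
        t = self.text_proj(command_emb).unsqueeze(1)         # (B, 1, D)
        scores = (r * t).sum(dim=-1)                         # dot-product similarity (B, N)
        return scores                                        # argmax gives the referred region

At training time, such scores would typically be supervised with a cross-entropy loss over the region containing the ground-truth object.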




Related research

The task of visual grounding requires locating the most relevant region or object in an image, given a natural language query. So far, progress on this task has mostly been measured on curated datasets, which are not always representative of human spoken language. In this work, we deviate from recent, popular task settings and consider the problem under an autonomous vehicle scenario. In particular, we consider a situation where passengers can give free-form natural language commands to a vehicle, which can be associated with an object in the street scene. To stimulate research on this topic, we have organized the Commands for Autonomous Vehicles (C4AV) challenge based on the recent Talk2Car dataset (URL: https://www.aicrowd.com/challenges/eccv-2020-commands-4-autonomous-vehicles). This paper presents the results of the challenge. First, we compare the used benchmark against existing datasets for visual grounding. Second, we identify the aspects that render top-performing models successful, and relate them to existing state-of-the-art models for visual grounding; in addition, we detect potential failure cases by evaluating on carefully selected subsets. Finally, we discuss several possibilities for future work.
Pedestrians are arguably one of the most safety-critical road users to consider for autonomous vehicles in urban areas. In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes from a single image. These encompass visual appearance and behavior, and also include the forecasting of road crossing, which is a main safety concern. For this, we introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way. Each field spatially locates pedestrian instances and aggregates attribute predictions over them. This formulation naturally leverages spatial context, making it well suited to low resolution scenarios such as autonomous driving. By increasing the number of attributes jointly learned, we highlight an issue related to the scales of gradients, which arises in MTL with numerous tasks. We solve it by normalizing the gradients coming from different objective functions when they join at the fork in the network architecture during the backward pass, referred to as fork-normalization. Experimental validation is performed on JAAD, a dataset providing numerous attributes for pedestrian analysis from autonomous vehicles, and shows competitive detection and attribute recognition results, as well as a more stable MTL training.
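As a rough illustration of the gradient-scale issue and the normalization idea described in this abstract, the sketch below rescales each task's gradient to unit norm at the point where the task heads fork from the shared trunk, then continues the backward pass with their sum. This is a schematic under our own assumptions, not the paper's fork-normalization implementation.

# Schematic sketch of normalizing per-task gradients at the fork between a
# shared trunk and multiple task heads (illustration of the idea only; head
# parameters would be updated from their own per-task losses separately).
import torch

def backward_with_fork_normalization(shared_feat, task_losses, eps=1e-8):
    # shared_feat: trunk output at the fork (requires_grad=True)
    # task_losses: one scalar loss per task/attribute head
    normalized = []
    for loss in task_losses:
        g, = torch.autograd.grad(loss, shared_feat, retain_graph=True)
        normalized.append(g / (g.norm() + eps))  # equalize gradient scales
    # Continue the backward pass through the trunk with the balanced sum.
    shared_feat.backward(torch.stack(normalized).sum(dim=0))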
Majid Khonji, Jorge Dias, 2019
A significant barrier to deploying autonomous vehicles (AVs) on a massive scale is safety assurance. Several technical challenges arise due to the uncertain environment in which AVs operate, such as road and weather conditions, errors in perception and sensory data, and model inaccuracy. In this paper, we propose a system architecture for risk-aware AVs capable of reasoning about uncertainty and deliberately bounding the risk of collision below a given threshold. We discuss key challenges in the area, highlight recent research developments, and propose future research directions in three subsystems. First, a perception subsystem that detects objects within a scene while quantifying the uncertainty that arises from different sensing and communication modalities. Second, an intention recognition subsystem that predicts the driving style and the intention of agent vehicles (and pedestrians). Third, a planning subsystem that takes the uncertainty from the perception and intention recognition subsystems into account and propagates it all the way to control policies that explicitly bound the risk of collision. We believe that such a white-box approach is crucial for the future adoption of AVs on a large scale.
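As a toy illustration of what bounding the risk of collision below a given threshold can mean at the planning level, the sketch below filters candidate trajectories by an estimated collision probability before optimizing utility. It is a simplification under our own assumptions, not the proposed architecture; in the paper's design, the risk estimates would come from the perception and intention recognition subsystems.

# Toy chance-constrained plan selection (illustrative only): keep trajectories
# whose estimated collision probability stays within the risk bound, then
# pick the highest-utility one among them.
def select_plan(candidates, risk_bound=0.01):
    # candidates: iterable of (trajectory, collision_probability, utility)
    safe = [c for c in candidates if c[1] <= risk_bound]
    if not safe:
        return None  # no plan satisfies the bound; trigger a fallback maneuver
    return max(safe, key=lambda c: c[2])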
The energy of ocean waves is the key distinguishing factor of marine environments compared to other aquatic environments such as lakes and rivers. Waves significantly affect the dynamics of marine vehicles; hence it is imperative to consider the dynamics of vehicles in waves when developing efficient control strategies for autonomous surface vehicles (ASVs). However, most open-source marine simulators either exclude the dynamics of vehicles in waves or use methods with high computational overhead. This paper presents ASVLite, a computationally efficient ASV simulator that uses frequency-domain analysis for wave force computation. ASVLite is suitable for applications requiring low computational overhead and high run-time performance. Our tests on a Raspberry Pi 2 and a mid-range desktop computer show that the simulator is fast enough to efficiently simulate irregular waves with up to 260 component waves and large-scale swarms of up to 500 ASVs.
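To illustrate the frequency-domain idea behind simulating irregular waves, here is a small sketch that models sea-surface elevation as a superposition of regular component waves, using the deep-water dispersion relation k = w^2/g. This is our own illustration of the general technique, not ASVLite code; its actual implementation should be consulted in its repository.

# Illustration of irregular waves as a sum of regular components (not ASVLite
# code). Each component has an amplitude, angular frequency, and random phase;
# deep-water dispersion gives the wave number k = w**2 / g.
import math, random

G = 9.81  # gravitational acceleration, m/s^2

def elevation(x, t, components):
    # components: list of (amplitude, angular_frequency, phase) tuples
    return sum(a * math.cos((w * w / G) * x - w * t + phi)
               for a, w, phi in components)

# Example: 260 components, matching the component count in the reported test.
comps = [(0.1, 0.5 + 0.01 * i, random.uniform(0, 2 * math.pi))
         for i in range(260)]
print(elevation(x=0.0, t=10.0, components=comps))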
Dong Cao, Lisha Xu, 2019
Pedestrian action recognition and intention prediction is one of the core issues in the field of autonomous driving, and action recognition is one of its key technologies. A large number of scholars have worked to improve the accuracy of algorithms for this task, but there are relatively few studies on the computational complexity of the algorithms and on system real-time performance. In autonomous driving scenarios, real-time performance and ultra-low latency are critical evaluation criteria, directly related to the availability and safety of the autonomous driving system. To this end, we construct a bypass-enhanced RGB flow model, which combines the previous two-branch algorithm to extract RGB feature information and optical flow feature information respectively. In the training phase, the two branches are merged by a distillation method, and the bypass enhancement is added in the inference phase to ensure accuracy. The real-time performance of the action recognition algorithm is significantly improved without a decrease in accuracy. Experiments confirm the superiority and effectiveness of our algorithm.
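The training-time merge of the two branches by distillation could look roughly like the following sketch, in which the RGB branch is trained to mimic a frozen optical-flow branch so that the expensive flow computation can be bypassed at inference. Function names, the KL-based distillation loss, and the weighting are our assumptions, not the authors' code.

# Rough sketch of merging two branches by distillation (assumed setup, not the
# authors' implementation): the RGB branch learns from labels and from the
# optical-flow branch, which can then be dropped at inference time.
import torch
import torch.nn.functional as F

def distillation_step(rgb_model, flow_model, rgb_clip, flow_clip, labels, alpha=0.5):
    rgb_logits = rgb_model(rgb_clip)
    with torch.no_grad():
        flow_logits = flow_model(flow_clip)  # teacher branch, kept frozen
    task_loss = F.cross_entropy(rgb_logits, labels)
    distill_loss = F.kl_div(F.log_softmax(rgb_logits, dim=-1),
                            F.softmax(flow_logits, dim=-1),
                            reduction="batchmean")
    return task_loss + alpha * distill_loss  # backpropagate into rgb_model only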
