
Driver Behavior Extraction from Videos in Naturalistic Driving Datasets with 3D ConvNets

Posted by: Hanwen Miao
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Naturalistic driving data (NDD) is an important source of information for understanding crash causation and human factors and for developing crash avoidance countermeasures. Videos recorded while driving are often included in such datasets. While NDD often contains a large amount of video, only a small portion of it can be annotated by human coders and used for research, leaving most of the video data underused. In this paper, we explored a computer vision method to automatically extract the information we need from videos. More specifically, we developed a 3D ConvNet algorithm to automatically extract cell-phone-related behaviors from videos. The experiments show that our method can extract chunks from videos, most of which (~79%) contain the automatically labeled cell phone behaviors. In conjunction with human review of the extracted chunks, this approach can find cell-phone-related driver behaviors much more efficiently than simply viewing the videos.
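As a rough illustration of the kind of model involved, the sketch below builds a small 3D ConvNet clip classifier in PyTorch. The architecture, input size, and binary cell-phone label are assumptions for illustration; the paper's exact network is not reproduced here.

```python
# A minimal sketch of a 3D ConvNet that scores video chunks for
# cell-phone-related behavior. Layer sizes and clip shape are illustrative.
import torch
import torch.nn as nn

class Simple3DConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 3, frames, height, width), e.g. 16 RGB frames.
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool space only at first
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),           # pool time and space
            nn.AdaptiveAvgPool3d(1),               # global average pool
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip).flatten(1)
        return self.classifier(x)

model = Simple3DConvNet()
clips = torch.randn(4, 3, 16, 112, 112)  # 4 chunks of 16 frames at 112x112
logits = model(clips)                     # (4, 2) scores per chunk
```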




Read also

Zhaobin Mo, Sisi Li, Diange Yang (2018)
It is necessary to thoroughly evaluate the effectiveness and safety of Connected Vehicle (CV) algorithms before their release and deployment. Current evaluation approaches mainly rely on simulation platforms with single-vehicle driving models; their main drawback is a lack of network realism. To overcome this problem, we extract naturalistic V2V encounter data from the database and then separate the primary vehicle-encounter categories by clustering. A fast mining algorithm that supports parallel querying is proposed to further accelerate the process. 4,500 encounters are mined from a 275 GB database collected in the Safety Pilot Model Program in Ann Arbor, Michigan, USA. K-means and Dynamic Time Warping (DTW) are used in clustering. Results show this method can quickly mine and cluster primary driving scenarios from a large database. Our results separate car-following, intersection, and by-passing encounters, which are the primary vehicle-encounter categories. We anticipate that this work can become a general method to effectively extract vehicle encounters from any existing database that contains vehicular GPS information. Moreover, the naturalistic data of different vehicle encounters can be applied to Connected Vehicle evaluation.
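The abstract pairs K-means clustering with Dynamic Time Warping. A minimal sketch of the DTW distance between two encounter trajectories is shown below; the trajectory extraction and the clustering step itself are omitted, and the example inputs are illustrative.

```python
# DTW distance between two trajectories, e.g. sequences of (x, y) relative
# positions from a V2V encounter, possibly sampled at different rates.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between (n, d) and (m, d) trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Two encounters with different sampling still compare sensibly.
enc_a = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float)
enc_b = np.array([[0, 0], [2, 2], [3, 3]], dtype=float)
print(dtw_distance(enc_a, enc_b))
```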
The use of naturalistic driving studies (NDSs) for driver behavior research has skyrocketed over the past two decades. Intersections are a key target for traffic safety, with up to 25 percent of fatalities and 50 percent of injuries from traffic crashes in the United States occurring at intersections. NDSs are increasingly being used to assess driver behavior at intersections and devise strategies to improve intersection safety. A common challenge in NDS intersection research is the need to combine the spatial locations of driver-visited intersections with concurrent video clips of driver trajectories at those intersections to extract analysis variables. The intersection identification and driver-trajectory video clip extraction processes are generally complex and repetitive. We developed a novel R package called ndsintxn to streamline this process and automate best practices to minimize computational time, cost, and manual labor. This paper provides details on the methods and illustrative examples used in the ndsintxn R package.
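The ndsintxn package itself is written in R; purely to illustrate the kind of matching step it automates, the Python sketch below flags which known intersections a trip's GPS trace passes through. The function names and the 30 m radius are hypothetical and not taken from the package.

```python
# Hypothetical sketch: match trip GPS points to known intersection centers.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visited_intersections(trip, intersections, radius_m=30.0):
    """Return ids of intersections whose center lies within radius_m
    of any (lat, lon) point on the trip."""
    hits = set()
    for lat, lon in trip:
        for ix_id, (ilat, ilon) in intersections.items():
            if haversine_m(lat, lon, ilat, ilon) <= radius_m:
                hits.add(ix_id)
    return hits
```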
3D point-clouds and 2D images are different visual representations of the physical world. While human vision can understand both representations, computer vision models designed for 2D image and 3D point-cloud understanding are quite different. Our paper investigates the potential for transferability between these two representations by empirically investigating whether this approach works, what factors affect the transfer performance, and how to make it work even better. We discovered that we can indeed use the same neural net model architectures to understand both images and point-clouds. Moreover, we can transfer pretrained weights from image models to point-cloud models with minimal effort. Specifically, based on a 2D ConvNet pretrained on an image dataset, we can transfer the image model to a point-cloud model by inflating 2D convolutional filters to 3D and then fine-tuning its input, output, and optionally normalization layers. The transferred model can achieve competitive performance on 3D point-cloud classification and indoor and driving scene segmentation, even beating a wide range of point-cloud models that adopt task-specific architectures and use a variety of tricks.
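The inflation step the abstract describes can be sketched concretely: a pretrained 2D kernel is repeated along a new temporal (or depth) axis and rescaled so that a constant input along that axis produces the same response as the original 2D layer. The layer shapes below are illustrative, not the paper's exact recipe.

```python
# Inflating a 2D convolution into a 3D one in PyTorch.
import torch
import torch.nn as nn

conv2d = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)  # "pretrained" 2D layer
depth = 3                                                      # new kernel depth

conv3d = nn.Conv3d(3, 64, kernel_size=(depth, 7, 7),
                   stride=(1, 2, 2), padding=(depth // 2, 3, 3))

with torch.no_grad():
    # (out, in, kH, kW) -> (out, in, depth, kH, kW), divided by depth so a
    # constant input along the new axis matches the 2D layer's response.
    w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
    conv3d.weight.copy_(w)
    conv3d.bias.copy_(conv2d.bias)

volume = torch.randn(1, 3, 8, 224, 224)  # (batch, C, D, H, W)
out = conv3d(volume)                      # inflated filters now see depth
```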
The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps; (ii) spatial fusion layers that learn an implicit spatial model; (iii) optical flow is used to align heatmap predictions from neighbouring frames; and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also Chen & Yuille and Tompson et al. in the high precision region).
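Step (iii), aligning heatmap predictions from neighbouring frames with optical flow, amounts to a dense warp. The sketch below implements such a warp with PyTorch's grid_sample, using a placeholder flow field and a fixed average as a stand-in for the learned parametric pooling layer; both are assumptions, not the paper's code.

```python
# Warp joint heatmaps from a neighbouring frame into the current frame
# using a dense optical-flow field, then combine them.
import torch
import torch.nn.functional as F

def warp_with_flow(heatmaps: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """heatmaps: (B, J, H, W) joint heatmaps from a neighbouring frame.
    flow: (B, 2, H, W) pixel offsets mapping each current-frame pixel to
    its source location in the neighbouring frame."""
    b, _, h, w = heatmaps.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    src = base + flow                                         # sample locations
    # Normalize to [-1, 1] and reorder to (B, H, W, 2) for grid_sample.
    src_x = 2.0 * src[:, 0] / (w - 1) - 1.0
    src_y = 2.0 * src[:, 1] / (h - 1) - 1.0
    grid = torch.stack((src_x, src_y), dim=-1)
    return F.grid_sample(heatmaps, grid, align_corners=True)

heatmaps = torch.rand(1, 16, 64, 64)     # 16 joints from a neighbouring frame
flow = torch.zeros(1, 2, 64, 64)         # zero flow: warp is the identity
aligned = warp_with_flow(heatmaps, flow)
pooled = 0.5 * heatmaps + 0.5 * aligned  # stand-in for learned parametric pooling
```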
R. Maqsood, U.I. Bajwa, G. Saleem (2021)
Anomalous activity recognition deals with identifying patterns and events that vary from the normal stream. In a surveillance paradigm, these events range from abuse to fighting and from road accidents to snatching, etc. Due to the sparse occurrence of anomalous events, anomalous activity recognition from surveillance videos is a challenging research task. The reported approaches can be generally categorized as handcrafted and deep-learning-based. Most reported studies address binary classification, i.e., anomaly detection from surveillance videos, and do not address other anomalous events, e.g., abuse, fighting, road accidents, shooting, stealing, vandalism, and robbery. Therefore, this paper aims to provide an effective framework for the recognition of different real-world anomalies from videos. This study provides a simple yet effective approach for learning spatiotemporal features using deep 3-dimensional convolutional networks (3D ConvNets) trained on the University of Central Florida (UCF) Crime video dataset. First, frame-level labels for the UCF Crime dataset are provided; then a fine-tuned 3D ConvNet is proposed to extract anomalous spatiotemporal features more efficiently. The findings of the proposed study are twofold: 1) there exist specific, detectable, and quantifiable features in the UCF Crime video feed that associate with each other; 2) multiclass learning can improve the generalization of the 3D ConvNets by effectively learning frame-level information of the dataset, and can yield better results when combined with spatial augmentation.
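As a rough sketch of the fine-tuning setup described, the code below adapts an off-the-shelf 3D ConvNet backbone (torchvision's r3d_18, used here purely as a stand-in for the paper's network) to multiclass anomaly recognition by replacing its final layer and freezing the earlier blocks. The class count and the choice of frozen layers are assumptions.

```python
# Fine-tune a 3D ConvNet backbone for multiclass anomaly recognition.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_ANOMALY_CLASSES = 14   # e.g. UCF-Crime: 13 anomaly types + normal

model = r3d_18()           # pretrained weights can be loaded here
model.fc = nn.Linear(model.fc.in_features, NUM_ANOMALY_CLASSES)

# Freeze early blocks so only the later layers adapt to the anomaly data.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

clips = torch.randn(2, 3, 16, 112, 112)  # (batch, C, T, H, W)
labels = torch.tensor([0, 5])            # per-clip anomaly class ids
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```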
