Fully autonomous driving systems require fast detection and recognition of sensitive objects in the environment. In this context, intelligent vehicles should share their sensor data with computing platforms and/or other vehicles to detect objects beyond their own sensors' fields of view. However, the resulting huge volumes of data to be exchanged can be challenging to handle for standard communication technologies. In this paper, we evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate. The final objective is to identify the optimal setup that minimizes the amount of data to be distributed over the channel, with negligible degradation in object detection accuracy. To this end, we extend an existing object detection algorithm so that it can take as input camera images, LiDAR point clouds, or a combination of the two, and compare the accuracy of the different approaches on two realistic datasets. Our results show that, although sensor fusion always achieves more accurate detections, LiDAR-only inputs can obtain similar results for large objects while mitigating the burden on the channel.
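To make the channel-load argument concrete, the per-frame payload of each sensor configuration can be compared with a back-of-the-envelope estimate. The sketch below is illustrative only: the image resolution, point-cloud size, and per-point layout are assumptions loosely inspired by common automotive setups, not values taken from this paper, and real systems would compress both streams before transmission.

```python
import numpy as np

def payload_bytes(modality, img_shape=(1242, 375, 3), n_points=120_000):
    """Rough uncompressed per-frame payload for each sensor setup.

    Assumptions (hypothetical, for illustration):
    - RGB camera frame stored as uint8, one byte per channel;
    - LiDAR scan of n_points points, each with 4 float32 fields
      (x, y, z, intensity), i.e. 16 bytes per point.
    """
    image = int(np.prod(img_shape))   # bytes for one uint8 RGB frame
    lidar = n_points * 4 * 4          # bytes for one float32 point cloud
    if modality == "camera":
        return image
    if modality == "lidar":
        return lidar
    if modality == "fusion":
        return image + lidar
    raise ValueError(f"unknown modality: {modality}")

for m in ("camera", "lidar", "fusion"):
    print(f"{m:7s} ~ {payload_bytes(m) / 1e6:.2f} MB/frame")
```

Under these assumed numbers, fusion roughly sums the two single-modality payloads, which is the overhead that a LiDAR-only configuration avoids when its detection accuracy is close enough.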