
Sensor Fusion of Camera and Cloud Digital Twin Information for Intelligent Vehicles

Published by: Ziran Wang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





With the rapid development of intelligent vehicles and Advanced Driving Assistance Systems (ADAS), a mixed level of human driver engagement is involved in the transportation system. Visual guidance for drivers is essential under this situation to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel sensor fusion methodology, integrating camera images and Digital Twin knowledge from the cloud. The target vehicle's bounding box is drawn and matched by combining results of the object detector running on the ego vehicle with position information from the cloud. The best matching result, with a 79.2% accuracy under a 0.7 Intersection over Union (IoU) threshold, is obtained with the depth image serving as an additional feature source. Game engine-based simulation results also reveal that the visual guidance system could significantly improve driving safety in cooperation with the cloud Digital Twin system.
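A minimal sketch of the bounding-box matching idea described in the abstract: on-board detections are greedily paired with boxes derived from cloud Digital Twin positions, with a match counted only above the IoU threshold. All names (match_detections, iou) and the greedy strategy are illustrative assumptions; the paper's actual pipeline (including the depth-image feature) is not reproduced here.

```python
def iou(box_a, box_b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(detector_boxes, twin_boxes, iou_threshold=0.7):
    """Greedily match ego-vehicle detections to boxes projected from
    cloud Digital Twin positions; a pair counts only above the threshold."""
    matches, used = [], set()
    for i, d in enumerate(detector_boxes):
        best_j, best_iou = -1, 0.0
        for j, t in enumerate(twin_boxes):
            if j in used:
                continue
            score = iou(d, t)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j >= 0 and best_iou >= iou_threshold:
            matches.append((i, best_j, best_iou))
            used.add(best_j)
    return matches
```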


Read also

To navigate through urban roads, an automated vehicle must be able to perceive and recognize objects in a three-dimensional environment. A high-level contextual understanding of the surroundings is necessary to plan and execute accurate driving maneuvers. This paper presents an approach to fuse different sensory information, Light Detection and Ranging (lidar) scans and camera images. The output of a convolutional neural network (CNN) is used as a classifier to obtain the labels of the environment. The transfer of semantic information between the labelled image and the lidar point cloud is performed in four steps: initially, we use heuristic methods to associate probabilities to all the semantic classes contained in the labelled images. Then, the lidar points are corrected to compensate for the vehicle's motion, given the difference between the timestamps of each lidar scan and camera image. In a third step, we calculate the pixel coordinates in the corresponding camera image. In the last step we transfer the semantic information from the heuristic probability images to the lidar frame, while removing the lidar information that is not visible to the camera. We tested our approach on the Usyd Dataset [usyd_dataset], obtaining qualitative and quantitative results that demonstrate the validity of our probabilistic sensory fusion approach.
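A rough sketch of the projection-and-transfer steps (steps three and four above): motion-corrected lidar points are mapped into the camera image and the per-pixel class probabilities are copied onto each visible point. The matrix names (T_cam_lidar, K) and array shapes are assumptions, not the paper's notation.

```python
import numpy as np

def transfer_labels(points_lidar, prob_image, T_cam_lidar, K):
    """points_lidar: (N, 3) xyz points in the lidar frame.
    prob_image: (H, W, C) per-class probabilities from the CNN.
    T_cam_lidar: (4, 4) lidar-to-camera extrinsic transform.
    K: (3, 3) camera intrinsic matrix.
    Returns the visible points and their class-probability vectors."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])       # homogeneous coords (N, 4)
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]                # points in camera frame
    in_front = pts_cam[:, 2] > 0                             # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    pix = (K @ pts_cam.T).T                                  # perspective projection
    u = (pix[:, 0] / pix[:, 2]).astype(int)
    v = (pix[:, 1] / pix[:, 2]).astype(int)
    h, w, _ = prob_image.shape
    visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)        # drop points outside the image
    return points_lidar[in_front][visible], prob_image[v[visible], u[visible]]
```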
Tianci Yang, Chen Lv (2021)
By using various sensors to measure the surroundings and sharing local sensor information with the surrounding vehicles through wireless networks, connected and automated vehicles (CAVs) are expected to increase the safety, efficiency, and capacity of our transportation systems. However, the increasing usage of sensors has also increased the vulnerability of CAVs to sensor faults and adversarial attacks. Anomalous sensor values resulting from malicious cyberattacks or faulty sensors may cause severe consequences or even fatalities. In this paper, we increase the resilience of CAVs to faults and attacks by using multiple sensors for measuring the same physical variable to create redundancy. We exploit this redundancy and propose a sensor fusion algorithm for providing a robust estimate of the correct sensor information with bounded errors independent of the attack signals, and for attack detection and isolation. The proposed sensor fusion framework is applicable to a large class of security-critical Cyber-Physical Systems (CPSs). To minimize the performance degradation resulting from the usage of estimation for control, we provide an $H_{\infty}$ controller for CACC-equipped CAVs capable of stabilizing the closed-loop dynamics of each vehicle in the platoon while reducing the joint effect of estimation errors and communication channel noise on the tracking performance and string behavior of the vehicle platoon. Numerical examples are presented to illustrate the effectiveness of our methods.
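For illustration only: one classical way to get a bounded-error estimate from redundant sensors is a median-style fusion, which tolerates fewer than half of the sensors being faulty or attacked. The paper's fusion and detection/isolation scheme is more involved; the function names and the noise-bound test below are hypothetical.

```python
import numpy as np

def fuse_redundant(measurements, noise_bound):
    """measurements: 1-D array of redundant readings of the same physical variable.
    noise_bound: assumed bound on benign sensor noise.
    Returns a robust estimate and the indices flagged as anomalous."""
    estimate = float(np.median(measurements))
    # Flag sensors whose readings deviate from the robust estimate by more
    # than the benign noise bound as potentially faulty or attacked.
    flagged = [i for i, m in enumerate(measurements)
               if abs(m - estimate) > noise_bound]
    return estimate, flagged

# Example: four sensors measure the lead vehicle's distance; one is spoofed.
est, bad = fuse_redundant(np.array([25.1, 24.9, 25.0, 40.0]), noise_bound=0.5)
```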
A large number of sensors have been deployed in recent years in various setups, and their data is readily available in dedicated databases or in the cloud. Of particular interest are real-time data processing and 3D visualization in web-based user interfaces that facilitate spatial information understanding and sharing, hence helping the decision-making process for all the parties involved. In this research, we provide a prototype system for near real-time, continuous X3D-based visualization of processed sensor data for two significant applications: thermal monitoring for residential/commercial buildings and nitrogen cycle monitoring in water beds for aquaponics systems. As the sensors are sparsely placed in each application and collect data over long periods (of up to one year), we employ a Finite Differences Method and a Neural Networks model to approximate the data distribution in the entire volume.
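A hedged sketch of the interpolation idea: fit a small neural network to sparse sensor readings and query it on a dense grid for visualization. scikit-learn's MLPRegressor is used here as a stand-in for the paper's model, and all data below is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

sensor_xyz = np.random.rand(20, 3)           # sparse sensor locations in the volume
sensor_temp = 20 + 5 * sensor_xyz[:, 2]      # synthetic temperature readings
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(sensor_xyz, sensor_temp)

# Dense grid over the monitored volume, ready for X3D-based visualization.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 10)] * 3), axis=-1).reshape(-1, 3)
temp_field = model.predict(grid)
```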
We consider the fusion of two aerodynamic data sets originating from physical or computer experiments of differing fidelity. We specifically address the fusion of: 1) noisy and incomplete fields from wind tunnel measurements and 2) deterministic but biased fields from numerical simulations. These two data sources are fused in order to estimate the true field that best matches the measured quantities, which serve as the ground truth. For example, two sources of pressure fields about an aircraft are fused based on measured forces and moments from a wind-tunnel experiment. A fundamental challenge in this problem is that the true field is unknown and cannot be estimated with 100% certainty. We employ a Bayesian framework to infer the true fields conditioned on measured quantities of interest; essentially, we perform a statistical correction to the data. The fused data may then be used to construct more accurate surrogate models suitable for early stages of aerospace design. We also introduce an extension of the Proper Orthogonal Decomposition (POD) with constraints to solve the same problem. Both methods are demonstrated on fusing the pressure distributions for flow past the RAE2822 airfoil and the Common Research Model wing at transonic conditions. A comparison of both methods reveals that the Bayesian method is more robust when data is scarce, while also accounting for uncertainties in the data. Furthermore, given adequate data, the POD-based and Bayesian approaches lead to similar results.
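A greatly simplified sketch of combining two estimates of the same field: a precision-weighted (inverse-variance) blend of the noisy measured field and a bias-corrected simulated field. The paper's Bayesian formulation conditions on integrated quantities such as forces and moments and is considerably richer; this only illustrates the weighting intuition, and all names are assumptions.

```python
import numpy as np

def fuse_fields(measured, var_measured, simulated, bias, var_simulated):
    """All arrays share the same shape (e.g., pressure at surface points)."""
    corrected = simulated - bias                  # remove the estimated simulation bias
    w_m = 1.0 / var_measured                      # precision of the measurements
    w_s = 1.0 / var_simulated                     # precision of the corrected simulation
    fused = (w_m * measured + w_s * corrected) / (w_m + w_s)
    fused_var = 1.0 / (w_m + w_s)                 # fused field is more certain than either source
    return fused, fused_var
```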
The technology of dynamic map fusion among networked vehicles has been developed to enlarge sensing ranges and improve sensing accuracies for individual vehicles. This paper proposes a federated learning (FL) based dynamic map fusion framework to achieve high map quality despite unknown numbers of objects in fields of view (FoVs), various sensing and model uncertainties, and missing data labels for online learning. The novelty of this work is threefold: (1) developing a three-stage fusion scheme to predict the number of objects effectively and to fuse multiple local maps with fidelity scores; (2) developing an FL algorithm which fine-tunes feature models (i.e., representation learning networks for feature extraction) distributively by aggregating model parameters; (3) developing a knowledge distillation method to generate FL training labels when data labels are unavailable. The proposed framework is implemented in the Car Learning to Act (CARLA) simulation platform. Extensive experimental results are provided to verify the superior performance and robustness of the developed map fusion and FL schemes.
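A minimal sketch related to item (2): federated averaging of feature-model parameters across vehicles, where each vehicle fine-tunes locally and only parameter tensors (not raw sensor data) are aggregated. The weighting by local sample count is a common FedAvg choice and not necessarily the paper's exact rule; the function name is hypothetical.

```python
import numpy as np

def federated_average(local_params, sample_counts):
    """local_params: list of dicts mapping layer name -> np.ndarray (one dict per vehicle).
    sample_counts: number of local training samples contributed by each vehicle.
    Returns the aggregated global parameters."""
    total = float(sum(sample_counts))
    global_params = {}
    for name in local_params[0]:
        # Weighted average of each parameter tensor across all vehicles.
        global_params[name] = sum(
            (n / total) * params[name] for params, n in zip(local_params, sample_counts)
        )
    return global_params
```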