
Navigation of a Self-Driving Vehicle Using One Fiducial Marker

Published by: Yibo Liu
Publication date: 2021
Language: English





Navigation using only one marker, which contains four artificial features, is a challenging task, since camera pose estimation from only four coplanar points suffers from the rotational ambiguity problem in real-world applications. This paper presents a framework of vision-based navigation for a self-driving vehicle equipped with multiple cameras and a wheel odometer. A multi-camera setup is presented in which the camera cluster has 360-degree vision, so that the framework requires only one planar marker. A Kalman-Filter-based method is introduced to fuse the multi-camera measurements with the wheel odometry. Furthermore, an algorithm is proposed that resolves the rotational ambiguity problem by using the Kalman Filter prediction as additional information. Finally, the lateral and longitudinal controllers are presented. Experiments are conducted to illustrate the effectiveness of the proposed approach.
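The fusion and disambiguation steps can be illustrated with a minimal sketch. The Python code below is an illustration under stated assumptions, not the authors' implementation: a planar Kalman Filter is propagated with wheel odometry, updated with a marker-derived pose, and its prediction is used to choose between the two candidate poses that a planar-marker solver typically returns. The state layout, noise handling, and all names are assumptions.

```python
# Sketch: wheel-odometry + marker-pose Kalman Filter with ambiguity resolution.
# Not the paper's code; state is a simple planar pose [x, y, yaw].
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PlanarKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R  # state: [x, y, yaw]

    def predict(self, v, omega, dt):
        """Propagate the state with wheel-odometry speed v and yaw rate omega."""
        x, y, yaw = self.x
        self.x = np.array([x + v * dt * np.cos(yaw),
                           y + v * dt * np.sin(yaw),
                           wrap(yaw + omega * dt)])
        F = np.array([[1, 0, -v * dt * np.sin(yaw)],
                      [0, 1,  v * dt * np.cos(yaw)],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q
        return self.x

    def update(self, z):
        """Standard KF update with a full-pose measurement z = [x, y, yaw]."""
        H = np.eye(3)
        innov = z - self.x
        innov[2] = wrap(innov[2])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

def resolve_ambiguity(candidates, predicted_yaw):
    """Pick the marker-pose candidate whose yaw agrees best with the KF prediction."""
    return min(candidates, key=lambda z: abs(wrap(z[2] - predicted_yaw)))
```

In practice, each camera in the 360-degree cluster would contribute its own marker-derived pose measurement, with the ambiguity check applied per detection before the update step.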




Read also

Robots and self-driving vehicles face a number of challenges when navigating through real environments. Successful navigation in dynamic environments requires prioritizing subtasks and monitoring resources. Animals are under similar constraints. It has been shown that the neuromodulator serotonin regulates impulsiveness and patience in animals. In the present paper, we take inspiration from the serotonergic system and apply it to the task of robot navigation. In a set of outdoor experiments, we show how changing the level of patience can affect the amount of time the robot will spend searching for a desired location. To navigate GPS-compromised environments, we introduce a deep reinforcement learning paradigm in which the robot learns to follow sidewalks. This may further regulate a tradeoff between a smooth long route and a rough shorter route. Using patience as a parameter may be beneficial for autonomous systems under time pressure.
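As a rough illustration of the patience idea (a hypothetical sketch, not the paper's code), a single parameter can bound how long the robot keeps searching before it falls back to an alternative route. The function names and time budget below are illustrative only.

```python
# Sketch: a "patience" budget that limits search time before a fallback behavior.
import time

def search_with_patience(search_step, patience_s, fallback):
    """Run search_step() until it reports success or the patience budget runs out,
    then invoke fallback (e.g. switch to the rougher but shorter route)."""
    deadline = time.monotonic() + patience_s
    while time.monotonic() < deadline:
        if search_step():          # one sensing/driving iteration; True when goal found
            return "goal_reached"
    return fallback()
```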
Multi-vehicle interaction behavior classification and analysis offer in-depth knowledge to make efficient decisions for autonomous vehicles. This paper aims to cluster a wide range of driving encounter scenarios based only on multi-vehicle GPS trajectories. Towards this end, we propose a generic unsupervised learning framework comprising two layers: a feature representation layer and a clustering layer. In the feature representation layer, we combine deep autoencoders with a distance-based measure to map the sequential observations of driving encounters into a computationally tractable space that allows quantifying the spatiotemporal interaction characteristics of two vehicles. The clustering algorithm is then applied to the extracted representations to gather homogeneous driving encounters into groups. Our proposed generic framework is then evaluated using 2,568 naturalistic driving encounters. Experimental results demonstrate that our proposed generic framework incorporated with unsupervised learning can cluster multi-trajectory data into distinct groups. These clustering results could benefit decision-making policy analysis and design for autonomous vehicles.
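A minimal sketch of the encode-then-cluster pipeline is shown below, under clearly stated assumptions: the paper uses deep autoencoders with a distance-based measure as the encoder, whereas here a hand-crafted relative-distance feature stands in for the learned representation, and k-means stands in for the clustering layer. All function names are illustrative.

```python
# Sketch: map two-vehicle GPS trajectories to fixed-length features, then cluster.
# The hand-crafted feature below is a stand-in for the paper's learned autoencoder features.
import numpy as np
from sklearn.cluster import KMeans

def encode_encounter(traj_a, traj_b):
    """traj_a, traj_b: (T, 2) arrays of positions sampled at the same timestamps."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)              # inter-vehicle distance over time
    return np.array([d.mean(), d.min(), d.max(), d.std()])   # fixed-length representation

def cluster_encounters(encounters, n_clusters=4):
    """encounters: list of (traj_a, traj_b) pairs -> one cluster label per encounter."""
    X = np.stack([encode_encounter(a, b) for a, b in encounters])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```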
In autonomous driving, navigation through unsignaled intersections with many traffic participants moving around is a challenging task. To provide a solution to this problem, we propose a novel branched network, G-CIL, for navigation policy learning. Specifically, we first represent such dynamic environments as graph-structured data and propose an effective strategy for edge definition to aggregate surrounding information for the ego-vehicle. Graph convolutional neural networks are then used as the perception module to capture global and geometric features from the environment. To generate a safe and efficient navigation policy, we further incorporate a conditional imitation learning algorithm to learn driving behaviors directly from expert demonstrations. Our proposed network is capable of handling a varying number of surrounding vehicles and generating optimal control actions (e.g., steering angle and throttle) according to the given high-level commands (e.g., turn left towards the global goal). Evaluations on unsignaled intersections with various traffic densities demonstrate that our end-to-end trainable neural network outperforms the baselines with a higher success rate and shorter navigation time.
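The sketch below is an assumption-based illustration, not the G-CIL architecture: a single graph-convolution step aggregates information from surrounding vehicles into the ego node, and a command-conditioned branch maps the ego feature to control outputs. Weights are random placeholders and the layer sizes are arbitrary.

```python
# Sketch: one GCN layer over ego + surrounding vehicles, then a command-selected branch.
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def branched_policy(A, H, W_gcn, branch_weights, command):
    """Select the branch for the high-level command (e.g. 'left', 'straight', 'right')
    and map the ego node's aggregated feature (node 0) to [steering, throttle]."""
    H1 = gcn_layer(A, H, W_gcn)
    return H1[0] @ branch_weights[command]   # (feat,) @ (feat, 2) -> [steer, throttle]
```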
Purpose: Implanted fiducial markers are often used in radiotherapy to facilitate accurate visualization and localization of tumors. Typically, such markers are used to aid daily patient positioning and to verify the target's position during treatment. This work introduces a novel, automated method for identifying fiducial markers in planar x-ray imaging. Methods: In brief, the method consists of automated filtration and reconstruction steps that generate 3D templates of marker positions. The normalized cross-correlation was then used to identify fiducial markers in projection images. To quantify the accuracy of the technique, a phantom study was performed. 75 pre-treatment CBCT scans of 15 pancreatic cancer patients were analyzed to test the automated technique under real-life conditions, including several challenging scenarios for tracking fiducial markers. Results: In phantom and patient studies, the method automatically tracked visible marker clusters in 100% of projection images. For scans in which a phantom exhibited 0D, 1D, and 3D motion, the automated technique showed median errors of 39 µm, 53 µm, and 93 µm, respectively. Human precision was worse in comparison. Automated tracking was performed accurately despite the presence of other metallic objects. Additionally, transient differences in the cross-correlation score identified instances where markers disappeared from view. Conclusions: A novel, automated method for producing dynamic templates of fiducial marker clusters has been developed. Production of these templates automatically provides measurements of tumor motion that occurred during the CBCT scan that was used to produce them. Additionally, using these templates with intra-fractional images could potentially allow for more robust real-time target tracking in radiotherapy.
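The core matching step can be sketched as follows. This is an illustration under assumptions, not the paper's implementation: the automated filtration and 3D template-generation stages are assumed away, and a pre-rendered template is matched against a projection image by normalized cross-correlation, with a score threshold flagging frames where the marker drops out of view.

```python
# Sketch: locate a fiducial marker in a projection image via normalized cross-correlation.
import numpy as np
from skimage.feature import match_template

def locate_marker(projection, template, threshold=0.5):
    """Return ((row, col), score) of the best NCC match, or (None, score) if the
    peak score falls below the threshold (e.g. the marker has left the field of view)."""
    ncc = match_template(projection, template, pad_input=True)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    score = ncc[peak]
    return (peak, score) if score >= threshold else (None, score)
```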
Motion blur can impede marker detection and marker-based pose estimation, which is common in real-world robotic applications involving fiducial markers. To solve this problem, we propose a novel lightweight generative adversarial network (GAN), Ghost-DeblurGAN, for real-time motion deblurring. Furthermore, a new large-scale dataset, YorkTag, is proposed, which provides pairs of sharp/blurred images containing fiducial markers and is used to train and to qualitatively and quantitatively evaluate our model. Experimental results demonstrate that when applied along with fiducial marker systems to motion-blurred images, Ghost-DeblurGAN improves marker detection significantly and mitigates the rotational ambiguity problem in marker-based pose estimation.
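A minimal deblur-then-detect pipeline is sketched below under stated assumptions: `deblur_model` is a placeholder for any image-to-image deblurring callable (such as a trained Ghost-DeblurGAN inference wrapper, which is not shown here), and OpenCV's ArUco module stands in for the fiducial marker system used in the paper.

```python
# Sketch: optionally deblur a frame, then run fiducial marker detection on it.
import cv2

def detect_markers_with_deblur(frame_bgr, deblur_model=None):
    """Return (corners, ids) of detected markers; deblur_model is a placeholder callable."""
    img = deblur_model(frame_bgr) if deblur_model is not None else frame_bgr
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    return corners, ids
```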