
Accurate position tracking with a single UWB anchor

Published by: Yanjun Cao
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Accurate localization and tracking are fundamental requirements for robotic applications. Localization systems such as GPS, optical tracking, and simultaneous localization and mapping (SLAM) are used for daily life activities, research, and commercial applications. Ultra-wideband (UWB) technology provides another avenue for accurately locating devices both indoors and outdoors. In this paper, we study a localization solution with a single UWB anchor, instead of the traditional multi-anchor setup. Besides the challenge of a single UWB ranging source, the only other sensor we require is a low-cost 9-DoF inertial measurement unit (IMU). Under this configuration, we propose continuously monitoring UWB range changes to estimate the robot's speed while it moves along a line. Combining this speed estimate with orientation estimates from the IMU makes the system temporally observable. We use an extended Kalman filter (EKF) to estimate the pose of the robot. With our solution, we can effectively correct the accumulated error and maintain accurate tracking of a moving robot.
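The geometric idea behind speed estimation from a single ranging source can be illustrated with a small sketch. For a point moving along a straight line at constant speed, the squared range to a fixed anchor is quadratic in time, so the second difference of three equally spaced squared ranges equals 2·v²·Δt². The helper below is a hypothetical illustration of that identity, not the paper's actual estimator (which fuses the result with IMU orientation in an EKF):

```python
import math

def speed_from_ranges(r_prev, r_curr, r_next, dt):
    """Estimate speed from three equally spaced UWB range samples,
    assuming straight-line motion at constant speed over the window.

    For p(t) on a line, r(t)^2 is quadratic in t, so the second
    difference of squared ranges equals 2 * v^2 * dt^2 exactly.
    """
    second_diff = r_prev ** 2 - 2.0 * r_curr ** 2 + r_next ** 2
    if second_diff <= 0.0:  # measurement noise can push it negative
        return 0.0
    return math.sqrt(second_diff / (2.0 * dt ** 2))

# Simulated check: anchor at (0, 3), robot on the x-axis at 1.2 m/s.
dt = 0.1
r = [math.hypot(-2.0 + 1.2 * k * dt, 3.0) for k in range(3)]
v_est = speed_from_ranges(r[0], r[1], r[2], dt)  # recovers ~1.2 m/s
```

Note that the identity is independent of the anchor's perpendicular distance to the line, which is what makes a single anchor sufficient for instantaneous speed.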




Read also

Visual object tracking is an important application of computer vision. Recently, Siamese-based trackers have achieved good accuracy. However, most Siamese-based trackers are not efficient, as they exhaustively search potential object locations to define anchors and then classify each anchor (i.e., a bounding box). This paper develops the first Anchor Free Siamese Network (AFSN). Specifically, a target object is defined by a bounding-box center, a tracking offset, and an object size. All three are regressed by the Siamese network with no additional classification or region proposal, and only once per frame. We also tune the stride and receptive field of the Siamese network, and further perform ablation experiments to quantitatively illustrate the effectiveness of AFSN. We evaluate AFSN on the five most commonly used benchmarks and compare it to the best anchor-based trackers with source code available for each benchmark. AFSN is 3-425 times faster than these best anchor-based trackers. AFSN is also 5.97% to 12.4% more accurate in terms of all metrics on the benchmark sets OTB2015, VOT2015, VOT2016, VOT2018, and TrackingNet, except that SiamRPN++ is 4% better than AFSN in terms of Expected Average Overlap (EAO) on VOT2018 (but SiamRPN++ is 3.9 times slower).
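Decoding a box from center, offset, and size maps can be sketched as follows. This is a minimal illustration of the anchor-free idea under assumed output shapes (a (H, W) center response map and (2, H, W) offset/size maps); the function name and layout are hypothetical, not AFSN's actual interface:

```python
import numpy as np

def decode_afsn_box(center_map, offset_map, size_map, stride=8):
    """Decode one box from anchor-free outputs (illustrative sketch).

    center_map: (H, W) response map; offset_map, size_map: (2, H, W).
    """
    # The peak of the center response picks the coarse grid location.
    cy, cx = np.unravel_index(np.argmax(center_map), center_map.shape)
    ox, oy = offset_map[:, cy, cx]  # sub-stride center refinement
    w, h = size_map[:, cy, cx]      # size regressed directly, no anchors
    x = cx * stride + ox
    y = cy * stride + oy
    return (x - w / 2, y - h / 2, w, h)  # (left, top, width, height)
```

A single argmax plus two map lookups replaces classifying every anchor box, which is where the speed advantage over anchor-based trackers comes from.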
Soft pneumatic legged robots show promise in their ability to traverse a range of different types of terrain, including natural unstructured terrain met in applications like precision agriculture. They can adapt their body morphology to the intricacies of the terrain at hand, thus enabling robust and resilient locomotion. In this paper we capitalize upon recent developments in soft pneumatic legged robots to introduce a closed-loop trajectory tracking control scheme for operation over flat ground. Closed-loop pneumatic actuation feedback is achieved via a compact and portable pneumatic regulation board. Experimental results reveal that our soft legged robot can precisely control its body height and orientation while in quasi-static operation based on a geometric model. The robot can track both straight-line and curved trajectories as well as variable-height trajectories. This work lays the basis for enabling autonomous navigation for soft legged robots.
Grasping is an essential skill for robots to interact with humans and the environment. In this paper, we build a vision-based, robust, and real-time robotic grasp approach with a fully convolutional neural network. The main component of our approach is a grasp detection network with oriented anchor boxes as detection priors. Because the orientation of detected grasps is significant, determining the rotation angle of the gripper, we propose the Orientation Anchor Box Mechanism to regress the grasp angle based on predefined priors instead of classification or prior-free regression. With oriented anchor boxes, grasps can be predicted more accurately and efficiently. Besides, to accelerate network training and further improve the performance of angle regression, Angle Matching is proposed during training instead of Jaccard Index Matching. Five-fold cross-validation results demonstrate that our proposed algorithm achieves accuracies of 98.8% and 97.8% in image-wise and object-wise splits respectively, and our detection algorithm runs at 67 FPS on a GTX 1080Ti, outperforming all current state-of-the-art grasp detection algorithms on the Cornell Dataset in both speed and accuracy. Robotic experiments demonstrate robustness and generalization to unseen objects and real-world environments, with average success rates of 90.0% and 84.2% on familiar and unseen objects respectively on the Baxter robot platform.
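The core of angle-based matching can be sketched in a few lines: assign each ground-truth grasp to the predefined oriented anchor whose angle is closest, accounting for the fact that a grasp at angle θ is equivalent to one at θ + π. This is a hedged illustration of the idea, not the paper's exact matching rule:

```python
import numpy as np

def angle_matching(anchor_angles, gt_angle):
    """Return the index of the oriented anchor whose predefined angle is
    nearest the ground-truth grasp angle (radians, symmetric modulo pi).
    Illustrative sketch of matching by angle instead of Jaccard index.
    """
    diff = np.abs(np.asarray(anchor_angles) - gt_angle) % np.pi
    diff = np.minimum(diff, np.pi - diff)  # grasp at theta == theta + pi
    return int(np.argmin(diff))

# Four predefined anchor orientations at 45-degree steps.
anchors = np.deg2rad([0.0, 45.0, 90.0, 135.0])
idx = angle_matching(anchors, np.deg2rad(100.0))  # nearest is 90 degrees
```

Matching directly on angle avoids computing rotated-box overlaps during training, which is one plausible reason it accelerates training relative to Jaccard matching.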
Qin Shi, Xiaowei Cui, Wei Li (2019)
Navigation applications relying on the Global Navigation Satellite System (GNSS) are limited in indoor environments and in GNSS-denied outdoor terrains such as dense urban areas or forests. In this paper, we present a novel accurate, robust, and low-cost GNSS-independent navigation system composed of a monocular camera and ultra-wideband (UWB) transceivers. Visual techniques have achieved excellent results when computing the incremental motion of the sensor, and UWB methods have proved to provide promising localization accuracy due to the high time resolution of UWB ranging signals. However, monocular visual techniques suffer from scale ambiguity and are not suitable for applications requiring metric results, and UWB methods assume that the positions of the UWB anchors are pre-calibrated and known, thus precluding their application in unknown and challenging environments. To this end, we advocate leveraging the monocular camera and UWB to create a map of visual features and UWB anchors. We propose a visual-UWB Simultaneous Localization and Mapping (SLAM) algorithm which tightly combines visual and UWB measurements to form a joint non-linear optimization problem on a Lie manifold. The 6 Degrees of Freedom (DoF) state of the vehicles and the map are estimated by minimizing the UWB ranging errors and landmark reprojection errors. Our navigation system starts with an exploratory task which performs real-time visual-UWB SLAM to obtain a global map, followed by a navigation task that reuses this global map. The tasks can be performed by different vehicles, in terms of equipped sensors and payload capability, in a heterogeneous team. We validate our system on public datasets, achieving typical centimeter accuracy and 0.1% scale error.
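The joint objective can be sketched by stacking the two residual types for one frame. The sketch below is heavily simplified under stated assumptions: a position-only state with identity rotation and plain Euclidean parameters, rather than the paper's full 6-DoF Lie-manifold formulation; the function and its arguments are hypothetical:

```python
import numpy as np

def joint_residuals(position, anchors, ranges, K, landmarks, pixels):
    """Stack UWB range residuals and pinhole reprojection residuals for
    one frame. Simplified sketch: position-only state, identity rotation.
    """
    # UWB term: predicted anchor distance minus measured range.
    res = [np.linalg.norm(position - a) - d for a, d in zip(anchors, ranges)]
    # Visual term: pinhole projection of each landmark minus its pixel.
    for X, uv in zip(landmarks, pixels):
        x = K @ (X - position)  # point in the (identity-rotated) camera frame
        res.extend(x[:2] / x[2] - uv)
    return np.array(res)
```

A non-linear least-squares solver driving these stacked residuals to zero is what couples the two sensor modalities: the UWB terms pin down metric scale, while the reprojection terms constrain fine-grained motion.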
The design and development of swarms of micro-aerial vehicles (MAVs) has recently gained significant traction. Collaborative aerial swarms have potential applications in areas as diverse as surveillance and monitoring, inventory management, search and rescue, or the entertainment industry. Swarm intelligence has, by definition, a distributed nature. Yet performing experiments in truly distributed systems is not always possible, as much of the underlying ecosystem employed requires some sort of central control. Indeed, in experimental proofs of concept, most research relies on more traditional connectivity solutions and centralized approaches. External localization solutions, such as motion capture (MOCAP) systems, visual markers, or ultra-wideband (UWB) anchors, are often used. Alternatively, intra-swarm solutions are often limited in terms of, e.g., range or field of view. Research and development has been supported by platforms such as the e-puck, the Kilobot, or the Crazyflie quadrotors. We believe there is a need for inexpensive platforms such as the Crazyflie with more advanced onboard processing capabilities and sensors, while offering scalability and robust communication and localization solutions. In the following, we present a platform, currently under development, for research and development in aerial swarms and for building towards large-scale swarms of autonomous MAVs, leveraging Wi-Fi mesh connectivity and the distributed ROS2 middleware together with UWB ranging and communication for situated communication. The platform is based on the Ryze Tello drone, a Raspberry Pi Zero W as a companion computer together with a camera module, and a Decawave DWM1001 UWB module for ranging and basic communication.