
Low-viewpoint forest depth dataset for sparse rover swarms

 Added by Chaoyue Niu
 Publication date 2020
Language: English





Rapid progress in embedded computing hardware increasingly enables on-board image processing on small robots. This development opens the path to replacing costly sensors with sophisticated computer vision techniques. A case in point is the prediction of scene depth information from a monocular camera for autonomous navigation. Motivated by the aim to develop a robot swarm suitable for sensing, monitoring, and search applications in forests, we have collected a set of RGB images and corresponding depth maps. Over 100k images were recorded with a custom rig from the perspective of a small ground rover moving through a forest. Taken under different weather and lighting conditions, the images include scenes with grass, bushes, standing and fallen trees, tree branches, leaves, and dirt. In addition, GPS, IMU, and wheel encoder data were recorded. From the calibrated, synchronized, aligned, and timestamped frames, about 9700 image-depth map pairs were selected for sharpness and variety. We provide this dataset to the community to fill a need identified in our own research and hope it will accelerate progress in robots navigating the challenging forest environment. This paper describes our custom hardware and methodology for collecting the data, the subsequent processing and quality of the data, and how to access it.
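
For readers working with such image-depth pairs, the sketch below illustrates one common way to filter frames by sharpness using the variance of the Laplacian. The directory layout, file naming, and threshold are illustrative assumptions, not the dataset's actual structure or selection procedure.

```python
# Minimal sketch of sharpness-based frame selection, assuming RGB frames and aligned
# depth maps stored as "rgb/<timestamp>.png" and "depth/<timestamp>.png".
# Paths, threshold, and file layout are illustrative, not the dataset's actual layout.
import glob
import os
import cv2

SHARPNESS_THRESHOLD = 100.0  # assumed cut-off; tune per camera and scene

def sharpness(image_path):
    """Variance of the Laplacian: a common proxy for image sharpness."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_pairs(rgb_dir="rgb", depth_dir="depth"):
    pairs = []
    for rgb_path in sorted(glob.glob(os.path.join(rgb_dir, "*.png"))):
        stamp = os.path.splitext(os.path.basename(rgb_path))[0]
        depth_path = os.path.join(depth_dir, stamp + ".png")
        if os.path.exists(depth_path) and sharpness(rgb_path) > SHARPNESS_THRESHOLD:
            pairs.append((rgb_path, depth_path))
    return pairs

if __name__ == "__main__":
    print(f"kept {len(select_pairs())} image-depth pairs")
```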



Related research


Robot swarms to date are not prepared for autonomous navigation, such as path planning and obstacle detection, on the forest floor, and cannot yet achieve this at low cost. The development of depth sensing and embedded computing hardware paves the way for swarms of terrestrial robots. The goal of this research is to improve this situation by developing a low-cost vision system that lets small ground robots rapidly perceive terrain. We develop two depth estimation models and evaluate their performance on the Raspberry Pi 4 and Jetson Nano in terms of accuracy, runtime, and model size, as well as the memory consumption, power draw, temperature, and cost of the two embedded on-board computers. Our research demonstrates that an auto-encoder network deployed on the Raspberry Pi 4 runs at a power consumption of 3.4 W, a memory consumption of about 200 MB, and a mean runtime of 13 ms, which meets our requirements for a low-cost robot swarm. Moreover, our analysis indicates that a multi-scale deep network performs better at predicting depth maps from RGB images blurred by camera motion. This paper mainly describes the depth estimation models trained on our own dataset recorded in a forest, and their performance on embedded on-board computers.
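
As a rough illustration of the kind of model and benchmark involved, the sketch below defines a small encoder-decoder (auto-encoder-style) depth network in PyTorch and times its forward pass. The layer sizes, input resolution, and timing loop are assumptions for illustration and do not reproduce the paper's actual architectures or measurements.

```python
# Minimal sketch of an auto-encoder-style monocular depth network plus a runtime
# measurement loop. All sizes below are illustrative assumptions.
import time
import torch
import torch.nn as nn

class TinyDepthAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),  # 1-channel depth map
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthAutoencoder().eval()
x = torch.rand(1, 3, 224, 224)  # assumed input resolution

with torch.no_grad():
    for _ in range(10):          # warm-up
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    print(f"mean runtime: {(time.perf_counter() - start) / runs * 1000:.1f} ms")
```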
While an exciting diversity of new imaging devices is emerging that could dramatically improve robotic perception, the challenges of calibrating and interpreting these cameras have limited their uptake in the robotics community. In this work we generalise techniques from unsupervised learning to allow a robot to autonomously interpret new kinds of cameras. We consider emerging sparse light field (LF) cameras, which capture a subset of the 4D LF function describing the set of light rays passing through a plane. We introduce a generalised encoding of sparse LFs that allows unsupervised learning of odometry and depth. We demonstrate the proposed approach outperforming monocular and conventional techniques for dealing with 4D imagery, yielding more accurate odometry and depth maps and delivering these with metric scale. We anticipate our technique to generalise to a broad class of LF and sparse LF cameras, and to enable unsupervised recalibration for coping with shifts in camera behaviour over the lifetime of a robot. This work represents a first step toward streamlining the integration of new kinds of imaging devices in robotics applications.
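
One plausible way to feed a sparse light field to a learning pipeline is to stack the available sub-aperture views, together with per-view validity masks, into a single tensor. The sketch below shows this idea only as an illustrative assumption; it is not the generalised encoding proposed in the work.

```python
# Illustrative tensorisation of a sparse light field: each grid cell contributes its
# RGB view (zeros if missing) plus a validity channel. Not the paper's encoding.
import numpy as np

def encode_sparse_lf(views, grid_size=(3, 3), height=256, width=256):
    """views: dict mapping (u, v) grid index -> height x width x 3 float image."""
    rows, cols = grid_size
    channels = []
    for u in range(rows):
        for v in range(cols):
            img = views.get((u, v), np.zeros((height, width, 3), dtype=np.float32))
            mask_value = 1.0 if (u, v) in views else 0.0
            mask = np.full((height, width, 1), mask_value, dtype=np.float32)
            channels.append(np.concatenate([img, mask], axis=2))
    # H x W x (grid cells * 4) tensor; missing views are zero with mask 0
    return np.concatenate(channels, axis=2)

# Example: only three of the nine views of a 3x3 sparse LF camera are available
views = {(0, 0): np.random.rand(256, 256, 3).astype(np.float32),
         (1, 1): np.random.rand(256, 256, 3).astype(np.float32),
         (2, 2): np.random.rand(256, 256, 3).astype(np.float32)}
print(encode_sparse_lf(views).shape)  # (256, 256, 36)
```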
Tad Hogg (2021)
Ultrasound can power implanted medical devices. This paper evaluates its feasibility for microscopic robots in tissue that mechanically harvest energy using pistons. At these sizes, viscous drag dominates the piston motion and acoustic attenuation limits how far power can reach. Combining these factors shows that frequencies around 100 kHz can deliver hundreds of picowatts to well-separated micron-size robots in low-attenuation tissues within about 10 cm of the skin. However, applications of microscopic robots could involve large numbers, in which case the robots themselves significantly increase acoustic attenuation. Robots can mitigate this attenuation using cooperative swarm behaviors, with trade-offs among individual power, group performance, and the complexity of the robot controllers. With such mitigating behaviors, acoustic power can be useful for swarms of a few hundred billion robots in the body, each using tens of picowatts on average and able to tolerate significant variability in available power, e.g., as robots in the bloodstream move from near the skin to deep within the body, or from low- to high-attenuation tissue such as the lungs.
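
The frequency trade-off can be illustrated with the common dB/(cm·MHz) tissue attenuation model; the sketch below uses an illustrative textbook-style coefficient, not the paper's parameters, to show why lower frequencies reach deeper.

```python
# Back-of-the-envelope sketch of acoustic attenuation versus frequency and depth,
# using the common dB/(cm*MHz) tissue attenuation model. The coefficient below is
# an illustrative textbook-style value, not taken from the paper.
def transmitted_fraction(freq_mhz, depth_cm, alpha_db_per_cm_mhz=0.5):
    """Fraction of acoustic intensity remaining after one-way travel through tissue."""
    attenuation_db = alpha_db_per_cm_mhz * freq_mhz * depth_cm
    return 10 ** (-attenuation_db / 10)

for f in (0.1, 1.0, 10.0):  # MHz
    print(f"{f:>4} MHz, 10 cm deep: {transmitted_fraction(f, 10):.3f} of intensity remains")
```

With these assumed values, roughly 89% of the intensity survives 10 cm of tissue at 100 kHz, compared with about 32% at 1 MHz and essentially none at 10 MHz, consistent with the preference for frequencies around 100 kHz.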
In addition to conventional ground rovers, the Mars 2020 mission will send a helicopter to Mars. The copter's high-resolution data helps the rover identify small hazards such as steps and pointy rocks, as well as providing rich textural information useful for predicting perception performance. In this paper, we consider a three-agent system composed of a Mars rover, copter, and orbiter. The objective is to provide good localization to the rover by selecting an optimal path that minimizes the accumulation of localization uncertainty during the rover's traverse. To achieve this goal, we quantify localizability as a goodness measure associated with the map, and conduct a joint-space search over the rover's path and the copter's perceptual actions given prior information from the orbiter. We jointly address where the copter should map and where the rover should drive using the proposed iterative copter-rover path planner. We conducted numerical simulations using a map of the Mars 2020 landing site to demonstrate the effectiveness of the proposed planner.
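
To make the alternating structure concrete, the sketch below iterates between planning a minimum-uncertainty rover path over a grid and letting the copter "map" (i.e., reduce the uncertainty of) the most uncertain cells along that path. The grid, cost model, and update rule are illustrative assumptions, not the paper's planner.

```python
# Minimal sketch of an alternating copter-rover planning loop on a 2D grid.
# The uncertainty map, copter budget, and improvement factor are assumptions.
import heapq
import numpy as np

def plan_path(cost, start, goal):
    """Dijkstra over a 2D cost grid; returns a list of cells from start to goal."""
    h, w = cost.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

uncertainty = np.random.rand(20, 20)          # prior localizability cost (e.g. from orbiter data)
start, goal, copter_budget = (0, 0), (19, 19), 5

for _ in range(3):                            # alternate rover planning and copter mapping
    path = plan_path(uncertainty, start, goal)
    worst = sorted(path, key=lambda c: uncertainty[c], reverse=True)[:copter_budget]
    for cell in worst:
        uncertainty[cell] *= 0.2              # copter imagery improves localizability here
    print("path cost:", sum(uncertainty[c] for c in path))
```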
Yi Guo, Ji Liu (2020)
Guided sparse depth upsampling aims to upsample an irregularly sampled sparse depth map when an aligned high-resolution color image is given as guidance. Many neural networks have been designed for this task. However, they often ignore the structural difference between the depth and the color image, resulting in obvious artifacts such as texture copy and depth blur in the upsampled depth. Inspired by the normalized convolution operation, we propose a guided convolutional layer to recover dense depth from a sparse and irregular depth image with a depth edge image as guidance. Our novel guided network prevents depth values from crossing depth edges, which facilitates upsampling. We further design a convolutional network based on the proposed layer to combine the advantages of different algorithms and achieve better performance. We conduct comprehensive experiments to verify our method on real-world indoor and synthetic outdoor datasets. Our method produces strong results: it outperforms state-of-the-art methods on the Virtual KITTI dataset and the Middlebury dataset, and it presents strong generalization capability under different 3D point densities and various lighting and weather conditions.
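
The sketch below shows a normalized-convolution-style operation in which an edge-derived confidence keeps sparse depth values from being averaged across depth edges. It is an illustrative reading of the idea, not the paper's guided convolutional layer.

```python
# Illustrative normalized-convolution-style densification of sparse depth, with an
# edge map lowering confidence near depth edges. Shapes and inputs are assumptions.
import torch
import torch.nn.functional as F

def guided_normalized_conv(sparse_depth, valid_mask, edge_map, kernel_size=5):
    """sparse_depth, valid_mask, edge_map: tensors of shape (N, 1, H, W); edge_map in [0, 1]."""
    weight = torch.ones(1, 1, kernel_size, kernel_size)
    pad = kernel_size // 2
    confidence = valid_mask * (1.0 - edge_map)          # low confidence near depth edges
    num = F.conv2d(sparse_depth * confidence, weight, padding=pad)
    den = F.conv2d(confidence, weight, padding=pad)
    dense = num / den.clamp(min=1e-6)                   # confidence-weighted local average
    return dense, den

depth = torch.zeros(1, 1, 64, 64)
mask = (torch.rand_like(depth) < 0.05).float()          # ~5% sparse depth samples
depth = mask * torch.rand_like(depth) * 10.0
edges = torch.zeros_like(depth)                         # assumed depth edge image
dense, conf = guided_normalized_conv(depth, mask, edges)
print(dense.shape, conf.shape)
```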
