
Real-Time Navigation System for a Low-Cost Mobile Robot with an RGB-D Camera

Published by: Taekyung Kim
Publication date: 2021
Research field: Informatics engineering
Language: English





Mobile robots are developing rapidly and finding numerous applications in industry. However, several problems related to their practical use remain, such as the need for expensive hardware and high power consumption. In this study, we propose a navigation system that operates on a low-end computer with an RGB-D camera, together with a mobile robot platform, to form an integrated autonomous driving system. The proposed system requires neither LiDAR nor a GPU. Our raw-depth-image ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost commercial single-board computer while running integrated SLAM, global path planning, and motion planning. Even with sensor data processing and the other autonomous driving functions running simultaneously, our navigation method issues control commands at a refresh rate of 18 Hz, whereas comparable systems run at lower rates. Our method outperforms current state-of-the-art navigation approaches in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in a residential lobby.
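
The core step described above is turning each raw depth frame into a 2D traversability grid. The Python sketch below shows one simple way this can be done, assuming known camera intrinsics, a camera-to-base transform with the base origin on the floor, and hand-picked thresholds; it is an illustrative approximation, not the authors' exact segmentation algorithm.

```python
# Sketch: raw depth image -> traversability grid (illustrative, assumed parameters).
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)            # shape (H, W, 3)

def traversability_map(depth, fx, fy, cx, cy, cam_to_base,
                       ground_tol=0.05, cell=0.05, extent=5.0):
    """Mark grid cells traversable if their points lie near the ground plane."""
    pts = depth_to_points(depth, fx, fy, cx, cy).reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1) & (pts[:, 2] > 0.1)]
    # Transform into the robot base frame (z up, origin on the floor).
    pts = pts @ cam_to_base[:3, :3].T + cam_to_base[:3, 3]
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)             # 0 = unknown
    ix = ((pts[:, 0] + extent) / cell).astype(int)
    iy = ((pts[:, 1] + extent) / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    ground = np.abs(pts[:, 2]) < ground_tol             # near-floor points
    grid[iy[ok & ground], ix[ok & ground]] = 1          # 1 = traversable
    grid[iy[ok & ~ground], ix[ok & ~ground]] = 2        # 2 = obstacle (overrides ground)
    return grid
```

In a full pipeline this grid would be fused over time and passed to the global and local planners; those stages are omitted here.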


Read also

The use of delivery services is an increasing trend worldwide, further accelerated by the COVID-19 pandemic. In this context, drone delivery systems are of great interest, as they may allow for faster and cheaper deliveries. This paper presents a navigation system that makes parcel delivery with autonomous drones feasible. The system generates a path between a start point and a final point and controls the drone to follow this path based on its localization, obtained through GPS, a 9-DoF IMU, and a barometer. In the landing phase, pose estimates from a marker (ArUco) detection technique using a camera, from ultra-wideband (UWB) devices, and from the drone's software estimation are merged using an Extended Kalman Filter to improve landing precision. A vector-field-based method controls the drone to follow the desired path smoothly, reducing vibrations or harsh movements that could harm the transported parcel. Real experiments validate the delivery strategy and allow the performance of the adopted techniques to be evaluated. Preliminary results indicate the viability of our proposal for autonomous drone delivery.
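
The landing-phase fusion described above can be illustrated with a simplified Kalman-style position update (the paper uses an Extended Kalman Filter); the constant-velocity model, the noise values, and the LandingFuser interface are assumptions for illustration only, not the paper's actual filter.

```python
# Sketch: fusing ArUco, UWB, and onboard position estimates during landing.
import numpy as np

class LandingFuser:
    def __init__(self, dt=0.02):
        self.x = np.zeros(6)                    # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)         # constant-velocity motion model (assumed)
        self.Q = 1e-3 * np.eye(6)               # process noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_position(self, z, sigma):
        """Fuse one 3D position measurement with standard deviation `sigma`."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = sigma**2 * np.eye(3)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

# Usage (hypothetical noise levels):
# fuser = LandingFuser(); fuser.predict()
# fuser.update_position(aruco_xyz, sigma=0.03)  # marker pose: low noise near the pad
# fuser.update_position(uwb_xyz,   sigma=0.15)  # UWB ranging: coarser
```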
Nasopharyngeal (NP) swab sampling is an effective approach for the diagnosis of coronavirus disease 2019 (COVID-19). Medical staff carrying out the task of collecting NP specimens are in close contact with the suspected patient, thereby posing a high risk of cross-infection. We propose a low-cost miniature robot that can be easily assembled and remotely controlled. The system includes an active end-effector, a passive positioning arm, and a detachable swab gripper with integrated force-sensing capability. The cost of the materials for building this robot is 55 USD and the total weight of the functional part is 0.23 kg. The design of the force-sensing swab gripper was justified using finite element (FE) modeling, and the performance of the robot was validated with a simulation phantom and three pig noses. FE analysis indicated a displacement of 0.5 mm magnitude for the gripper's sensing beam, which meets the ideal detecting range of the optoelectronic sensor. Studies on both the phantom and the pig noses demonstrated the successful operation of the robot during the collection task. The average forces were found to be 0.35 N and 0.85 N, respectively. It is concluded that the proposed robot is promising and could be further developed for in vivo use.
This paper proposes a method to navigate a mobile robot by estimating its state over a number of distributed sensor networks (DSNs) such that it can successively accomplish a sequence of tasks, i.e., its state enters each targeted set and stays inside it for no less than the desired time, under a resource-aware, time-efficient, and computation- and communication-constrained setting. We propose a new robot state estimation and navigation architecture, which integrates an event-triggered task-switching feedback controller for the robot and a two-time-scale distributed state estimator for each sensor. The architecture has three major advantages over existing approaches. First, in each task only one DSN is active for sensing and estimating the robot state, and for different tasks the robot can switch the active DSN by taking resource saving and system performance into account. Second, the robot only needs to communicate with one active sensor at each time to obtain its state information from the active DSN. Third, no online optimization is required. With the controller, the robot is able to accomplish a task by following a reference trajectory and switch to the next task when an event-triggered condition is fulfilled. With the estimator, each active sensor is able to estimate the robot state. Under proper conditions, we prove that the state estimation error and the trajectory tracking deviation are upper bounded by two time-varying sequences, respectively, which play an essential role in the event-triggered condition. Furthermore, we find a sufficient condition for accomplishing a task and provide an upper bound on the running time of the task. Numerical simulations of an indoor robot's localization and navigation are provided to validate the proposed architecture.
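
The event-triggered task-switching idea can be sketched as follows: switch to the next task, and activate its sensor network, once the estimated state has stayed inside the current target set for the required dwell time. The ball-shaped target sets and the estimate_state / control_step callbacks are hypothetical placeholders, not the paper's actual interfaces or triggering condition.

```python
# Sketch: event-triggered switching between tasks and their active DSNs.
import numpy as np

def run_tasks(tasks, estimate_state, control_step, dt=0.1):
    """tasks: list of dicts with keys 'center', 'radius', 'dwell' (seconds).
    The loop is assumed to run once per control/estimation cycle of length dt."""
    for k, task in enumerate(tasks):
        inside_since = None                            # time spent inside the target set
        while True:
            x_hat = estimate_state(active_dsn=k)       # only DSN k senses and estimates
            control_step(x_hat, task['center'])        # track a reference toward the target
            if np.linalg.norm(x_hat[:2] - task['center']) <= task['radius']:
                inside_since = (inside_since or 0.0) + dt
                if inside_since >= task['dwell']:      # triggering condition satisfied
                    break                              # switch to the next task / DSN
            else:
                inside_since = None                    # left the set: reset the dwell timer
```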
In-pipe robots are promising solutions for condition assessment, leak detection, water quality monitoring, and a variety of other tasks in pipeline networks. Smart navigation is an extremely challenging task for these robots as a result of the highly uncertain and disturbance-prone environment they operate in. Wireless communication to control these robots during operation is not feasible if the pipe material is metal, since radio signals are blocked in the pipe environment, and hence this challenge remains unsolved. In this paper, we introduce a smart navigation method for our previously designed in-pipe robot [1] based on particle filtering and a two-phase motion controller. The robot is given the map of the operation path with a novel approach, and the particle filter determines the straight and non-straight configurations of the pipeline. In straight paths, the robot follows a linear quadratic regulator (LQR) and proportional-integral-derivative (PID) based controller that stabilizes the robot and tracks a desired velocity. In non-straight paths, the robot follows the trajectory that a motion trajectory generator block plans for it. The proposed method is a promising solution for smart navigation without the need for wireless communication and is capable of inspecting long distances in water distribution systems.
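
The two-phase controller can be sketched roughly as below: a PID velocity tracker for straight sections and a handoff to a planned trajectory in bends, with the straight/non-straight decision assumed to come from the particle filter. The gains, the segment_is_straight flag, and the trajectory_gen callback are illustrative assumptions, and the LQR stabilization layer is omitted.

```python
# Sketch: two-phase motion control for an in-pipe robot (assumed interfaces).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def motion_step(segment_is_straight, v_measured, v_desired, pid, trajectory_gen, t):
    """Return the actuation command for the current control cycle."""
    if segment_is_straight:
        return pid.step(v_desired - v_measured)   # phase 1: hold the desired speed
    return trajectory_gen(t)                      # phase 2: follow the planned bend trajectory
```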
This paper considers the problem of robot motion planning in a workspace with obstacles for systems with uncertain second-order dynamics. In particular, we combine closed-form potential-based feedback controllers with adaptive control techniques to guarantee collision-free robot navigation to a predefined goal while compensating for the dynamic model uncertainties. We base our findings on sphere-world configuration spaces, but extend our results to arbitrary star-shaped environments by using previous results on configuration space transformations. Moreover, we propose an algorithm for extending the control scheme to decentralized multi-robot systems. Finally, extensive simulation results verify the theoretical findings.
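
A textbook potential-field controller of the kind this construction builds on is sketched below for a 2D sphere world; it uses assumed gains and omits the adaptive compensation of dynamic uncertainties and the star-world transformation that the paper contributes.

```python
# Sketch: attractive/repulsive potential field in a sphere world (assumed gains).
import numpy as np

def potential_gradient(q, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.0):
    """obstacles: list of (center, radius); q, goal: 2D positions (numpy arrays)."""
    grad = k_att * (q - goal)                          # gradient of the attractive term
    for center, radius in obstacles:
        d = np.linalg.norm(q - center) - radius        # distance to the obstacle surface
        if 0 < d < influence:                          # only nearby obstacles contribute
            away = (q - center) / np.linalg.norm(q - center)
            grad += k_rep * (1.0 / influence - 1.0 / d) / d**2 * away
    return grad

def velocity_command(q, goal, obstacles, v_max=0.5):
    """Command a velocity along the negative gradient, saturated at v_max."""
    v = -potential_gradient(q, goal, obstacles)
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * v_max / speed
```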