
Navigating A Mobile Robot Using Switching Distributed Sensor Networks

Added by Xingkang He
Publication date: 2021
Language: English





This paper proposes a method to navigate a mobile robot by estimating its state over a number of distributed sensor networks (DSNs) such that it can successively accomplish a sequence of tasks, i.e., its state enters each targeted set and stays inside it for no less than the desired time, under a resource-aware, time-efficient, and computation- and communication-constrained setting. We propose a new robot state estimation and navigation architecture, which integrates an event-triggered task-switching feedback controller for the robot and a two-time-scale distributed state estimator for each sensor. The architecture has three major advantages over existing approaches: first, in each task only one DSN is active for sensing and estimating the robot state, and for different tasks the robot can switch the active DSN by taking resource saving and system performance into account; second, the robot only needs to communicate with one active sensor at each time to obtain its state information from the active DSN; third, no online optimization is required. With the controller, the robot is able to accomplish a task by following a reference trajectory and switch to the next task when an event-triggered condition is fulfilled. With the estimator, each active sensor is able to estimate the robot state. Under proper conditions, we prove that the state estimation error and the trajectory tracking deviation are upper bounded by two time-varying sequences, respectively, which play an essential role in the event-triggered condition. Furthermore, we find a sufficient condition for accomplishing a task and provide an upper bound on the running time of the task. Numerical simulations of indoor robot localization and navigation are provided to validate the proposed architecture.
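
To make the switching logic concrete, here is a minimal runnable sketch (not the paper's implementation) of how such an event-triggered task-switching loop could look. It assumes a single-integrator robot, disk-shaped target sets, and a hypothetical active DSN per task that returns a state estimate whose error norm is at most a known bound; the robot switches to the next task only after the estimate, inflated by that bound, has stayed inside the targeted set for the desired dwell time.

```python
import numpy as np

# Hedged sketch: single-integrator robot, disk targets, one active DSN per task.
rng = np.random.default_rng(0)

tasks = [  # (target center, target radius, required dwell time in seconds)
    (np.array([2.0, 0.0]), 0.3, 1.0),
    (np.array([2.0, 2.0]), 0.3, 1.0),
]
dt, err_bound, gain = 0.05, 0.05, 1.5
x = np.zeros(2)                                  # true robot state

for k, (center, radius, dwell) in enumerate(tasks):
    time_inside, elapsed = 0.0, 0.0
    while time_inside < dwell:                   # event-triggered switching test
        # Active-DSN estimate with error norm bounded by err_bound (assumption).
        x_hat = x + rng.uniform(-err_bound, err_bound, 2) / np.sqrt(2)
        u = -gain * (x_hat - center)             # feedback toward the target
        x = x + dt * u                           # single-integrator dynamics
        # Count dwell time only while the estimate plus its error bound is
        # certifiably inside the target set (conservative check).
        inside = np.linalg.norm(x_hat - center) + err_bound <= radius
        time_inside = time_inside + dt if inside else 0.0
        elapsed += dt
    print(f"task {k} accomplished after {elapsed:.2f} s; switching active DSN")
```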



Related research

This paper investigates the online motion coordination problem for a group of mobile robots moving in a shared workspace. Based on the realistic assumptions that each robot is subject to both velocity and input constraints and can have only a local view and local information, a fully distributed multi-robot motion coordination strategy is proposed. Building on top of a cell decomposition, a conflict detection algorithm is presented first. Then, a deadlock-free rule is proposed to dynamically assign a planning order to each pair of neighboring robots. Finally, a two-step motion planning process that combines fixed-path planning and trajectory planning is designed. The effectiveness of the resulting solution is verified by a simulation example.
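
As a toy illustration of a deadlock-free pairwise ordering (not necessarily the rule proposed above), one can break every detected conflict with a fixed total order over robot IDs: a strict total order admits no cycles, so no circular wait, and hence no deadlock, can arise.

```python
# Hedged sketch, not the paper's rule: each robot yields to any conflicting
# neighbor with a smaller ID, which induces a cycle-free (deadlock-free) order.

def plans_first(robot_a: int, robot_b: int) -> int:
    """Return the ID of the robot that plans first in a detected conflict."""
    return min(robot_a, robot_b)

def resolve_conflicts(conflicts):
    """conflicts: iterable of (id_a, id_b) pairs from the detection step."""
    return {pair: plans_first(*pair) for pair in conflicts}

print(resolve_conflicts([(3, 1), (2, 5)]))   # {(3, 1): 1, (2, 5): 2}
```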
Mobile robots are currently developing rapidly and finding numerous applications in industry. However, a number of problems remain that limit their practical use, such as the need for expensive hardware and their high power consumption. In this study, we propose a navigation system that is operable on a low-end computer with an RGB-D camera and a mobile robot platform for the operation of an integrated autonomous driving system. The proposed system requires neither LiDARs nor a GPU. Our raw depth image ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost commercial single-board computer with integrated SLAM, global path planning, and motion planning. Running sensor data processing and other autonomous driving functions simultaneously, our navigation method runs at a refresh rate of 18 Hz for control commands, whereas other systems have slower refresh rates. Our method outperforms current state-of-the-art navigation approaches, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in a residential lobby.
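
The sketch below shows one simple way ground segmentation of a raw depth image can be done under a flat-floor, pinhole-camera assumption: each pixel is back-projected and kept as ground if its height below the camera matches the known mounting height. The intrinsics `fy`, `cy` and the camera height `cam_h` are hypothetical, and this is not the described system's actual segmentation method.

```python
import numpy as np

def ground_mask(depth, fy, cy, cam_h, tol=0.05):
    """depth: HxW metric depth image; returns a boolean ground/traversability mask."""
    h = depth.shape[0]
    v = np.arange(h).reshape(-1, 1)     # pixel row index, broadcast over columns
    y = (v - cy) * depth / fy           # downward coordinate in the camera frame
    # Under the flat-floor assumption, ground points lie roughly cam_h below
    # the optical center (camera assumed level, looking straight ahead).
    return np.abs(y - cam_h) < tol

depth = np.full((240, 320), 2.0)        # toy scene: everything 2 m away
mask = ground_mask(depth, fy=300.0, cy=120.0, cam_h=0.3)
print(mask.mean())                      # fraction of pixels labelled as ground
```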
This paper investigates the online motion coordination problem for a group of mobile robots moving in a shared workspace, each of which is assigned a linear temporal logic specification. Based on the realistic assumptions that each robot is subject to both state and input constraints and can have only a local view and local information, a fully distributed multi-robot motion coordination strategy is proposed. For each robot, the motion coordination strategy consists of three layers. An offline layer pre-computes the braking area for each region in the workspace, the controlled transition system, and a so-called potential function. An initialization layer outputs an initial trajectory that is safe and satisfies the specification. An online coordination layer resolves conflicts when they occur. The online coordination layer is further decomposed into three steps. First, a conflict detection algorithm is implemented, which detects conflicts with neighboring robots. Whenever conflicts are detected, a rule is applied to dynamically assign a planning order to each pair of neighboring robots. Finally, a sampling-based algorithm is designed to generate local collision-free trajectories for the robot that at the same time guarantee the feasibility of the specification. Safety is proven to be guaranteed for all robots at all times. The effectiveness and computational tractability of the resulting solution are verified numerically in two case studies.
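
For intuition, the snippet below sketches a braking-area style conflict check under simplifying assumptions: each robot's braking area is approximated online by a disk of radius footprint plus stopping distance v^2 / (2 a_max), and two robots are flagged as in conflict when these disks overlap. The paper instead pre-computes braking areas offline per workspace region, so this is only an approximation of the idea.

```python
import numpy as np

def braking_radius(speed, a_max, footprint):
    """Disk approximation: footprint plus kinematic stopping distance."""
    return footprint + speed ** 2 / (2.0 * a_max)

def in_conflict(p1, v1, p2, v2, a_max=1.0, footprint=0.3):
    """Flag a conflict when the two disk-shaped braking areas overlap."""
    r = (braking_radius(np.linalg.norm(v1), a_max, footprint)
         + braking_radius(np.linalg.norm(v2), a_max, footprint))
    return np.linalg.norm(np.asarray(p1) - np.asarray(p2)) <= r

print(in_conflict([0, 0], [1.0, 0], [1.5, 0], [-1.0, 0]))   # True: head-on approach
```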
Goal: This paper presents an algorithm for accurately estimating pelvis, thigh, and shank kinematics during walking using only three wearable inertial sensors. Methods: The algorithm makes novel use of a constrained Kalman filter (CKF). The algorithm iterates through the prediction (kinematic equation), measurement (pelvis position pseudo-measurements, zero velocity update, flat-floor assumption, and covariance limiter), and constraint update (formulation of hinged knee joints and ball-and-socket hip joints). Results: Evaluation of the algorithm using an optical motion capture-based sensor-to-segment calibration on nine participants ($7$ men and $2$ women, weight $63.0 \pm 6.8$ kg, height $1.70 \pm 0.06$ m, age $24.6 \pm 3.9$ years old), with no known gait or lower body biomechanical abnormalities, who walked within a $4 \times 4$ m$^2$ capture area, shows that it can track motion relative to the mid-pelvis origin with mean position and orientation (no bias) root-mean-square errors (RMSE) of $5.21 \pm 1.3$ cm and $16.1 \pm 3.2^\circ$, respectively. The sagittal knee and hip joint angle RMSEs (no bias) were $10.0 \pm 2.9^\circ$ and $9.9 \pm 3.2^\circ$, respectively, while the corresponding correlation coefficient (CC) values were $0.87 \pm 0.08$ and $0.74 \pm 0.12$. Conclusion: The CKF-based algorithm was able to track the 3D pose of the pelvis, thigh, and shanks using only three inertial sensors worn on the pelvis and shanks. Significance: Due to the Kalman-filter-based algorithm's low computation cost and the relative convenience of using only three wearable sensors, gait parameters can be computed in real time and remotely for long-term gait monitoring. Furthermore, the system can be used to inform real-time gait assistive devices.
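
The snippet below sketches the generic constraint-update step used by equality-constrained Kalman filters: the minimum-variance projection of the estimate onto a linear(ized) constraint Dx = d, which is the basic idea behind enforcing the joint constraints mentioned above. It illustrates the standard projection rather than the paper's exact formulation.

```python
import numpy as np

def constrain(x, P, D, d):
    """Project estimate x with covariance P onto the constraint D @ x = d."""
    S = D @ P @ D.T                    # covariance of the constraint residual
    K = P @ D.T @ np.linalg.inv(S)     # projection gain
    x_c = x - K @ (D @ x - d)          # constrained state estimate
    P_c = P - K @ D @ P                # reduced covariance
    return x_c, P_c

# Toy example: force the first two state components to be equal (D x = 0).
x = np.array([1.0, 0.6, 2.0])
P = np.diag([0.1, 0.1, 0.2])
D = np.array([[1.0, -1.0, 0.0]])
d = np.array([0.0])
x_c, P_c = constrain(x, P, D, d)
print(x_c)    # first two components pulled together: [0.8, 0.8, 2.0]
```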
Sam Safavi, Usman Khan (2017)
In this paper, we develop a distributed algorithm to localize a network of robots moving arbitrarily in a bounded region. In the case of such mobile networks, the main challenge is that the robots may not be able to find nearby robots to implement a distributed algorithm. We address this issue by providing an opportunistic algorithm that only implements a location update when there are nearby robots and does not update otherwise. We assume that each robot measures a noisy version of its motion and the distances to the nearby robots. To localize a network of mobile robots in $\mathbb{R}^m$, we provide a simple linear update, which is based on barycentric coordinates and is linear-convex. We abstract the corresponding localization algorithm as a Linear Time-Varying (LTV) system and show that it asymptotically converges to the true locations of the robots. We first focus on the noiseless case, where the distance and motion vectors are known (measured) perfectly, and provide sufficient conditions on the convergence of the algorithm. We then evaluate the performance of the algorithm in the presence of noise and provide modifications to counter the undesirable effects of noise. We further show that our algorithm precisely tracks a mobile network as long as there is at least one known beacon (a node whose location is perfectly known).
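
To illustrate the barycentric, linear-convex structure of such an update, the sketch below computes barycentric coordinates in $\mathbb{R}^2$ from three neighbors' positions via signed triangle areas and forms the convex combination. The actual algorithm derives these coefficients from noisy inter-robot distance measurements and applies the update only opportunistically, when a suitable set of nearby robots exists; this simplified version is for intuition only.

```python
import numpy as np

def signed_area(a, b, c):
    """Signed area of triangle (a, b, c) in the plane."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def barycentric_update(neighbors, x):
    """Linear-convex estimate sum_i a_i * p_i of a robot inside the neighbors' hull."""
    p1, p2, p3 = neighbors
    total = signed_area(p1, p2, p3)
    a = np.array([signed_area(x, p2, p3),        # weight of p1
                  signed_area(p1, x, p3),        # weight of p2
                  signed_area(p1, p2, x)]) / total
    return a @ np.vstack(neighbors)              # convex combination of neighbor positions

neighbors = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])]
x_true = np.array([1.0, 1.0])                    # robot inside the neighbors' convex hull
print(barycentric_update(neighbors, x_true))     # recovers [1.0, 1.0] exactly
```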
