
A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU

Added by Hongzhuo Liang
Publication date: 2020
Language: English





In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU-based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A keypoint-based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
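
The paper's loss formulation is not reproduced here, but a keypoint-weighted reconstruction loss of this flavor can be sketched in a few lines of PyTorch. Everything below (tensor shapes, patch_radius, the weighting scheme) is an illustrative assumption, not Transteleop's actual implementation:

```python
import torch

def keypoint_reconstruction_loss(pred_depth, target_depth, keypoints_uv,
                                 patch_radius=8, kp_weight=10.0):
    """Hypothetical sketch: L1 reconstruction loss with extra weight on
    patches around projected hand keypoints (e.g., fingertips), so the
    gradient emphasizes local features near anatomically salient regions.

    pred_depth, target_depth: (B, 1, H, W) depth images
    keypoints_uv: (B, K, 2) pixel coordinates of projected keypoints
    """
    weight = torch.ones_like(target_depth)  # base per-pixel weight of 1
    B, _, H, W = target_depth.shape
    for b in range(B):
        for u, v in keypoints_uv[b].round().long().tolist():
            u0, u1 = max(0, u - patch_radius), min(W, u + patch_radius)
            v0, v1 = max(0, v - patch_radius), min(H, v + patch_radius)
            weight[b, :, v0:v1, u0:u1] = kp_weight
    # Weighted L1: reconstruction quality around keypoints dominates.
    return (weight * (pred_depth - target_depth).abs()).mean()
```

Up-weighting patches around fingertips and joints is one simple way to make an image-to-image translation network spend its capacity on the regions that matter most for grasping.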




Related research

This paper presents a teleoperation system that combines robot perception with intent prediction from hand gestures. The perception module identifies the objects present in the robot workspace, and the intent prediction module infers which object the user likely wants to grasp. This architecture allows the approach to rely on traded control instead of direct control: hand gestures specify the goal objects for a sequential manipulation task, and the robot then autonomously generates a grasping or retrieving motion using trajectory optimization. The perception module relies on a model-based tracker to precisely track the 6D pose of the objects and uses a state-of-the-art learning-based object detection and segmentation method to initialize the tracker by automatically detecting objects in the scene. Goal objects are identified from user hand gestures using a trained multi-layer perceptron classifier. After presenting all the components of the system and their empirical evaluation, we report experimental results comparing our pipeline to a direct traded control approach (i.e., one that does not use prediction), which show that intent prediction reduces the overall task execution time.
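
As a rough illustration of the intent-prediction component, the sketch below trains a multi-layer perceptron with scikit-learn to map a hand-pose feature vector to a goal-object class. The feature layout (21 keypoints x 3D), network size, and class count are assumptions for the sketch, not values from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in training data: flattened gesture features and object IDs.
# A real system would use recorded hand poses labeled with goal objects.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 63))    # 21 keypoints x 3D (assumed layout)
y_train = rng.integers(0, 4, size=500)  # 4 candidate objects (assumed)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)

# At runtime, a new gesture observation is mapped to the most likely
# goal object, which then seeds the trajectory optimizer with a target.
probs = clf.predict_proba(rng.normal(size=(1, 63)))
goal_object = int(np.argmax(probs))
```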
Building on a previously developed master-slave prosthetic hand-arm robot system, controlled mainly by signals from bending sensors fixed on a data glove, the first idea was to develop a multi-dimensional filter and add it to the original control system so that the control signals become cleaner and more stable in real time. Going further, a second idea was proposed: predicting new control information by combining a new algorithm with prediction control theory. To realize the first idea properly, several aspects were researched: methods for processing data in real time, different ways to generate Gaussian-distributed random data, how to integrate the new algorithm into the existing complex program project, how to simplify the algorithm and reduce its running time to maintain high efficiency, real-time processing of the sensory system's multiple channels, and the real-time performance of the control system. Finally, experiments on the same robot system give results for the first idea and show the improved performance of the filter compared with the original control method.
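
The abstract does not specify the filter design, but a minimal real-time, multi-channel smoother in this spirit can be written as a per-channel scalar Kalman filter with a random-walk signal model. All parameter values below are illustrative assumptions:

```python
import numpy as np

class MultiChannelFilter:
    """Hypothetical sketch of a real-time smoother for N data-glove
    bending-sensor channels; a stand-in, not the paper's filter."""

    def __init__(self, n_channels, process_var=1e-3, meas_var=1e-1):
        self.x = np.zeros(n_channels)   # state estimate per channel
        self.p = np.ones(n_channels)    # estimate variance per channel
        self.q = process_var            # process (random-walk) noise
        self.r = meas_var               # measurement noise

    def update(self, z):
        """Fuse one new sample vector z of shape (n_channels,)."""
        self.p = self.p + self.q              # predict: variance grows
        k = self.p / (self.p + self.r)        # Kalman gain per channel
        self.x = self.x + k * (z - self.x)    # correct toward measurement
        self.p = (1.0 - k) * self.p           # shrink variance
        return self.x

# Example: smooth a noisy 5-channel glove stream sample by sample.
filt = MultiChannelFilter(n_channels=5)
for _ in range(100):
    smooth = filt.update(np.random.normal(size=5))
```

Because the update is a few vector operations per sample, a filter of this shape adds negligible latency, which matches the stated goal of keeping the control loop running in real time.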
Telerobotic systems must adapt to new environmental conditions and deal with the high uncertainty caused by long time delays. As one of the best alternatives to human-level intelligence, Reinforcement Learning (RL) may offer a way to cope with these issues. This paper proposes to integrate RL with the Model-Mediated Teleoperation (MMT) concept. The teleoperator interacts with a simulated virtual environment that provides instant feedback: whereas feedback from the real environment is delayed, feedback from the model is instantaneous, leading to high transparency. MMT is realized in combination with an intelligent system with two layers. The first layer uses Dynamic Movement Primitives (DMPs), which account for certain changes in the avatar environment; the second layer addresses problems caused by uncertainty in the model using RL methods. Augmented reality is also provided to fuse the avatar device and virtual environment models for the teleoperator. Implemented on DLR's Exodex Adam hand-arm haptic exoskeleton, the results show that RL methods are able to find different solutions when the object position is changed after the demonstration. The results also show that DMPs are effective at adapting to new conditions where no uncertainty is involved.
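
For readers unfamiliar with DMPs, the sketch below implements a minimal 1-D discrete DMP that learns a forcing term from one demonstration and replays it toward a (possibly new) goal. It omits amplitude scaling and the coupling terms a real system would need, and its parameter values are illustrative, not DLR's:

```python
import numpy as np

class DMP1D:
    """Minimal 1-D discrete Dynamic Movement Primitive (sketch)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.alpha, self.beta, self.alpha_x = alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # centers
        self.h = n_basis / self.c                                # widths
        self.w = np.zeros(n_basis)                               # weights

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Learn forcing weights from one demonstrated trajectory."""
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        T = len(y_demo) * dt
        x = np.exp(-self.alpha_x * np.arange(len(y_demo)) * dt / T)
        # Target forcing term, rearranged from the transformation system.
        f_target = ydd - self.alpha * (self.beta * (g - y_demo) - yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        # Locally weighted regression: one weight per basis function.
        for i in range(len(self.w)):
            s = x * psi[:, i]
            self.w[i] = (s @ f_target) / (s @ x + 1e-10)
        self.y0, self.g, self.T = y0, g, T

    def rollout(self, g=None, dt=0.01):
        """Generate a trajectory, optionally toward a new goal g."""
        g = self.g if g is None else g
        y, yd, x, out = self.y0, 0.0, 1.0, []
        for _ in range(int(self.T / dt)):
            ydd = self.alpha * (self.beta * (g - y) - yd) + self._forcing(x)
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt / self.T  # canonical system decay
            out.append(y)
        return np.array(out)
```

Changing `g` at rollout time is what lets a DMP adapt a demonstrated motion to a displaced object, which is the case the abstract reports DMPs handling well when no uncertainty is involved.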
Teleoperation platforms often require the user to be situated at a fixed location to both visualize and control the movement of the robot, and thus provide the operator with little mobility. One example is existing robotic surgery solutions that require surgeons to be away from the patient, attached to consoles where their heads must be fixed and their arms can only move within a limited space. This creates a barrier between physicians and patients that does not exist in conventional surgery. To address this issue, we propose a mobile telesurgery solution in which surgeons are no longer mechanically limited to control consoles and can teleoperate the robots from the patient's bedside, using their arms equipped with wireless sensors and viewing the endoscope video via optical see-through HMDs. We evaluate the feasibility and efficiency of our user interaction method with a standard surgical robotic manipulator via two tasks with different levels of required dexterity. The results indicate that, with sufficient training, our proposed platform can attain similar efficiency while providing added mobility for the operator.
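
One plausible way (assumed here, not taken from the paper) to turn wrist-IMU readings into manipulator commands is a clutched relative-orientation mapping with motion scaling, sketched below with SciPy's rotation utilities:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

class ClutchedOrientationTeleop:
    """Hypothetical sketch: a 'clutch' captures the IMU orientation at
    engagement, so only relative motion (scaled for fine dexterity) is
    sent to the robot. Names and scaling are assumptions."""

    def __init__(self, scale=0.5):
        self.scale = scale   # motion scaling factor
        self.q_ref = None    # IMU orientation at clutch engagement

    def engage(self, q_imu):
        self.q_ref = R.from_quat(q_imu)  # quaternion as (x, y, z, w)

    def command(self, q_imu):
        """Return a scaled relative rotation vector for the robot."""
        if self.q_ref is None:
            return np.zeros(3)
        delta = self.q_ref.inv() * R.from_quat(q_imu)
        return self.scale * delta.as_rotvec()

teleop = ClutchedOrientationTeleop(scale=0.5)
teleop.engage([0.0, 0.0, 0.0, 1.0])             # identity orientation
cmd = teleop.command([0.0, 0.0, 0.259, 0.966])  # ~30 deg yaw -> ~15 deg cmd
```

A clutch of this kind is also what lets an operator reposition their arm comfortably at the bedside without dragging the instrument along.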
This work develops and demonstrates the integration of the SCAMP-5d vision system into the CoppeliaSim robot simulator, creating a semi-simulated environment. By configuring a camera in the simulator and setting up communication with the SCAMP Python host through the remote API, sensor images from the simulator can be transferred to the SCAMP vision sensor, where on-sensor image processing such as CNN inference can be performed. SCAMP output is then fed back into CoppeliaSim. This platform integration enables rapid prototyping and validation of SCAMP algorithms for robotic systems. We demonstrate a car localisation and tracking task on the proposed semi-simulated platform, with CNN inference performed on SCAMP to command the motion of a robot. The platform is available online.
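
On the simulator side, image transfer of this kind can be sketched with CoppeliaSim's legacy Python remote API; the sensor name and the scamp_host.process call below are placeholders for the paper's actual SCAMP host interface:

```python
import numpy as np
import sim  # CoppeliaSim legacy remote API Python bindings (sim.py)

# Connect to a CoppeliaSim instance with the remote API server enabled.
client = sim.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
assert client != -1, 'could not connect to CoppeliaSim remote API server'

# Grab one frame from a vision sensor named 'Vision_sensor' (assumed name).
_, cam = sim.simxGetObjectHandle(client, 'Vision_sensor',
                                 sim.simx_opmode_blocking)
ret, res, img = sim.simxGetVisionSensorImage(client, cam, 0,
                                             sim.simx_opmode_blocking)
if ret == sim.simx_return_ok:
    # The image arrives as a flat signed-byte list; wrap to uint8 and
    # reshape to (height, width, 3). It may also need a vertical flip.
    frame = np.array(img).astype(np.uint8).reshape(res[1], res[0], 3)
    # result = scamp_host.process(frame)  # placeholder for on-sensor CNN

sim.simxFinish(client)
```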
