Building on a previously developed master-slave prosthetic hand-arm robot system, controlled mainly by signals from bending sensors fixed on a data glove, our first idea was to develop a multi-dimensional filter and add it to the original control system, making the control signals cleaner and more stable in real time. Going further, a second idea was proposed: to predict new control information by combining a new algorithm with prediction control theory. To realize the first idea properly, we investigated possible methods for processing data in real time, different ways of generating Gaussian-distributed random data, how to integrate the new algorithm into the existing, complex program project, how to simplify the algorithm and reduce its running time to maintain high efficiency, real-time processing of the sensory system's multiple channels, and the real-time performance of the control system. Finally, experiments on the same robot system report the results of the first idea and show the improved performance of the filter compared with the original control method.
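The abstract does not specify the filter design. A minimal sketch of one common choice for smoothing multi-channel glove signals in real time is a per-channel scalar Kalman filter; the class name, noise parameters, and the simulated sensor stream below are illustrative assumptions, not the paper's implementation. The example also shows Gaussian-distributed random data generated with NumPy:

```python
import numpy as np

class MultiChannelKalman:
    """Scalar Kalman filter applied independently to each sensor channel.

    q: process noise variance, r: measurement noise variance
    (illustrative values, to be tuned per sensor).
    """
    def __init__(self, n_channels, q=1e-4, r=1e-2):
        self.x = np.zeros(n_channels)   # filtered state estimate
        self.p = np.ones(n_channels)    # estimate variance
        self.q, self.r = q, r

    def update(self, z):
        """z: latest raw reading for every channel (shape: n_channels)."""
        self.p += self.q                # predict: variance grows over time
        k = self.p / (self.p + self.r)  # Kalman gain per channel
        self.x += k * (z - self.x)      # correct toward the measurement
        self.p *= (1.0 - k)             # shrink variance after the update
        return self.x

# Simulate a noisy 5-channel bending-sensor stream: a slow true signal
# plus Gaussian-distributed noise.
rng = np.random.default_rng(0)
f = MultiChannelKalman(n_channels=5)
for t in range(100):
    raw = np.sin(0.05 * t) + rng.normal(0.0, 0.1, size=5)
    smoothed = f.update(raw)
```

Keeping the filter scalar per channel keeps the per-sample cost linear in the number of channels, which matters for the running-time constraint the abstract mentions.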
In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU-based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A keypoint-based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
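The exact form of the keypoint-based reconstruction loss is not given in the abstract. A minimal sketch of one plausible form is a pixel-wise reconstruction loss with extra weight on patches around hand keypoints; the function name, patch size, and weight below are assumptions for illustration:

```python
import numpy as np

def keypoint_weighted_loss(pred, target, keypoints, patch=8, w_kp=5.0):
    """Mean-squared reconstruction loss with extra weight near keypoints.

    pred, target: (H, W) depth images; keypoints: list of (row, col)
    pixel coordinates of hand keypoints. patch and w_kp are
    illustrative hyperparameters.
    """
    weights = np.ones_like(pred)
    h, w = pred.shape
    for r, c in keypoints:
        r0, r1 = max(r - patch, 0), min(r + patch, h)
        c0, c1 = max(c - patch, 0), min(c + patch, w)
        weights[r0:r1, c0:c1] = w_kp   # emphasize local features around joints
    return np.mean(weights * (pred - target) ** 2)
```

Upweighting keypoint neighborhoods is one way such a loss could "enrich the local features" of the reconstructed images, as the abstract describes.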
Bypass sockets allow researchers to perform tests of prosthetic systems from the prosthetic user's perspective. We designed a modular upper-limb bypass socket with 3D-printed components that can be easily modified for use with a variety of terminal devices. Our bypass socket preserves access to forearm musculature and the hand, which are necessary for surface electromyography and for providing substituted sensory feedback. It allows a sufficient range of motion to complete tasks in the frontal working area, as measured on non-amputee participants. We examined the performance of non-amputee participants using the bypass socket on the original and modified Box and Block Tests. Participants moved 11.3 +/- 2.7 and 11.7 +/- 2.4 blocks (mean +/- SD) in the original and modified tests, respectively, within the range of scores reported for amputee participants. Range of motion for users wearing the bypass socket meets or exceeds most reported range-of-motion requirements for activities of daily living. The bypass socket was originally designed with a freely rotating wrist; we found that adding elastic resistance to wrist rotation while wearing the bypass socket had no significant effect on motor decode performance. We have open-sourced the design files and an assembly manual for the bypass socket. We anticipate that the bypass socket will be a useful tool to evaluate and develop sensorized myoelectric prosthesis technology.
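The abstract does not state which statistical test supported the "no significant effect" finding. A minimal sketch of how such a within-participant comparison is typically run is a paired t-test on per-participant decode scores; the scores and variable names below are entirely hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant motor-decode scores (e.g., task success
# rate) with a freely rotating wrist vs. elastic wrist resistance.
free_wrist = np.array([0.82, 0.75, 0.88, 0.79, 0.91])
elastic    = np.array([0.80, 0.77, 0.86, 0.81, 0.90])

t, p = stats.ttest_rel(free_wrist, elastic)  # paired comparison
print(f"t = {t:.2f}, p = {p:.3f}")           # p > 0.05 -> no significant effect
```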
This paper describes a novel, ergonomics-driven approach to human-robot interaction. With a clear focus on optimizing ergonomics, the proposed approach continuously observes a human user's posture and, by invoking appropriate cooperative robot movements, brings the user's posture back to an ergonomic optimum whenever required. Effectively, the new protocol optimizes the human-robot relative position and orientation as a function of human ergonomics. An RGB-D camera is used to calculate and monitor human joint angles in real time and to determine the current ergonomic state. Six main causes of low ergonomic states are identified, leading to six universal robot responses that allow the human to return to an optimal ergonomic state. The algorithmic framework identifies these six causes and controls the cooperating robot to adapt the environment (e.g., change the pose of the workpiece) in the way that is ergonomically most comfortable for the interacting user. Hence, human-robot interaction is continuously re-evaluated and ergonomic states are optimized. The approach is validated through an experimental study based on established ergonomic methods and their adaptation for real-time application. The study confirms improved ergonomics using the new approach.
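The six causes and responses are not enumerated in the abstract. A minimal sketch of the overall loop is shown below: monitored joint angles are thresholded to identify a low-ergonomics cause, which is mapped to a corrective robot action. All thresholds, cause labels, responses, and the robot API are hypothetical placeholders:

```python
def classify_ergonomic_cause(angles):
    """Map monitored joint angles (degrees) to a low-ergonomics cause.

    Thresholds and cause labels are illustrative, not the paper's.
    """
    if angles["neck_flexion"] > 20:
        return "neck_bent"
    if angles["trunk_flexion"] > 20:
        return "trunk_bent"
    if angles["shoulder_elevation"] > 45:
        return "arm_raised"
    return None  # posture already near the ergonomic optimum

# Hypothetical mapping from cause to corrective robot response,
# e.g., re-posing the workpiece held by the robot.
RESPONSES = {
    "neck_bent": "raise_workpiece",
    "trunk_bent": "bring_workpiece_closer",
    "arm_raised": "lower_workpiece",
}

def control_step(angles, robot):
    """One iteration of the continuous ergonomics re-evaluation loop."""
    cause = classify_ergonomic_cause(angles)
    if cause is not None:
        robot.execute(RESPONSES[cause])  # hypothetical robot API
```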
Mobile robots are currently developing rapidly and finding numerous applications in industry. However, a number of problems remain that limit their practical use, such as the need for expensive hardware and high power consumption. In this study, we propose a navigation system that is operable on a low-end computer with an RGB-D camera, together with a mobile robot platform, to run an integrated autonomous driving system. The proposed system requires neither LiDAR nor a GPU. Our ground segmentation approach, applied to the raw depth image, extracts a traversability map for the safe driving of low-profile mobile robots. It is designed to guarantee real-time performance on a low-cost commercial single-board computer with integrated SLAM, global path planning, and motion planning. While running sensor data processing and other autonomous driving functions simultaneously, our navigation method issues control commands at a refresh rate of 18 Hz, whereas other systems have slower refresh rates. Our method outperforms current state-of-the-art navigation approaches in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in a residential lobby.
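The paper's exact segmentation is not reproduced in the abstract. A minimal sketch of one standard way to get a traversability mask from a raw depth image is to back-project each pixel to a metric height using the camera intrinsics and keep pixels near the floor plane; the function name, intrinsics, and tolerance are placeholders, and a level camera is assumed:

```python
import numpy as np

def traversability_mask(depth, fy, cy, cam_height, tol=0.05):
    """Mark pixels whose 3D height is near the floor plane.

    depth: (H, W) metric depth image from an RGB-D camera.
    fy, cy: vertical focal length and principal point; cam_height:
    camera height above the floor in meters. All values illustrative.
    """
    h, w = depth.shape
    v = np.arange(h).reshape(-1, 1)          # pixel row indices
    y_cam = (v - cy) * depth / fy            # y in camera frame (down is +)
    height_above_floor = cam_height - y_cam  # assumes a level camera
    return np.abs(height_above_floor) < tol  # True = traversable ground
```

Because this is a few vectorized array operations per frame, it is the kind of step that can plausibly run at high frame rates on a single-board computer without a GPU.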
Robot table tennis systems require a vision system that can track the ball position with low latency and a high sampling rate. Altering the ball to simplify tracking, for instance with an infrared coating, changes the physics of the ball trajectory. As a result, table tennis systems use custom tracking systems that track the ball in RGB images captured by a set of cameras, using heuristic algorithms that respect the real-time constraints. However, these heuristic algorithms often report erroneous ball positions, and table tennis policies typically need to incorporate additional heuristics to detect and possibly correct outliers. In this paper, we propose a vision system for object detection and tracking that focuses on reliability while providing real-time performance. Our assumption is that by using multiple cameras, we can find and discard errors in the object detection phase by checking for consistency with the positions reported by the other cameras. We provide an open-source implementation of the proposed tracking system to simplify future research in robot table tennis or related tracking applications with strong real-time requirements. We evaluate the proposed system thoroughly in simulation and on the real system, outperforming previous work. Furthermore, we show that the accuracy and robustness of the proposed system increase as more cameras are added. Finally, we evaluate the table tennis playing performance of an existing method on the real robot using the proposed vision system. We measure a slight increase in performance compared with a previous vision system, even after removing all the heuristics previously used to filter out erroneous ball observations.
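The abstract states the consistency idea but not its implementation. A minimal sketch of one way to realize it is to triangulate the ball from every camera pair with OpenCV and keep the candidate that reprojects consistently into the most cameras; the function name, reprojection threshold, and voting scheme are assumptions:

```python
import numpy as np
import cv2
from itertools import combinations

def consistent_ball_position(proj_mats, detections, max_err_px=5.0):
    """Triangulate the ball from camera pairs and keep the estimate
    that the most cameras agree with.

    proj_mats: list of 3x4 projection matrices; detections: list of
    (u, v) pixel detections, one per camera. max_err_px is illustrative.
    """
    candidates = []
    for i, j in combinations(range(len(proj_mats)), 2):
        pi = np.asarray(detections[i], float).reshape(2, 1)
        pj = np.asarray(detections[j], float).reshape(2, 1)
        X = cv2.triangulatePoints(proj_mats[i], proj_mats[j], pi, pj)
        X = (X[:3] / X[3]).ravel()           # homogeneous -> 3D point
        # Count the cameras whose detection agrees with this candidate.
        votes = 0
        for k, P in enumerate(proj_mats):
            x = P @ np.append(X, 1.0)        # reproject into camera k
            uv = x[:2] / x[2]
            if np.linalg.norm(uv - detections[k]) < max_err_px:
                votes += 1
        candidates.append((votes, X))
    # A detection error in one camera only corrupts the pairs that
    # include it; the majority-supported candidate survives.
    return max(candidates, key=lambda c: c[0])[1]
```

Adding cameras adds both more triangulation pairs and more votes, which matches the abstract's observation that accuracy and robustness increase with the number of cameras.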