With the potential applications of capsule robots in medical endoscopy, accurate dynamic control of the capsule robot is becoming increasingly important. At the scale of a capsule robot, the friction between the capsule and its environment plays an essential role in the dynamic model, yet it is usually difficult to model beforehand. In this paper, a tethered capsule robot system driven by a robot manipulator is built, where a strong magnetic Halbach array mounted on the robot's end-effector adjusts the state of the capsule. To increase control accuracy, the friction between the capsule and the environment is learned from demonstrated trajectories. With the learned friction model, experimental results demonstrate an improvement of 5.6% in tracking error.
The potential diagnostic applications of magnet-actuated capsules have greatly increased in recent years. Most of these applications demand accurate position control of the capsule. However, the friction between the robot and the environment, as well as the drag force from the tether, plays a significant role during motion control of the capsule, and these forces, especially the friction force, are typically hard to model beforehand. In this paper, we first design a magnet-actuated tethered capsule robot in which the driving magnet is mounted on the end of a robotic arm. We then propose a learning-based approach to model the friction force between the capsule and the environment, with the goal of increasing the control accuracy of the whole system. Finally, several real-robot experiments demonstrate the effectiveness of our proposed approach.
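To make the friction-learning idea above concrete, here is a minimal sketch that fits a friction model to demonstrated trajectory data. The Coulomb-plus-viscous model form, the data, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical demonstration data: capsule velocities and the measured
# residual force (actuation force minus the modelled magnetic force)
# along the motion axis.
rng = np.random.default_rng(0)
v = rng.uniform(-0.05, 0.05, size=200)      # capsule velocity [m/s]
f = 0.3 * np.sign(v) + 4.0 * v              # assumed Coulomb + viscous terms
f += 0.01 * rng.standard_normal(v.shape)    # sensor noise

# Fit f = mu_c * sign(v) + mu_v * v by least squares over the
# demonstrated trajectories; the fitted model can then be fed forward
# into the capsule's motion controller.
X = np.column_stack([np.sign(v), v])
mu_c, mu_v = np.linalg.lstsq(X, f, rcond=None)[0]
print(mu_c, mu_v)
```

In practice a more expressive regressor (e.g. a Gaussian process or neural network) could replace the linear fit, but the pipeline, collect demonstrations, regress the residual force, compensate it in the controller, stays the same.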
Moving through dynamic pedestrian environments is an important requirement for autonomous mobile robots. We present a model-based reinforcement learning approach for robots to navigate crowded environments. The navigation policy is trained with both real interaction data from multi-agent simulation and virtual data from a deep transition model that predicts the evolution of the dynamics surrounding the mobile robot. The model takes a laser-scan sequence and the robot's own state as input and outputs steering control. The laser sequence is further transformed into stacked local obstacle maps disentangled from the robot's ego motion to separate static from dynamic obstacles, simplifying model training. We observe that our method can be trained with significantly less real interaction data in the simulator while achieving a similar success rate on the social navigation task compared with other methods. Experiments were conducted in multiple social scenarios, both in simulation and on real robots; the learned policy guides the robots to their final targets successfully while avoiding pedestrians in a socially compliant manner. Code is available at https://github.com/YuxiangCui/model-based-social-navigation
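The laser-to-local-map transformation described above can be sketched as follows. Grid size, resolution, and function names are assumptions for illustration, not the released code:

```python
import numpy as np

def scan_to_local_map(ranges, angles, grid_size=64, resolution=0.1):
    """Project a 2D laser scan into an egocentric occupancy grid.

    The map is centred on the robot, so stacking maps from consecutive
    frames (after compensating the robot's ego motion) lets a model
    separate static obstacles from moving pedestrians.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    # convert metric coordinates to grid indices, robot at the centre
    ix = np.floor(xs / resolution).astype(int) + grid_size // 2
    iy = np.floor(ys / resolution).astype(int) + grid_size // 2
    valid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    grid[iy[valid], ix[valid]] = 1.0
    return grid

# a single return 1 m straight ahead of the robot
m = scan_to_local_map(np.array([1.0]), np.array([0.0]))
print(m.sum())  # → 1.0
```

A stack of such maps over the last few frames then forms the observation fed to the transition model and the policy.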
Several model-based and model-free methods have been proposed for robot trajectory learning. Both families have their benefits and drawbacks, and they can usually complement each other. Many works try to integrate model-based and model-free methods into one algorithm that performs well in simulators or in quasi-static robot tasks, but difficulties remain when such algorithms are applied to particular trajectory learning tasks. In this paper, we propose a robot trajectory learning framework for precise tasks with discontinuous dynamics and high speed. The trajectories learned from human demonstration are optimized by DDP and PoWER successively. The framework is tested on the Kendama manipulation task, which is difficult even for humans. The results show that our approach can plan trajectories that successfully complete the task.
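The model-free refinement step can be illustrated with a PoWER-style update: perturb the policy parameters, roll out, and take a reward-weighted average of the perturbations. This toy example on a quadratic reward is only a sketch of the update rule, not the authors' Kendama implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([1.0, -0.5, 2.0])          # unknown optimal parameters
reward = lambda th: -np.sum((th - target) ** 2)

theta = np.zeros(3)                          # e.g. trajectory/DMP weights
for _ in range(80):
    eps = rng.normal(0.0, 0.3, size=(15, 3))             # exploration noise
    R = np.array([reward(theta + e) for e in eps])       # rollout returns
    w = np.exp(R - R.max())                              # importance weights
    # reward-weighted average of the perturbations (PoWER-style update)
    theta = theta + (w[:, None] * eps).sum(axis=0) / w.sum()

print(np.round(theta, 1))
```

In the paper's framework, the DDP-optimized trajectory would serve as the initial `theta`, so the episodic refinement only has to correct for the unmodelled, discontinuous dynamics.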
This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focussing on optical tactile sensors, which help bridge from deep learning for vision to touch. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables such as motion-dependent shear. This involves including representative motions as unlabelled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of pose from touch will enable robots to safely and precisely control their physical interactions, underlying a wide range of object exploration and manipulation tasks.
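The idea of including representative motions as unlabelled perturbations of the training data can be sketched as a label-preserving augmentation: tactile images are shifted to mimic motion-dependent shear while the pose label stays fixed. Array shapes, shift ranges, and function names here are illustrative assumptions:

```python
import numpy as np

def augment_with_shear(images, poses, max_shift=3, copies=4, seed=0):
    """Add shear-like perturbations to tactile images without relabelling.

    Because the pose label is unchanged, the trained model learns to
    treat motion-dependent shear as a nuisance variable.
    """
    rng = np.random.default_rng(seed)
    out_imgs, out_poses = [], []
    for img, pose in zip(images, poses):
        for _ in range(copies):
            dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out_imgs.append(shifted)
            out_poses.append(pose)   # label unchanged: shear is a nuisance
    return np.stack(out_imgs), np.stack(out_poses)

imgs = np.zeros((2, 8, 8)); imgs[:, 4, 4] = 1.0
poses = np.array([[0.0, 1.0], [2.0, 3.0]])
aug_imgs, aug_poses = augment_with_shear(imgs, poses)
print(aug_imgs.shape, aug_poses.shape)  # → (8, 8, 8) (8, 2)
```

The network and training hyperparameters (learning rate, augmentation ranges, architecture) would then be tuned by Bayesian optimization, as the abstract describes.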
Today, physical Human-Robot Interaction (pHRI) is a very popular topic in ground manipulation, and Aerial Physical Interaction (APhI) is also developing rapidly. Nevertheless, pHRI with aerial vehicles has not been addressed so far. In this work, we present the study of one of the first systems in which a human is physically connected to an aerial vehicle by a cable. We want the robot to pull the human toward a desired position (or along a path) using only forces as an indirect communication channel. We propose an admittance-based approach that makes pHRI safe. A controller, inspired by the literature on flexible manipulators, computes the desired interaction forces that properly guide the human. The stability of the system is formally proved with a Lyapunov-based argument. The system is also shown to be passive, and thus robust to non-idealities such as additional human forces, time-varying inputs, and other external disturbances. We also design a maneuver-regulation policy to simplify the path-following problem. The overall method has been experimentally validated on a group of four subjects, showing reliable and safe pHRI.
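The admittance-based idea can be illustrated with a one-dimensional filter: the measured human force on the cable is mapped to a reference motion through a virtual mass-damper-spring system. All parameter values are illustrative, not those of the paper:

```python
# Minimal 1-D admittance filter (semi-implicit Euler integration).
M, D, K = 2.0, 8.0, 5.0      # virtual inertia, damping, stiffness
dt = 0.01
x, v = 0.0, 0.0              # reference position and velocity

def admittance_step(x, v, f_human):
    # M*a + D*v + K*x = f_human  →  solve for the acceleration
    a = (f_human - D * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# a constant 5 N pull on the cable settles at x = F/K = 1 m
for _ in range(5000):
    x, v = admittance_step(x, v, 5.0)
print(round(x, 2))  # → 1.0
```

Choosing `M`, `D`, `K` shapes how compliantly the vehicle yields to the human's force, which is what makes the interaction safe; the passivity and Lyapunov arguments in the paper formalize this for the full cable-vehicle-human system.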