
Human-centered Control of a Growing Soft Robot for Object Manipulation

Added by Fabio Stroppa
Publication date: 2019
Research language: English





We present a user-friendly interface to teleoperate a soft robot manipulator in a complex environment. Key components of the system include a manipulator with a grasping end-effector that grows via tip eversion, gesture-based control, and a haptic display that provides the operator with feedback and guidance. In this initial work, the operator uses the soft robot to build a tower of blocks; future work will extend this to shared-autonomy scenarios in which both the human operator and robot intelligence are necessary for task completion.
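As a rough illustration of how such a gesture-based interface with haptic guidance might be structured, the sketch below maps a tracked hand pose to growth, steering, and grasp commands and renders a guidance force toward a target. The function names, gains, and command structure are assumptions made for illustration, not the system described above.

    import numpy as np

    # Hypothetical gesture-to-command mapping for a tip-everting ("vine") robot.
    # Gains and the command structure are illustrative assumptions only.
    GROW_GAIN = 0.05    # m/s of growth per meter of forward hand motion
    STEER_GAIN = 0.8    # rad of base steering per rad of hand rotation

    def gesture_to_command(hand_pos, hand_yaw, grip_closed, home_pos):
        """Map a tracked hand pose to (growth velocity, steering angle, grasp)."""
        forward = hand_pos[0] - home_pos[0]        # forward/backward hand motion
        growth_vel = GROW_GAIN * forward           # evert (+) or invert (-) the body
        steer_angle = STEER_GAIN * hand_yaw        # orient the robot base
        return growth_vel, steer_angle, grip_closed

    def haptic_guidance(tip_pos, target_pos, stiffness=20.0):
        """Render a guidance force pulling the operator's hand toward the target."""
        return stiffness * (np.asarray(target_pos) - np.asarray(tip_pos))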



Related research

Soft growing robots are proposed for use in applications such as complex manipulation tasks or navigation in disaster scenarios. Safe interaction and ease of production promote the usage of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. In this paper, we propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study was conducted to assess the intuitiveness of the interface and the performance of our soft robot, involving a pick-and-place manipulation task. The results show that users completed the task with a success rate of 97%, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for control of a soft growing robot manipulator.
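A minimal sketch of how arm movements might be mapped onto the soft growing robot's degrees of freedom (base azimuth, base elevation, and body length) is given below; the scaling factor and length limits are illustrative assumptions, not values from the study.

    import numpy as np

    # Hypothetical mapping from the operator's arm pose to the robot's actuated
    # degrees of freedom. Scaling and limits are assumptions for illustration.
    LENGTH_SCALE = 4.0             # robot length per meter of arm extension
    LENGTH_LIMITS = (0.1, 2.0)     # m, assumed eversion range

    def arm_to_robot(hand_pos, shoulder_pos):
        """Convert hand position relative to the shoulder into robot commands."""
        v = np.asarray(hand_pos) - np.asarray(shoulder_pos)
        azimuth = np.arctan2(v[1], v[0])                      # pan of the robot base
        elevation = np.arctan2(v[2], np.linalg.norm(v[:2]))   # tilt of the robot base
        length = np.clip(LENGTH_SCALE * np.linalg.norm(v), *LENGTH_LIMITS)
        return azimuth, elevation, length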
Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how the control of a task should be divided between the human and robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as the control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator.
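The sketch below illustrates one way control could be split along degrees of freedom between the human and the robot, with a haptic cue derived from the robot's position error; the per-DOF authority split and gains are assumptions, not the specific paradigms implemented in the paper.

    import numpy as np

    # Hypothetical per-DOF shared-control blend: the human keeps some degrees of
    # freedom, the robot automates the rest. Authority split and gains are
    # illustrative assumptions.
    ALPHA = np.array([1.0, 1.0, 0.0])   # authority per DOF: 1 = human, 0 = robot
    K_HAPTIC = 15.0                     # N/m, haptic stiffness on tip error

    def blend_commands(human_cmd, auto_cmd):
        """Combine human and autonomous commands degree of freedom by degree of freedom."""
        human_cmd, auto_cmd = np.asarray(human_cmd), np.asarray(auto_cmd)
        return ALPHA * human_cmd + (1.0 - ALPHA) * auto_cmd

    def haptic_cue(tip_pos, desired_pos):
        """Force fed back to the operator, proportional to the robot's tip error."""
        return K_HAPTIC * (np.asarray(desired_pos) - np.asarray(tip_pos))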
This paper presents a user-centered physical interface for collaborative mobile manipulators in industrial manufacturing and logistics applications. The proposed work builds on our earlier MOCA-MAN interface, through which a mobile manipulator could be physically coupled to operators to assist them in performing daily activities. The new interface presents the following additions: i) a simple, industrial-like design that allows the worker to couple/decouple easily and to operate mobile manipulators locally; ii) enhanced loco-manipulation capabilities that do not compromise worker mobility. In addition, an experimental evaluation with six human subjects was carried out to analyze the enhanced locomotion and flexibility of the proposed interface in terms of mobility constraint, usability, and physical load reduction.
The increasing presence of robots alongside humans, such as in human-robot teams in manufacturing, gives rise to research questions about the kind of behaviors people prefer in their robot counterparts. We term actions that support interaction by reducing future interference with others as supportive robot actions and investigate their utility in a co-located manipulation scenario. We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones when they reduce future goal-conflicts. Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task. We implemented these modes on a physical robot in a user study where a human and a robot perform object placement on a shared table. Our results show that a supportive robot was perceived as a more favorable coworker by the human and also reduced interference with the human in the more difficult of two scenarios. However, it also took longer to complete the task, highlighting an interesting trade-off between task efficiency and human preference that needs to be considered before designing robot behavior for close-proximity manipulation scenarios.
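The sketch below shows one possible form of the decision rule behind such a supportive mode, trading the robot's own task time against predicted interference with the human; the cost model and weighting are illustrative assumptions, not the method used in the study.

    # Hypothetical decision rule: prefer a supportive action when its predicted
    # reduction in future interference outweighs the delay to the robot's task.
    INTERFERENCE_WEIGHT = 2.0   # how strongly interference is penalized vs. time

    def choose_action(task_action, supportive_action, predict_time, predict_interference):
        """Pick the action with the lower combined time + interference cost."""
        def cost(action):
            return predict_time(action) + INTERFERENCE_WEIGHT * predict_interference(action)
        return min((task_action, supportive_action), key=cost)

    # Example usage with placeholder cost predictors:
    # choose_action("place_own_block", "clear_shared_area",
    #               predict_time=lambda a: 3.0 if a == "clear_shared_area" else 2.0,
    #               predict_interference=lambda a: 0.2 if a == "clear_shared_area" else 0.8)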
Imitating human demonstrations is a promising approach to endow robots with various manipulation capabilities. While recent advances have been made in imitation learning and batch (offline) reinforcement learning, a lack of open-source human datasets and reproducible learning methods makes assessing the state of the field difficult. In this paper, we conduct an extensive study of six offline learning algorithms for robot manipulation on five simulated and three real-world multi-stage manipulation tasks of varying complexity, and with datasets of varying quality. Our study analyzes the most critical challenges when learning from offline human data for manipulation. Based on the study, we derive a series of lessons including the sensitivity to different algorithmic design choices, the dependence on the quality of the demonstrations, and the variability based on the stopping criteria due to the different objectives in training and evaluation. We also highlight opportunities for learning from human datasets, such as the ability to learn proficient policies on challenging, multi-stage tasks beyond the scope of current reinforcement learning methods, and the ability to easily scale to natural, real-world manipulation scenarios where only raw sensory signals are available. We have open-sourced our datasets and all algorithm implementations to facilitate future research and fair comparisons in learning from human demonstration data. Codebase, datasets, trained models, and more are available at https://arise-initiative.github.io/robomimic-web/
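As a point of reference for the offline learning setting studied here, the sketch below trains a simple behavioral-cloning policy on placeholder (observation, action) pairs in PyTorch; it is not the robomimic implementation referenced above, only an illustration of the basic objective of imitating offline demonstrations.

    import torch
    import torch.nn as nn

    # Minimal behavioral-cloning sketch on synthetic data; dimensions, network
    # size, and training settings are illustrative assumptions.
    obs_dim, act_dim, n_demos = 10, 4, 2048
    obs = torch.randn(n_demos, obs_dim)        # placeholder observations
    actions = torch.randn(n_demos, act_dim)    # placeholder demonstrated actions

    policy = nn.Sequential(
        nn.Linear(obs_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, act_dim),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for epoch in range(50):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(policy(obs), actions)  # imitate the demonstrations
        loss.backward()
        optimizer.step()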