
Online User Assessment for Minimal Intervention During Task-Based Robotic Assistance

Added by Todd Murphey
Publication date: 2018
Language: English





We propose a novel criterion for evaluating user input for human-robot interfaces for known tasks. We use the mode insertion gradient (MIG)---a tool from hybrid control theory---as a filtering criterion that instantaneously assesses the impact of user actions on a dynamic system over a time window into the future. As a result, the filter is permissive to many chosen strategies, minimally engaging, and skill-sensitive---qualities desired when evaluating human actions. Through a human study with 28 healthy volunteers, we show that the criterion exhibits a low, but significant, negative correlation between skill level, as estimated from task-specific measures in unassisted trials, and the rate of controller intervention during assistance. Moreover, a MIG-based filter can be utilized to create a shared control scheme for training or assistance. In the human study, we observe a substantial training effect when using a MIG-based filter to perform cart-pendulum inversion, particularly when comparing improvement via the RMS error measure. Using simulation of a controlled spring-loaded inverted pendulum (SLIP) as a test case, we observe that the MIG criterion could be used for assistance to guarantee either task completion or safety of a joint human-robot system, while maintaining the system's flexibility with respect to user-chosen strategies.
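To make the filtering idea concrete, below is a minimal, hypothetical sketch of a MIG-style gate (not the authors' code): it forward-simulates a nominal trajectory over a short horizon, integrates the adjoint backward, and passes the user's action through only if its mode insertion gradient predicts no increase in task cost. The Euler integration, the cost and dynamics interfaces, the acceptance threshold `eps`, the double-integrator toy problem, and all names are assumptions made for illustration.

```python
import numpy as np

def mode_insertion_gradient(x0, u_user, u_nom, f, dfdx, dldx, T=1.0, dt=0.01):
    """Approximate dJ/dlambda for inserting u_user (instead of u_nom) at t=0,
    evaluated along the nominal trajectory over a horizon T into the future."""
    steps = int(T / dt)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):                        # forward pass under u_nom
        xs.append(xs[-1] + dt * f(xs[-1], u_nom))
    rho = np.zeros_like(xs[-1])                   # adjoint state, rho(T) = 0
    for k in reversed(range(steps)):              # backward pass for the adjoint
        rho = rho + dt * (dldx(xs[k]) + dfdx(xs[k], u_nom).T @ rho)
    return rho @ (f(xs[0], u_user) - f(xs[0], u_nom))   # MIG at the insertion time

def filtered_action(x, u_user, u_nom, f, dfdx, dldx, eps=0.0):
    """Pass the user's input through only if it is predicted not to hurt the task."""
    mig = mode_insertion_gradient(x, u_user, u_nom, f, dfdx, dldx)
    return u_user if mig <= eps else u_nom

# Toy usage on a double integrator regulating position to the origin.
f = lambda x, u: np.array([x[1], u])
dfdx = lambda x, u: np.array([[0.0, 1.0], [0.0, 0.0]])
dldx = lambda x: np.array([2.0 * x[0], 0.0])      # running cost l(x) = position^2
print(filtered_action(np.array([1.0, 0.0]), u_user=1.0, u_nom=-1.0,
                      f=f, dfdx=dfdx, dldx=dldx))  # rejects the destabilizing push
```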



Related research

Shared autonomy enables robots to infer user intent and assist in accomplishing it. But when the user wants to do a new task that the robot does not know about, shared autonomy will hinder their performance by attempting to assist them with something that is not their intent. Our key idea is that the robot can detect when its repertoire of intents is insufficient to explain the user's input, and give them back control. This then enables the robot to observe unhindered task execution, learn the new intent behind it, and add it to its repertoire. We demonstrate with both a case study and a user study that our proposed method maintains good performance when the human's intent is in the robot's repertoire, outperforms prior shared autonomy approaches when it isn't, and successfully learns new skills, enabling efficient lifelong learning for confidence-based shared autonomy.
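A minimal sketch of the confidence gate this abstract describes, under assumed interfaces: `likelihood(intent, x, u)` scores how well a known intent explains the current input, and control is returned to the user (with the demonstration logged for later learning) whenever no intent scores above a threshold. The threshold, the `assist` blending function, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def arbitrate(x, u_user, intents, likelihood, assist, demo_log, conf=0.6):
    """Assist only when some known intent explains the user's input well."""
    scores = np.array([likelihood(g, x, u_user) for g in intents]) if intents else np.array([])
    if scores.size == 0 or scores.max() < conf:
        demo_log.append((x, u_user))       # unexplained input: yield control, record it
        return u_user                      # a new intent can later be fit from demo_log
    g_star = intents[int(scores.argmax())]
    return assist(g_star, x, u_user)       # blend toward the inferred intent
```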
Despite the fact that robotic platforms can provide both consistent practice and objective assessments of users over the course of their training, there are relatively few instances where physical human-robot interaction has been significantly more effective than unassisted practice or human-mediated training. This paper describes a hybrid shared control robot, which enhances task learning through kinesthetic feedback. The assistance assesses user actions using a task-specific evaluation criterion and selectively accepts or rejects them at each time instant. Through two human subject studies (total n=68), we show that this hybrid approach of switching between full transparency and full rejection of user inputs leads to increased skill acquisition and short-term retention compared to unassisted practice. Moreover, we show that the shared control paradigm exhibits features previously shown to promote successful training. It avoids user passivity by only rejecting user actions and allowing failure at the task. It improves performance during assistance, providing meaningful task-specific feedback. It is sensitive to the initial skill of the user and behaves as an 'assist-as-needed' control scheme---adapting its engagement in real time based on the performance and needs of the user. Unlike other successful algorithms, it does not require explicit modulation of the level of impedance or error amplification during training, and it is permissive to a range of strategies because of its evaluation criterion. We demonstrate that the proposed hybrid shared control paradigm with a task-based minimal intervention criterion significantly enhances task-specific training.
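As a rough illustration of the accept/reject switching described here (not the authors' implementation), the sketch below gates each user command through a generic evaluation criterion and tracks the rejection rate as a simple proxy for how much the controller engages. The `criterion` interface, the choice to substitute a zero command on rejection, and all names are assumptions.

```python
def run_trial(x0, user_policy, criterion, step, horizon=1000):
    """criterion(x, u) stands in for a task-specific test such as the MIG filter above."""
    x, rejections = x0, 0
    for _ in range(horizon):
        u_user = user_policy(x)
        if criterion(x, u_user):           # action predicted to help the task
            u = u_user                     # full transparency
        else:
            u = 0.0                        # full rejection of the user's input
            rejections += 1
        x = step(x, u)
    return x, rejections / horizon         # rejection rate ~ controller engagement
```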
Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today's representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.
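The sketch below shows one plausible way to set up such a task-aware codec, assuming a frozen, pre-trained task model and a simple L1 penalty on the code as a rate proxy; the architecture, dimensions, loss weighting, and all names are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class Codec(nn.Module):
    """Small learned bottleneck; only code_dim values would be 'transmitted'."""
    def __init__(self, in_dim=1024, code_dim=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, code_dim)
        self.dec = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def codec_loss(codec, task_model, task_loss, x, y, beta=1e-3):
    """task_model is pre-trained and frozen elsewhere; gradients update only the codec."""
    x_hat, z = codec(x)
    return task_loss(task_model(x_hat), y) + beta * z.abs().mean()  # task objective + rate proxy
```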
A Supernumerary Robotic Device (SRD) is an ideal solution for providing robotic assistance in overhead manual manipulation. Since both arms are occupied by the overhead task, additional arms are desirable to assist with other subtasks, such as supporting the far end of a long plate and pushing it upward to fit it in the ceiling. In this study, a method that maps human muscle force to an SRD for overhead task assistance is proposed. Our methodology is to utilize redundant DoFs, such as the idle muscles in the leg, to control the supporting force of the SRD. An sEMG device is worn on the operator's shank, where muscle signals are measured, parsed, and transmitted to the SRD for control. For control, we adopt stiffness control in the task space based on torque control at the joint level. We are motivated by the fact that humans achieve daily manipulation merely through the simple, inherent compliance of joints driven by muscles. We explore estimating the force of particular human muscles and controlling the SRD to imitate muscle behavior and output supporting forces to accomplish subtasks such as overhead supporting. The sEMG signals detected from human muscles are extracted, filtered, rectified, and parsed to estimate the muscle force. We use this force information as the intent of the operator for the proper overhead supporting force. As one of the well-known compliance control methods, stiffness control is easy to achieve using a few straightforward parameters, namely stiffness and equilibrium point. By tuning the stiffness and equilibrium point, the supporting force of the SRD in task space can be easily controlled. The muscle force estimated by sEMG is mapped to the desired force in the task space of the SRD. The desired force is translated into a stiffness or equilibrium point to output the corresponding supporting force.
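A loose sketch of the signal path described above (band-pass filter, rectify, low-pass envelope, then map the estimated activation to a task-space stiffness/equilibrium command). All cutoff frequencies, gains, the linear activation-to-force map, and the function names are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0):
    """Estimate muscle activation from a raw sEMG channel sampled at fs Hz."""
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype="band")
    emg = filtfilt(b, a, raw)                  # band-pass to remove drift and noise
    rect = np.abs(emg)                         # full-wave rectification
    b, a = butter(2, 6.0 / (fs / 2), btype="low")
    return filtfilt(b, a, rect)                # low-pass envelope ~ activation level

def supporting_force_command(envelope, k_force=200.0, stiffness=500.0, x_current=0.0):
    """Map activation to a desired force, then shift the equilibrium point so that
    the stiffness controller F = K (x_eq - x) produces that force."""
    f_des = k_force * envelope[-1]             # assumed linear activation-to-force map
    x_eq = x_current + f_des / stiffness
    return f_des, x_eq
```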
This paper presents a user-centered physical interface for collaborative mobile manipulators in industrial manufacturing and logistics applications. The proposed work builds on our earlier MOCA-MAN interface, through which a mobile manipulator could be physically coupled to operators to assist them in performing daily activities. The new interface presents the following additions: i) a simple, industrial-like design that allows the worker to couple/decouple easily and to operate mobile manipulators locally; ii) enhanced loco-manipulation capabilities that do not compromise the worker's mobility. In addition, an experimental evaluation with six human subjects is carried out to analyze the enhanced locomotion and flexibility of the proposed interface in terms of mobility constraint, usability, and physical load reduction.