
Shared-Control Teleoperation Paradigms on a Soft Growing Robot Manipulator

Published by: Fabio Stroppa
Publication date: 2021
Research field: Informatics Engineering
Language: English





Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how the control of a task should be divided between the human and robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as the control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator.
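The abstract describes splitting control along degrees of freedom, with the human and robot commanding different axes simultaneously. A minimal sketch of that per-DOF blending idea, with hypothetical command vectors and a made-up DOF assignment (not the paper's actual controller):

```python
import numpy as np

# Hypothetical sketch: each degree of freedom of the command is taken
# either from the human operator or from the robot's autonomous
# controller, and both act at the same time.
def blend_commands(human_cmd, robot_cmd, human_dofs):
    """Compose a command vector per DOF from human and robot inputs.

    human_cmd, robot_cmd: commanded velocities per DOF.
    human_dofs: boolean mask; True where the human retains control.
    """
    human_cmd = np.asarray(human_cmd, dtype=float)
    robot_cmd = np.asarray(robot_cmd, dtype=float)
    mask = np.asarray(human_dofs, dtype=bool)
    return np.where(mask, human_cmd, robot_cmd)

# Example: human steers two DOFs, robot handles growth/retraction.
cmd = blend_commands([0.2, 0.0, 0.5], [0.0, 0.1, -0.3], [True, False, True])
print(cmd)  # [0.2 0.1 0.5]
```

The mask could be fixed per paradigm (as when automation is added to one aspect of the task at a time) or switched online as control is gradually handed to the robot.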




Read also

We present a user-friendly interface to teleoperate a soft robot manipulator in a complex environment. Key components of the system include a manipulator with a grasping end-effector that grows via tip eversion, gesture-based control, and haptic display to the operator for feedback and guidance. In the initial work, the operator uses the soft robot to build a tower of blocks, and future work will extend this to shared-autonomy scenarios in which the human operator and robot intelligence are both necessary for task completion.
Attaching a robotic manipulator to a flying base allows for significant improvements in the reachability and versatility of manipulation tasks. In order to explore such systems while taking advantage of human capabilities in terms of perception and cognition, bilateral teleoperation arises as a reasonable solution. However, since most telemanipulation tasks require visual feedback in addition to the haptic one, real-time (task-dependent) positioning of a video camera, which is usually attached to the flying base, becomes an additional objective to be fulfilled. Since the flying base is part of the kinematic structure of the robot, if proper care is not taken, moving the video camera could undesirably disturb the end-effector motion. For that reason, the necessity of controlling the base position in the null space of the manipulation task arises. In order to provide the operator with meaningful information about the limits of the allowed motions in the null space, this paper presents a novel haptic concept called Null-Space Wall. In addition, a framework to allow stable bilateral teleoperation of both tasks is presented. Numerical simulation data confirm that the proposed framework is able to keep the system passive while allowing the operator to perform time-delayed telemanipulation and command the base to a task-dependent optimal pose.
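Controlling the base in the null space of the manipulation task, as this abstract describes, rests on the standard null-space projector N = I − J⁺J, which maps any joint velocity to one that leaves the end-effector velocity unchanged. A minimal sketch with a toy Jacobian (the paper's actual kinematics and haptic wall are not reproduced here):

```python
import numpy as np

# Null-space projector for a task with Jacobian J: joint velocities
# projected through N = I - pinv(J) @ J produce zero end-effector
# velocity, so secondary motion (e.g. camera positioning) does not
# disturb the manipulation task.
def null_space_projector(J):
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])          # toy 2-task, 3-joint Jacobian
N = null_space_projector(J)

q_dot_camera = np.array([1.0, 2.0, 3.0])  # desired camera-driven motion
q_dot_safe = N @ q_dot_camera             # projected, task-consistent command

# The projected command contributes no end-effector velocity:
print(np.allclose(J @ q_dot_safe, 0.0))  # True
```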
Soft growing robots are proposed for use in applications such as complex manipulation tasks or navigation in disaster scenarios. Safe interaction and ease of production promote the usage of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. In this paper, we propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study was conducted to assess the intuitiveness of the interface and the performance of our soft robot, involving a pick-and-place manipulation task. The results show that users completed the task with a success rate of 97%, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for control of a soft growing robot manipulator.
Remote teleoperation of robots can broaden the reach of domain specialists across a wide range of industries such as home maintenance, health care, light manufacturing, and construction. However, current direct control methods are impractical, and existing tools for programming robots remotely have focused on users with significant robotics experience. Extending remote robot programming to end users, i.e., users who are experts in a domain but novices in robotics, requires tools that balance the rich features necessary for complex teleoperation tasks with ease of use. The primary challenge to usability is that novice users are unable to specify complete and robust task plans that allow a robot to perform duties autonomously, particularly in highly variable environments. Our solution is to allow operators to specify shorter sequences of high-level commands, which we call task-level authoring, to create periods of variable robot autonomy. This approach allows inexperienced users to create robot behaviors in uncertain environments by interleaving exploration, specification of behaviors, and execution as separate steps. End users are able to break down the specification of tasks and adapt to the current needs of the interaction and environment, combining the reactivity of direct control with asynchronous operation. In this paper, we describe a prototype system contextualized in light manufacturing and its empirical validation in a user study where 18 participants with some programming experience were able to perform a variety of complex telemanipulation tasks with little training. Our results show that our approach allowed users to create flexible periods of autonomy and solve rich manipulation tasks. Furthermore, participants significantly preferred our system over comparable, more direct interfaces, demonstrating the potential of our approach.
Model Predictive Control (MPC) has shown great performance in target optimization and constraint satisfaction. However, the heavy computation of the Optimal Control Problem (OCP) at each triggering instant introduces a serious delay from state sampling to the control signals, which limits the applications of MPC in resource-limited robot manipulator systems over complicated tasks. In this paper, we propose a novel robust tube-based smooth-MPC strategy for nonlinear robot manipulator planning systems with disturbances and constraints. Based on piecewise linearization and state prediction, our control strategy improves the smoothness and reduces the delay of the control process. By deducing the deviation between the real system states and the nominal system states, we can predict the next real state set at the current instant. By using this state set as the initial condition, we can solve the next OCP ahead of time and store the optimal controls based on the nominal system states, which eliminates the delay. Furthermore, we linearize the nonlinear system with a given upper bound on the error, reducing the complexity of the OCP and improving the response speed. Based on the theoretical framework of tube MPC, we prove that the control strategy is recursively feasible and closed-loop stable under the constraints and disturbances. Numerical simulations have verified the efficacy of the designed approach compared with conventional MPC.
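The solve-ahead idea in this abstract can be sketched as follows: instead of solving the OCP only after sampling the next state, the controller predicts the next nominal state at the current instant, pre-solves the next OCP, and stores the control so it can be applied with no solver delay. The sketch below uses a toy linear model and a placeholder state-feedback law standing in for the real tube-MPC optimization, which is outside its scope:

```python
import numpy as np

def nominal_step(x, u, A, B):
    # One step of the nominal (disturbance-free) linear model.
    return A @ x + B @ u

def solve_ocp(x, A, B, K):
    # Placeholder "OCP": a simple stabilizing state feedback stands in
    # for the actual tube-MPC optimization described in the paper.
    return -K @ x

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 1.5]])               # assumed stabilizing gain

x = np.array([1.0, 0.0])
u = solve_ocp(x, A, B, K)                # control for the current instant
x_pred = nominal_step(x, u, A, B)        # predict the next nominal state now
u_next = solve_ocp(x_pred, A, B, K)      # pre-solve the next OCP and store it
# At the next sampling instant, u_next is applied immediately,
# hiding the OCP computation time from the control loop.
```

In the paper's tube-based setting, the prediction would be a set (nominal state plus a disturbance-bounded tube) rather than a single point, and the ancillary controller keeps the real state inside that tube.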