
Does spontaneous motion lead to intuitive Body-Machine Interfaces? A fitness study of different body segments for wearable telerobotics

Added by Matteo Macchini
Publication date: 2020
Research language: English





Human-Robot Interfaces (HRIs) represent a crucial component in telerobotic systems. Body-Machine Interfaces (BoMIs) based on body motion can feel more intuitive than standard HRIs for naive users, as they leverage humans' natural control capability over their own movements. Among the different methods used to map human gestures into robot commands, data-driven approaches select a set of body segments and transform their motion into robot commands based on the user's spontaneous motion patterns. Although this method is versatile and generic, there is no scientific evidence that an interface based on spontaneous motion maximizes effectiveness. In this study, we compare a set of BoMIs based on different body segments to investigate this aspect. We evaluate the interfaces in a teleoperation task with a fixed-wing drone and observe users' performance and feedback. To this end, we use a framework that allows a user to control the drone with a single Inertial Measurement Unit (IMU) and without prior instructions. We show through a user study that selecting the body segment for a BoMI based on spontaneous motion can lead to sub-optimal performance. Based on our findings, we suggest additional metrics based on biomechanical and behavioral factors that might improve data-driven methods for the design of HRIs.
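To make the data-driven idea concrete, the sketch below fits a linear map from single-IMU body angles to drone attitude commands by least squares, mimicking a calibration on spontaneous motion recordings. The synthetic data, the variable names, and the choice of a linear model are illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np

# Hypothetical calibration data: body orientation (roll, pitch, yaw) from a
# single IMU, recorded while the user spontaneously imitates known drone
# maneuvers; true_map below exists only to generate synthetic data.
rng = np.random.default_rng(0)
body = rng.normal(size=(500, 3))                      # samples x IMU angles
true_map = np.array([[1.2, 0.0], [0.0, 0.9], [0.1, -0.2]])
drone_cmd = body @ true_map + 0.05 * rng.normal(size=(500, 2))

# Data-driven calibration: fit a linear body-to-command map by least squares.
W, *_ = np.linalg.lstsq(body, drone_cmd, rcond=None)

def imu_to_command(imu_angles):
    """Map a live IMU reading to roll/pitch commands for the drone."""
    return imu_angles @ W

print(imu_to_command(np.array([0.1, -0.2, 0.0])))     # approx [0.12, -0.18]
```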



Related research

The operation of telerobotic systems can be a challenging task, requiring intuitive and efficient interfaces to enable inexperienced users to attain a high level of proficiency. Body-Machine Interfaces (BoMIs) represent a promising alternative to standard control devices, such as joysticks, because they leverage intuitive body motion and gestures. It has been shown that the use of Virtual Reality (VR) and first-person-view perspectives can increase the user's sense of presence in avatars. However, it is unclear whether these beneficial effects also occur in the teleoperation of non-anthropomorphic robots that display motion patterns different from those of humans. Here we describe experimental results on the teleoperation of a non-anthropomorphic drone showing that VR correlates with a higher sense of spatial presence, whereas viewpoints moving coherently with the robot are associated with a higher sense of embodiment. Furthermore, the experimental results show that spontaneous body motion patterns are affected by the VR and viewpoint conditions in terms of variability, amplitude, and robot correlates, suggesting that the design of BoMIs for drone teleoperation must take into account the use of Virtual Reality and the choice of viewpoint.
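The three motion descriptors this abstract compares across conditions (variability, amplitude, robot correlates) can be computed from recorded trajectories in a few lines. The sketch below assumes per-axis standard deviation for variability, peak-to-peak range for amplitude, and Pearson correlation with the robot trajectory as the correlate; these concrete metric choices are plausible readings, not the paper's exact definitions.

```python
import numpy as np

def motion_metrics(body_traj, robot_traj):
    """Summarize a body-motion recording (samples x axes) with three
    descriptors: variability, amplitude, and correlation with the robot."""
    variability = body_traj.std(axis=0)                        # per-axis std
    amplitude = body_traj.max(axis=0) - body_traj.min(axis=0)  # peak-to-peak
    # Robot correlate: Pearson correlation between each body axis and the
    # corresponding robot axis.
    correlates = np.array([
        np.corrcoef(body_traj[:, i], robot_traj[:, i])[0, 1]
        for i in range(body_traj.shape[1])
    ])
    return variability, amplitude, correlates

# Synthetic example: the robot lags the body motion by a fixed delay.
t = np.linspace(0, 10, 1000)
body = np.column_stack([np.sin(t), 0.5 * np.cos(t)])
robot = np.column_stack([np.sin(t - 0.2), 0.5 * np.cos(t - 0.2)])
print(motion_metrics(body, robot))
```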
Teleoperation platforms often require the user to be situated at a fixed location to both visualize and control the movement of the robot, and thus do not provide the operator with much mobility. One example is existing robotic surgery solutions, which require surgeons to be away from the patient, attached to consoles where their heads must be fixed and their arms can only move in a limited space. This creates a barrier between physicians and patients that does not exist in conventional surgery. To address this issue, we propose a mobile telesurgery solution in which surgeons are no longer mechanically tied to control consoles and can teleoperate the robots from the patient's bedside, using their arms equipped with wireless sensors and viewing the endoscope video via optical see-through HMDs. We evaluate the feasibility and efficiency of our user interaction method with a standard surgical robotic manipulator via two tasks with different levels of required dexterity. The results indicate that, with sufficient training, our proposed platform can attain similar efficiency while providing added mobility for the operator.
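A common interaction pattern behind this kind of bedside teleoperation is a clutched, scaled delta mapping from the operator's tracked hand to the manipulator tip. The sketch below illustrates that general pattern under assumed names and gains; it is not the paper's actual implementation.

```python
import numpy as np

class ClutchedTeleop:
    """Scaled incremental (clutched) mapping from a tracked hand position to
    a surgical manipulator tip, a common pattern in wearable teleoperation."""

    def __init__(self, scale=0.3):
        self.scale = scale          # motion scaling for fine dexterity
        self.engaged = False
        self.last_hand = None

    def update(self, hand_pos, clutch_pressed, robot_tip):
        """Return the new commanded tip position."""
        if not clutch_pressed:
            self.engaged = False
            return robot_tip        # clutch released: robot holds position
        if not self.engaged:        # clutch just engaged: latch reference
            self.engaged = True
            self.last_hand = hand_pos
        delta = (hand_pos - self.last_hand) * self.scale
        self.last_hand = hand_pos
        return robot_tip + delta

teleop = ClutchedTeleop(scale=0.3)
tip = np.zeros(3)
tip = teleop.update(np.array([0.00, 0.0, 0.00]), True, tip)
tip = teleop.update(np.array([0.02, 0.0, 0.01]), True, tip)
print(tip)                          # [0.006, 0.0, 0.003]
```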
In this paper we present a new approach to dynamic motion planning for legged robots. We formulate a trajectory optimization problem based on a compact form of the robot dynamics, obtained by projecting the rigid body dynamics onto the null space of the Constraint Jacobian. As a consequence of the projection, contact forces are removed from the model while their effects are still taken into account. This approach makes it possible to solve the optimal control problem of a floating-base constrained multibody system without an explicit contact model. We use direct transcription to solve the optimization numerically. As the contact forces are not part of the decision variables, the size of the resulting discrete mathematical program is reduced and solutions can therefore be obtained in tractable time. Using a predefined sequence of contact configurations (phases), our approach solves motions in which contact switches occur; transitions between phases are resolved automatically, without a model of the switching dynamics. We present results on a hydraulic quadruped robot (HyQ), including single-phase (standing, crouching) as well as multi-phase (rearing, diagonal leg balancing, and stepping) dynamic motions.
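The projection step can be made concrete with a little linear algebra: given the constraint Jacobian Jc, the projector P = I - pinv(Jc) @ Jc maps the dynamics onto the null space of the constraints, so the contact-force term Jc.T @ f vanishes from the projected model. A minimal numpy sketch, with a random placeholder in place of a real robot model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 18, 6                      # generalized velocities, contact constraints
Jc = rng.normal(size=(m, n))      # contact (constraint) Jacobian, placeholder

# Orthogonal projector onto the null space of Jc. Because P @ Jc.T == 0,
# the contact-force term disappears from the projected dynamics:
#   full model:      M @ dv + h = S.T @ tau + Jc.T @ f
#   projected model: P @ (M @ dv + h) = P @ S.T @ tau
P = np.eye(n) - np.linalg.pinv(Jc) @ Jc

f = rng.normal(size=m)            # arbitrary contact forces
print(np.allclose(P @ (Jc.T @ f), 0, atol=1e-9))  # True: f is eliminated
```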
We introduce a robust control architecture for whole-body motion control of torque-controlled robots with arms and legs. The method is based on robust control of the contact forces in order to track a planned Center of Mass trajectory. Its appeal lies in its ability to guarantee robust stability and performance despite rigid-body model mismatch, actuator dynamics, delays, contact surface stiffness, and unobserved ground profiles. Furthermore, we introduce a task-space decomposition approach that removes the coupling effects between the contact force controller and the other, non-contact controllers. Finally, we verify the controller on a quadruped robot and compare its performance to that of a standard inverse dynamics approach on hardware.
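A core ingredient of such an architecture is distributing a desired centroidal wrench over the available contact forces. The sketch below shows a minimal least-squares version of that distribution step, without the friction-cone constraints and robustness margins a full controller would add; the geometry and numbers are placeholders, not the paper's setup.

```python
import numpy as np

def grasp_matrix(contact_points, com):
    """Stack the 6 x 3 wrench maps of each point contact: a force f at point
    p contributes [f; (p - com) x f] to the centroidal wrench."""
    blocks = []
    for p in contact_points:
        r = p - com
        skew = np.array([[0, -r[2], r[1]],
                         [r[2], 0, -r[0]],
                         [-r[1], r[0], 0]])
        blocks.append(np.vstack([np.eye(3), skew]))
    return np.hstack(blocks)                  # 6 x (3 * n_contacts)

# Four feet of a quadruped (placeholder geometry) and a desired wrench that
# compensates gravity on a 30 kg body with zero net moment.
feet = [np.array([ 0.3,  0.2, 0.0]), np.array([ 0.3, -0.2, 0.0]),
        np.array([-0.3,  0.2, 0.0]), np.array([-0.3, -0.2, 0.0])]
com = np.array([0.0, 0.0, 0.4])
w_des = np.array([0, 0, 30 * 9.81, 0, 0, 0])

G = grasp_matrix(feet, com)
f, *_ = np.linalg.lstsq(G, w_des, rcond=None) # minimum-norm distribution
print(f.reshape(4, 3))  # vertical components sum to the robot's weight
```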
In this paper, we propose and evaluate a novel human-machine interface (HMI) for controlling a standing mobility vehicle or person-carrier robot, aiming for hands-free control through natural upper-body postures derived from gaze tracking while walking. We target users with lower-body impairment who retain upper-body motion capabilities. The developed HMI is based on a sensing array for capturing body postures; an intent recognition algorithm for continuous mapping of body motions to the robot control space; and a personalization system for multiple body sizes and shapes. We performed two user studies: first, an analysis of the body muscles required to navigate with the proposed control; and second, an assessment of the HMI compared with a standard joystick through quantitative and qualitative metrics in a narrow-circuit task. We concluded that the main user control contribution comes from the Rectus Abdominis and Erector Spinae muscle groups, at different levels. Finally, the comparative study showed that a joystick still outperforms the proposed HMI in usability perceptions and controllability metrics, although the smoothness of user control was similar in terms of jerk and fluency. Moreover, users' perceptions showed that hands-free control made the robot feel more anthropomorphic, animated, and even safer.
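The continuous mapping from upper-body posture to vehicle commands can be sketched as a per-user calibrated linear map with a dead zone that rejects postural sway. The ranges, gains, and function names below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def posture_to_velocity(lean, neutral, gain, dead_zone=0.05, v_max=1.0):
    """Map torso lean angles (forward/back, left/right, radians) to linear
    and angular velocity commands for a standing mobility vehicle.

    neutral and gain come from a per-user calibration, compensating for
    different body sizes, shapes, and resting postures."""
    x = (lean - neutral) * gain
    x = np.where(np.abs(x) < dead_zone, 0.0, x)   # ignore postural sway
    return np.clip(x, -v_max, v_max)              # [v_forward, omega_turn]

# Hypothetical calibration for one user, then a live posture sample.
neutral = np.array([0.04, -0.01])   # resting lean recorded at calibration
gain = np.array([4.0, 3.0])
print(posture_to_velocity(np.array([0.20, 0.02]), neutral, gain))
```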
