
Control Interface for Hands-free Navigation of Standing Mobility Vehicles based on Upper-Body Natural Movements

Publication date: 2020
Language: English





In this paper, we propose and evaluate a novel human-machine interface (HMI) for controlling a standing mobility vehicle or person-carrier robot, aiming for hands-free control through natural upper-body postures derived from gaze tracking while walking. We target users with lower-body impairments who retain upper-body motion capabilities. The developed HMI is based on a sensing array for capturing body postures, an intent-recognition algorithm for continuously mapping body motions to the robot control space, and a personalization system that accommodates different body sizes and shapes. We performed two user studies: first, an analysis of the body muscles involved in navigating with the proposed control; and second, an assessment of the HMI against a standard joystick using quantitative and qualitative metrics in a narrow-circuit task. We found that the main user-control contribution comes from the rectus abdominis and erector spinae muscle groups at different levels. Finally, the comparative study showed that the joystick still outperforms the proposed HMI in perceived usability and controllability metrics; however, the smoothness of user control was similar in terms of jerk and fluency. Moreover, users' perceptions indicated that hands-free control made the vehicle feel more anthropomorphic, animated, and even safer.
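The abstract does not include an implementation, but the continuous mapping from body motion to the robot control space can be illustrated with a short sketch. Below is a minimal Python example, assuming torso lean angles (pitch and roll) from an IMU-like sensing array; all gains, the dead zone, the sign conventions, and the velocity limits are hypothetical values for illustration, not parameters from the study.

```python
import numpy as np

# Hypothetical parameters: gains, dead zone, and limits are illustrative
# assumptions, not values from the paper.
MAX_LINEAR = 0.8              # m/s
MAX_ANGULAR = 1.0             # rad/s
DEAD_ZONE = np.deg2rad(3.0)   # ignore small postural sway around neutral
GAIN_PITCH = 4.0              # (m/s) per rad of forward lean
GAIN_ROLL = 5.0               # (rad/s) per rad of lateral lean

def posture_to_command(pitch, roll, pitch_neutral=0.0, roll_neutral=0.0):
    """Map torso lean angles (rad) to a (v, w) velocity command.

    pitch_neutral / roll_neutral would come from a per-user calibration
    step, standing in for the paper's personalization system.
    """
    dp = pitch - pitch_neutral
    dr = roll - roll_neutral
    # Dead zone suppresses involuntary sway around the neutral posture.
    dp = 0.0 if abs(dp) < DEAD_ZONE else dp - np.sign(dp) * DEAD_ZONE
    dr = 0.0 if abs(dr) < DEAD_ZONE else dr - np.sign(dr) * DEAD_ZONE
    v = np.clip(GAIN_PITCH * dp, -MAX_LINEAR, MAX_LINEAR)    # lean forward/back
    w = np.clip(-GAIN_ROLL * dr, -MAX_ANGULAR, MAX_ANGULAR)  # lean sideways
    return v, w
```

The neutral-posture offsets are one simple way to account for different body sizes and shapes: calibrating them per user plays the role that the paper assigns to its personalization system.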



Related research

Michael Bowman, Jiucai Zhang, 2020
Shared control in teleoperation for providing robot assistance with object manipulation, called telemanipulation, is a new, promising, yet challenging problem. It poses unique challenges--on top of teleoperation challenges in general--due to the physical discrepancy between human and robot hands and the fine motion constraints required for task success. We present an intuitive shared-control strategy that focuses on generating robotic grasp poses better suited to the human's perception of successful teleoperated manipulation and their feeling of being in control of the robot, rather than on computing objectively stable grasp configurations or merely following the human motion. The former is achieved by inferring human intent and autonomously taking over control based on that inference. The latter is achieved by treating human inputs as hard motion constraints that the robot must respect. Arbitrating between the two yields subsequent robot motion that balances accomplishing the inferred task against the motion constraints imposed by the operator. The arbitration framework adapts to the level of physical discrepancy between the human and different robot structures, allowing the assistance to remain apparent while intuitively following the user. To understand how users perceive good arbitration in object telemanipulation, we conducted a user study with a hands-free telemanipulation setup to analyze the effects of factors including task predictability, perceived following, and user preference. The hands-free telemanipulation scene was chosen as the validation platform due to its more urgent need for intuitive robotic assistance.
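As a rough illustration of this kind of arbitration, the sketch below blends an autonomous command toward the inferred grasp pose with the operator's input, and enforces the human input as a hard constraint by removing any component of the result that opposes it. The linear blending rule and the confidence score are assumptions for illustration, not the authors' method.

```python
import numpy as np

def arbitrate(u_human, u_auto, confidence):
    """Blend the operator's command with an autonomous command toward
    the inferred grasp pose.

    `confidence` in [0, 1] is an intent-inference score; the linear
    blending rule below is an illustrative assumption.
    """
    alpha = np.clip(confidence, 0.0, 1.0)
    u = alpha * u_auto + (1.0 - alpha) * u_human
    # Treat the human input as a hard constraint: strip out any
    # component of the blended command that opposes the operator's
    # commanded direction.
    norm = np.linalg.norm(u_human)
    if norm > 1e-6:
        d = u_human / norm
        along = float(np.dot(u, d))
        if along < 0.0:
            u = u - along * d  # project out the opposing component
    return u
```

With high intent confidence the robot takes over and drives toward the inferred grasp; with low confidence it defers to the operator, which is one way to realize the trade-off the abstract describes.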
We focus on the problem of planning the motion of a robot in a dynamic multiagent environment such as a pedestrian scene. Enabling the robot to navigate safely and in a socially compliant fashion in such scenes requires a representation that accounts for the unfolding multiagent dynamics. Existing approaches tend to employ microscopic motion-prediction models that reason about the individual behavior of other agents. While such models may achieve high accuracy on trajectory-prediction benchmarks, they often lack an understanding of the group structures that emerge in crowded scenes. Inspired by Gestalt theory from psychology, we build a Model Predictive Control framework (G-MPC) that leverages group-based prediction for robot motion planning. We conduct an extensive simulation study involving a series of challenging navigation tasks in scenes extracted from two real-world pedestrian datasets. We show that G-MPC enables a robot to achieve statistically significantly higher safety and fewer group intrusions than a series of baselines based on individual pedestrian motion-prediction models. Finally, we show that G-MPC can handle noisy lidar-scan estimates without significant performance loss.
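A minimal sketch of a group-aware, sampling-based MPC cost follows, approximating each predicted group region as a circle. The circle approximation and the quadratic intrusion penalty are illustrative assumptions, not the G-MPC formulation itself.

```python
import numpy as np

def mpc_cost(traj, goal, group_centers, group_radii,
             w_goal=1.0, w_group=10.0):
    """Score a candidate robot trajectory against predicted groups.

    traj: sequence of 2-D positions over the planning horizon.
    Each group is approximated as a circle (center, radius); entering
    a circle incurs a quadratic intrusion penalty.
    """
    cost = w_goal * np.linalg.norm(np.asarray(traj[-1]) - goal)
    for p in traj:
        for c, r in zip(group_centers, group_radii):
            intrusion = r - np.linalg.norm(np.asarray(p) - c)
            if intrusion > 0.0:  # trajectory point inside the group region
                cost += w_group * intrusion ** 2
    return cost

def plan(candidate_trajs, goal, group_centers, group_radii):
    # Sampling-based MPC step: pick the lowest-cost rollout.
    return min(candidate_trajs,
               key=lambda t: mpc_cost(t, goal, group_centers, group_radii))
```

Penalizing intrusion into whole group regions, rather than distance to each individual pedestrian, is the basic idea behind reducing group intrusions.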
This manuscript presents the control of a high-DOF, fully actuated lower-limb exoskeleton for paraplegic individuals. The key novelty is the user's ability to walk without crutches or other external means of stabilization. We harness the power of modern optimization techniques and supervised machine learning to develop a smooth feedback control policy that provides robust velocity regulation and perturbation rejection. A preliminary evaluation of the stability and robustness of the proposed approach is demonstrated in the Gazebo simulation environment. In addition, preliminary experimental results with individuals with complete paraplegia are included for a previous version of the controller.
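One common way to realize such a policy, sketched below under the assumption that state-command pairs have already been generated by offline trajectory optimization, is to fit a smooth regressor as the feedback law. The file names, network shape, and training setup are hypothetical, not the authors' pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: X holds exoskeleton states (joint angles/velocities,
# torso orientation), Y holds joint commands produced offline by a
# trajectory optimizer. Both files are hypothetical.
X = np.load("optimized_states.npy")
Y = np.load("optimized_commands.npy")

# A smooth function approximator (tanh activations) distills the
# optimization results into a feedback policy cheap enough to
# evaluate at control rates.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                      max_iter=2000).fit(X, Y)

def control(state):
    """Feedback law: map the current state to joint commands."""
    return policy.predict(state.reshape(1, -1))[0]
```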
Path planning and collision avoidance are challenging in complex and highly variable environments due to the limited horizon of events. The literature contains multiple model- and learning-based approaches that require significant computational resources for effective deployment and that may have limited generality. We propose a planning algorithm based on a globally stable passive controller that can plan smooth trajectories using limited computational resources in challenging environmental conditions. The architecture combines the recently proposed fractal impedance controller with elastic bands and regions of finite-time invariance. As the method is based on an impedance controller, it can also be used directly as a force/torque controller. We validated the method in simulation, analysing its ability to perform interactive navigation in challenging concave domains via the issuing of via-points and its robustness to low-bandwidth feedback. A swarm simulation with 11 agents validated the scalability of the proposed method. Hardware experiments on a holonomic wheeled platform validated the smoothness and robustness of interaction with dynamic agents (i.e., humans and robots). The low computational complexity of the proposed local planner enables deployment on low-power micro-controllers, lowering energy consumption compared to methods that rely on numeric optimisation.
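At its core, an impedance-based local planner issues forces from a spring-damper law toward the current via-point, which is why it doubles as a force/torque controller. The sketch below shows such a law with a simple force saturation; the fractal impedance controller's nonlinear stiffness reshaping, which preserves passivity under saturation, is not modeled here, so this is only a baseline illustration with assumed gains.

```python
import numpy as np

def impedance_force(x, x_dot, x_des, K=50.0, D=15.0, f_max=30.0):
    """Passive spring-damper law toward the current via-point.

    F = K (x_des - x) - D x_dot, saturated at f_max. The hard
    saturation is a simplification: the fractal impedance controller
    instead reshapes stiffness nonlinearly to stay passive.
    """
    f = K * (x_des - x) - D * x_dot
    n = np.linalg.norm(f)
    if n > f_max:
        f = f * (f_max / n)  # bound interaction forces for safety
    return f
```

Feeding a sequence of via-points (e.g., from an elastic band) into such a law yields smooth attraction toward each intermediate target while remaining compliant to contact with humans and other robots.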
In tasks such as surveying or monitoring remote regions, an autonomous robot must move while transmitting data over a wireless network with unknown, position-dependent transmission rates. For such a robot, this paper considers the problem of transmitting a data buffer in minimum time, while possibly also navigating towards a goal position. Two approaches are proposed, each consisting of a machine-learning component that estimates the rate function from samples and an optimal-control component that moves the robot given the current rate-function estimate. Simple obstacle avoidance is performed for the case without a goal position. In extensive simulations, these methods achieve competitive performance compared to known-rate and unknown-rate baselines. A real indoor experiment is included in which a Parrot AR.Drone 2 successfully learns to transmit the buffer.
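A minimal sketch of the two components might pair an online rate estimator with a greedy step toward higher estimated rates, emptying the buffer faster. The k-NN regressor and the candidate-heading search below are illustrative assumptions, not the paper's estimator or optimal-control law.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

class RateLearner:
    """Online estimate of a position-dependent transmission rate."""

    def __init__(self, k=5):
        self.pos, self.rate = [], []
        self.k = k
        self.model = None

    def add_sample(self, p, r):
        """Record a measured rate r (bits/s) at position p."""
        self.pos.append(p)
        self.rate.append(r)
        k = min(len(self.pos), self.k)
        self.model = KNeighborsRegressor(n_neighbors=k).fit(
            np.array(self.pos), np.array(self.rate))

    def next_waypoint(self, p, step=0.5, n_dirs=8):
        """Greedy step: probe candidate headings around p and move
        toward the highest estimated rate. Assumes at least one
        sample has been observed."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
        cands = [p + step * np.array([np.cos(a), np.sin(a)])
                 for a in angles]
        rates = self.model.predict(np.array(cands))
        return cands[int(np.argmax(rates))]
```

In the goal-directed variant one would add a term rewarding progress toward the goal when choosing the waypoint, trading off transmission rate against travel time.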