
Sequential Decision Fusion for Environmental Classification in Assistive Walking

Added by Kuangen Zhang
Publication date: 2019
Language: English

Powered prostheses are effective for helping amputees walk on level ground, but these devices are inconvenient to use in complex environments. To assist amputees in complex environments, prostheses need to understand their motion intent. Recently, researchers have found that vision sensors can be used to classify environments and predict the motion intent of amputees. Previous studies classified environments accurately in offline analysis, but they neglected to reduce the corresponding time delay. To increase the accuracy and decrease the time delay of environmental classification, we propose a new decision fusion method in this paper. We fuse sequential decisions of environmental classification by constructing a hidden Markov model and designing a transition probability matrix. We evaluate our method in indoor and outdoor experiments with both able-bodied subjects and amputees. Experimental results indicate that our method classifies environments more accurately and with less time delay than previous methods. Besides classifying environments, the proposed decision fusion method may also be used to optimize sequential predictions of human motion intent in the future.
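To make the fusion step concrete, here is a minimal sketch of recursive decision fusion with a hand-designed transition matrix, in the spirit of an HMM forward filter. The class labels, the self-transition probability of 0.9, and the example posteriors are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical environment classes; the paper's actual label set may differ.
CLASSES = ["level_ground", "up_stairs", "down_stairs", "up_ramp", "down_ramp"]
N = len(CLASSES)

# Illustrative transition matrix: terrain persists between frames, so the
# diagonal dominates; the small off-diagonal mass still allows switching.
STAY = 0.9
T = np.full((N, N), (1.0 - STAY) / (N - 1))
np.fill_diagonal(T, STAY)

def fuse_sequential(posteriors):
    """Fuse per-frame classifier posteriors by HMM forward filtering."""
    belief = np.full(N, 1.0 / N)              # uniform prior over environments
    for p in posteriors:
        predicted = T.T @ belief              # propagate belief through the transition model
        belief = predicted * np.asarray(p)    # weight by the current frame's evidence
        belief /= belief.sum()                # renormalize to a distribution
        yield belief

# Usage: a single noisy frame (second entry) is outvoted by temporal consistency.
frames = [[0.70, 0.10, 0.10, 0.05, 0.05],
          [0.20, 0.50, 0.10, 0.10, 0.10],
          [0.80, 0.05, 0.05, 0.05, 0.05]]
for belief in fuse_sequential(frames):
    print(CLASSES[int(np.argmax(belief))], np.round(belief, 3))
```

The diagonal-dominant transition matrix encodes the prior that terrain rarely changes between consecutive frames, which is what lets a single noisy frame be outvoted without the latency of waiting for a fixed-size voting window to fill.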

Related research

Generating provably stable walking gaits that yield natural locomotion when executed on robotic-assistive devices is a challenging task that often requires hand-tuning by domain experts. This paper presents an alternative methodology, where we propose the addition of musculoskeletal models directly into the gait generation process to intuitively shape the resulting behavior. In particular, we construct a multi-domain hybrid system model that combines the system dynamics with muscle models to represent natural multicontact walking. Stable walking gaits can then be formally generated for this model via the hybrid zero dynamics method. We experimentally apply our framework towards achieving multicontact locomotion on a dual-actuated transfemoral prosthesis, AMPRO3. The results demonstrate that enforcing feasible muscle dynamics produces gaits that yield natural locomotion (as analyzed via electromyography), without the need for extensive manual tuning. Moreover, these gaits yield similar behavior to expert-tuned gaits. We conclude that the novel approach of combining robotic walking methods (specifically HZD) with muscle models successfully generates anthropomorphic robotic-assisted locomotion.
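As a rough illustration of what "muscle models" contribute here, the sketch below evaluates a Hill-type active muscle force from activation, fiber length, and contraction velocity. The parameter values and curve shapes are generic textbook assumptions, not the models or constants used for AMPRO3.

```python
import math

# Generic Hill-type parameters (illustrative, not AMPRO3's model constants).
F_MAX = 1500.0   # maximum isometric force, N
L_OPT = 0.10     # optimal fiber length, m
V_MAX = 1.0      # maximum shortening velocity, m/s

def active_muscle_force(activation, length, velocity):
    """Active force = activation * force-length * force-velocity scaling."""
    f_l = math.exp(-((length / L_OPT - 1.0) ** 2) / 0.45)  # Gaussian force-length curve
    if velocity >= 0.0:                                    # shortening (concentric)
        f_v = max((V_MAX - velocity) / (V_MAX + 3.0 * velocity), 0.0)
    else:                                                  # lengthening (eccentric) plateau
        f_v = 1.3
    return F_MAX * activation * f_l * f_v

# Usage: 80% activation at optimal fiber length, slow shortening.
print(round(active_muscle_force(0.8, 0.10, 0.2), 1))  # -> 600.0
```

Enforcing that gait optimization respects force limits of this kind is what keeps the generated trajectories within physiologically plausible effort.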
It is challenging for humans -- particularly those living with physical disabilities -- to control high-dimensional, dexterous robots. Prior work explores learning embedding functions that map a human's low-dimensional inputs (e.g., via a joystick) to complex, high-dimensional robot actions for assistive teleoperation; however, a central problem is that there are many more high-dimensional actions than available low-dimensional inputs. To extract the correct action and maximally assist their human controller, robots must reason over their context: for example, pressing a joystick down when interacting with a coffee cup indicates a different action than when interacting with a knife. In this work, we develop assistive robots that condition their latent embeddings on visual inputs. We explore a spectrum of visual encoders and show that incorporating object detectors pretrained on small amounts of cheap, easy-to-collect structured data enables i) accurately and robustly recognizing the current context and ii) generalizing control embeddings to new objects and tasks. In user studies with a high-dimensional physical robot arm, participants leverage this approach to perform new tasks with unseen objects. Our results indicate that structured visual representations improve few-shot performance and are subjectively preferred by users.
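A minimal sketch of conditioning a low-dimensional control mapping on detector output, under assumed names and dimensions (the paper's actual encoders and label set differ): the detected object class is one-hot encoded and concatenated with the 2-DoF joystick input before decoding a 7-DoF action.

```python
import torch
import torch.nn as nn

# Assumed object classes and dimensions for illustration (not the paper's label set).
OBJECT_CLASSES = ["cup", "knife", "plate"]

# Decoder: (2-DoF joystick input + one-hot object context) -> 7-DoF robot action.
decoder = nn.Sequential(
    nn.Linear(2 + len(OBJECT_CLASSES), 64), nn.Tanh(),
    nn.Linear(64, 7),
)

def decode(joystick, detected_class):
    """Map the same joystick input to different actions depending on context."""
    context = torch.zeros(len(OBJECT_CLASSES))
    context[OBJECT_CLASSES.index(detected_class)] = 1.0   # pretrained detector's output
    return decoder(torch.cat([joystick, context]))

# The same downward press decodes to different actions near a cup vs. a knife.
press_down = torch.tensor([0.0, -1.0])
print(decode(press_down, "cup"))
print(decode(press_down, "knife"))
```

Because the context vector changes the decoder's input, the same joystick press can mean "tilt to pour" near a cup and "press to cut" near a knife once the network is trained on demonstrations of both tasks.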
Stable bipedal walking is a key prerequisite for humanoid robots to reach their potential of being versatile helpers in our everyday environments. Bipedal walking is, however, a complex motion that requires the coordination of many degrees of freedom while it is also inherently unstable and sensitive to disturbances. The balance of a walking biped has to be constantly maintained. The most effective way of controlling balance is well-timed and well-placed recovery steps -- capture steps -- that absorb the excess momentum gained from a push or a stumble. We present a bipedal gait generation framework that utilizes step timing and foot placement techniques in order to recover the balance of a biped even after strong disturbances. Our framework modifies the next footstep location instantly when responding to a disturbance and generates controllable omnidirectional walking using only very little sensing and computational power. We exploit the open-loop stability of a central pattern generated gait to fit a linear inverted pendulum model to the observed center of mass trajectory. Then, we use the fitted model to predict suitable footstep locations and timings in order to maintain balance while following a target walking velocity. Our experiments show qualitative and statistical evidence of one of the strongest push-recovery capabilities among humanoid robots to date.
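The footstep-prediction idea can be illustrated with the linear inverted pendulum's instantaneous capture point; the CoM height and push velocity below are illustrative numbers, not values from the paper.

```python
import math

G = 9.81       # gravity, m/s^2
Z_COM = 0.8    # assumed constant center-of-mass height, m

def capture_point(x_com, xd_com):
    """Instantaneous capture point of a linear inverted pendulum.

    x_com: horizontal CoM position relative to the stance foot (m)
    xd_com: horizontal CoM velocity (m/s)
    Stepping onto this point brings the CoM asymptotically to rest.
    """
    omega = math.sqrt(G / Z_COM)     # LIP natural frequency, 1/s
    return x_com + xd_com / omega

# Usage: a push that adds 0.5 m/s of forward CoM velocity calls for a
# recovery step about 0.14 m ahead of the current CoM position.
print(round(capture_point(0.0, 0.5), 3))  # -> 0.143
```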
Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today's robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot's motion in the x-y plane, in another mode the joystick controls the robot's z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot's high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
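A minimal sketch of the latent-action construction under stated assumptions (the dimensions, architecture, and conditioning on the robot state are illustrative): an autoencoder trained on task demonstrations compresses 7-DoF actions into a 2-D latent space, and at run time the decoder alone maps joystick axes plus the current state to a full robot action.

```python
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    """Illustrative autoencoder: 7-DoF actions <-> 2-D joystick-sized latents."""

    def __init__(self, action_dim=7, state_dim=7, latent_dim=2, hidden=64):
        super().__init__()
        # Encoder (training only): compress a demonstrated action, given the state.
        self.encoder = nn.Sequential(
            nn.Linear(action_dim + state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder (run time): joystick axes + current state -> full robot action.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([action, state], dim=-1))
        return self.decoder(torch.cat([z, state], dim=-1))

# Training minimizes reconstruction error over demonstrations, so the 2-D
# latent space covers the task-relevant slice of the 7-DoF action space.
model = LatentActionModel()
state = torch.zeros(1, 7)
joystick = torch.tensor([[0.3, -0.1]])            # 2-DoF user input
action = model.decoder(torch.cat([joystick, state], dim=-1))
print(action.shape)                               # torch.Size([1, 7])
```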
This work proposes an autonomous docking control for nonholonomic constrained mobile robots and applies it to an intelligent mobility device or wheelchair for assisting the user in approaching resting furniture such as a chair or a bed. We define a virtual landmark inferred from the target docking destination. Then, we solve the problem of keeping the targeted volume inside the field of view (FOV) of a tracking camera while docking to the virtual landmark, through a novel formulation that enables control toward the desired end-pose. We propose a nonlinear feedback controller that performs the docking with the depth camera's FOV as a constraint. Then, a numerical method is proposed to find the feasible space of initial states from which convergence can be guaranteed. Finally, the entire system was embedded for real-time operation on a standing wheelchair, with the virtual landmark estimated by 3D object tracking with an RGB-D camera, and we validated its effectiveness in simulation and experimental evaluations. The results show guaranteed convergence for the feasible space, which depends on the virtual landmark location. In the implementation, the robot converges to the virtual landmark while respecting the FOV constraints.
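For intuition, the sketch below implements a classic polar-coordinate docking law for a unicycle-model robot; the gains are illustrative, and it omits the paper's FOV constraint and feasibility analysis.

```python
import math

# Illustrative gains for a classic polar-coordinate docking law; stability
# requires K_RHO > 0, K_BETA < 0, and K_ALPHA > K_RHO.
K_RHO, K_ALPHA, K_BETA = 0.5, 1.5, -0.6

def docking_control(x, y, theta):
    """Drive a unicycle-model robot from pose (x, y, theta) toward the
    origin with final heading 0; returns (linear v, angular w) commands."""
    rho = math.hypot(x, y)                                  # distance to the landmark
    alpha = math.atan2(-y, -x) - theta                      # bearing error to the goal
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))    # wrap to [-pi, pi]
    beta = -theta - alpha                                   # final-orientation error
    return K_RHO * rho, K_ALPHA * alpha + K_BETA * beta

# Usage: robot 1 m behind and 0.5 m to the left of the dock, facing it.
print(docking_control(-1.0, 0.5, 0.0))
```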
