Individuals living with paralysis or amputation can operate robotic prostheses using input signals based on their intent or attempt to move. Because sensory function is lost or diminished in these individuals, haptic feedback must be non-collocated. The intracortical brain-computer interface (iBCI) has enabled a variety of neural prostheses for people with paralysis. An important attribute of the iBCI is that its input signal contains signal-independent noise. To understand the effects of signal-independent noise on a system with non-collocated haptic feedback, and to inform control strategies for iBCI-based prostheses, we conducted an experiment with a conventional haptic interface as a proxy for the iBCI. Able-bodied users were tasked with locating an indentation within a virtual environment using input from their right hands. Non-collocated haptic feedback of the interaction forces in the virtual environment was augmented with noise of three different magnitudes and simultaneously rendered on users' left hands. For the highest noise level, compared to the other two levels, we found increases in the distance error of the guessed indentation location, mean time per trial, and mean peak absolute displacement and speed of tool movements during localization. The findings suggest that users have a threshold of disturbance rejection and that they attempt to increase their signal-to-noise ratio through their exploratory actions.
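To make the noise-augmentation concrete, here is a minimal Python sketch of rendering non-collocated feedback corrupted by signal-independent noise; the three sigma values are hypothetical stand-ins, not the study's actual magnitudes.

```python
import numpy as np

# Hypothetical noise magnitudes (N); the study's actual levels are not given here.
NOISE_LEVELS = {"low": 0.05, "medium": 0.15, "high": 0.45}

def feedback_force(interaction_force, level, rng=np.random.default_rng()):
    # Signal-independent noise: sigma is fixed per level and does not
    # scale with the magnitude of the interaction force being rendered.
    sigma = NOISE_LEVELS[level]
    return interaction_force + rng.normal(0.0, sigma)

# Example: force sensed by the right hand, rendered (noisily) on the left hand.
rendered = feedback_force(interaction_force=1.2, level="high")
```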
In this paper, we present an impedance control design for multi-variable linear and nonlinear robotic systems. The control design uses both force and state feedback to improve closed-loop performance. Simultaneous feedback of forces and states gives the controller an extra degree of freedom with which to approximate the desired impedance port behaviour. A numerical analysis demonstrates the desired closed-loop impedance behaviour.
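As a rough illustration of the idea, the sketch below simulates a one-degree-of-freedom impedance controller that combines force and state feedback; the plant, gains, and contact model are assumptions for illustration, not the paper's multi-variable design.

```python
# 1-DOF sketch: shape the closed loop toward the desired impedance port
#   Md*e'' + Bd*e' + Kd*e = f_ext,   e = x - x_ref,
# using the measured contact force f_ext together with state feedback.
M = 2.0                       # assumed plant mass
Md, Bd, Kd = 1.0, 8.0, 50.0   # assumed desired impedance parameters
x, xd, x_ref = 0.0, 0.0, 0.1  # position, velocity, setpoint
dt = 1e-3

for _ in range(5000):
    # Toy contact: a stiff wall at x = 0.08 m pushes back on the tool.
    f_ext = -200.0 * (x - 0.08) if x > 0.08 else 0.0
    e, ed = x - x_ref, xd
    xdd_des = (f_ext - Bd * ed - Kd * e) / Md   # desired impedance dynamics
    u = M * xdd_des - f_ext                     # force + state feedback law
    xdd = (u + f_ext) / M                       # plant: M*x'' = u + f_ext
    xd += xdd * dt
    x += xd * dt

print(f"steady-state position: {x:.3f} m")      # settles near 0.084 m
```

The force-feedback term cancels the raw contact force and reinjects it through the desired impedance, which is the extra degree of freedom that pure state feedback lacks.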
Teleoperation platforms often require the user to be situated at a fixed location to both visualize and control the movement of the robot, and thus provide the operator with little mobility. One example is existing robotic surgery solutions that require surgeons to be away from the patient, seated at consoles where their heads must be fixed and their arms can move only in a limited space. This creates a barrier between physician and patient that does not exist in conventional surgery. To address this issue, we propose a mobile telesurgery solution in which surgeons are no longer mechanically tethered to control consoles and can teleoperate the robot from the patient's bedside, using their arms equipped with wireless sensors and viewing the endoscope video via optical see-through head-mounted displays (HMDs). We evaluate the feasibility and efficiency of our user interaction method with a standard surgical robotic manipulator via two tasks requiring different levels of dexterity. The results indicate that, with sufficient training, our proposed platform can attain similar efficiency while providing added mobility for the operator.
Recent advancements in Learning from Human Feedback present an effective way to train robot agents via inputs from non-expert humans, without the need for a specially designed reward function. However, this approach requires a human to be present and attentive during robot learning to provide evaluative feedback. In addition, the amount of feedback needed grows with task difficulty, and the quality of human feedback may decrease over time because of fatigue. To overcome these limitations and enable learning of more complex robot tasks, the quality of the expensive feedback received must be maximized and the amount of human cognitive involvement required must be reduced. In this work, we present an approach that uses active learning to choose queries for the human supervisor based on the robot's uncertainty, effectively reducing the amount of feedback needed to learn a given task. We also use a novel multiple-buffer system to improve robustness to feedback noise and to guard against catastrophic forgetting as the robot's learning evolves. This makes it possible to learn more complex tasks with less human feedback than previous methods. We demonstrate the utility of our proposed method on a robot-arm reaching task in which the robot learns to reach a location in 3D without colliding with obstacles. Our approach learns this task faster, with less human feedback and cognitive involvement, than previous methods that do not use active learning.
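A minimal sketch of the uncertainty-based query selection idea, assuming an ensemble of learned reward/Q models whose disagreement stands in for the robot's uncertainty; `ensemble_predict` and the two-buffer split are hypothetical illustrations, not the paper's exact implementation.

```python
from collections import deque
import numpy as np

def select_queries(candidates, ensemble_predict, n_queries=5):
    # ensemble_predict(state) -> one prediction per ensemble member.
    # Query the human only on the states where the ensemble disagrees most.
    uncertainty = [np.var(ensemble_predict(s)) for s in candidates]
    ranked = np.argsort(uncertainty)[::-1]        # most uncertain first
    return [candidates[i] for i in ranked[:n_queries]]

# Two feedback buffers, loosely inspired by the multiple-buffer idea:
# a small recent buffer where noisy labels are overwritten quickly, and a
# large long-term buffer replayed to guard against catastrophic forgetting.
recent_feedback = deque(maxlen=100)
long_term_feedback = deque(maxlen=10_000)
```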
In this paper we derive closed-form formulas for the feedback capacity and nonfeedback achievable rates of Additive Gaussian Noise (AGN) channels driven by nonstationary autoregressive moving-average (ARMA) noise (with unstable poles and zeros), based on time-invariant feedback codes and channel input distributions. From the analysis and simulations follow the surprising observations that (i) the use of time-invariant channel input distributions gives rise to multiple regimes of capacity that depend on the parameters of the ARMA noise and may or may not use feedback, (ii) the more unstable the pole (resp. zero) of the ARMA noise, the higher (resp. lower) the feedback capacity, and (iii) certain conditions, known as detectability and stabilizability, are necessary and sufficient to ensure that the feedback capacity formulas and nonfeedback achievable rates are independent of the initial state of the ARMA noise. Another surprising observation is that Kim's characterization of feedback capacity [kim2010], which was developed for stable ARMA noise, gives a lower value of feedback capacity than our formula when applied to unstable ARMA noise.
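For concreteness, here is a minimal rendering of the channel and noise model referenced above, assuming the common ARMA(1,1) form with pole a and zero c (instability meaning the pole or zero lies on or outside the unit circle); the paper's exact parameterization may differ.

```latex
% AGN channel with ARMA(1,1) noise (assumed form):
\[
  Y_t = X_t + V_t, \qquad
  V_t = a\,V_{t-1} + W_t + c\,W_{t-1}, \qquad
  W_t \sim \mathcal{N}(0,\sigma_W^2)\ \text{i.i.d.},
\]
% where the noise is "unstable" when |a| \ge 1 (pole) or |c| \ge 1 (zero).
```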
Telepresence robots offer presence, embodiment, and mobility to remote users, making them promising options for homebound K-12 students. It is difficult, however, for robot operators to know how well they are being heard in remote and noisy classroom environments. One solution is to estimate the operator's speech intelligibility to their listeners and provide feedback about it to the operator. This work contributes the first evaluation of a speech intelligibility feedback system for homebound K-12 students attending class remotely. In our four long-term, in-the-wild deployments we found that students speak at different volumes instead of adjusting the robot's volume, and that detailed audio calibration and network latency feedback are needed. We also contribute the first findings about the types and frequencies of multimodal comprehension cues given to homebound students by listeners in the classroom. By annotating and categorizing over 700 cues, we found that the most common cue modalities were conversation turn timing and verbal content. Conversation turn timing cues occurred more frequently overall, whereas verbal content cues contained more information and may be the most frequent modality for negative cues. Our work provides recommendations for telepresence systems that could intervene to ensure that remote users are being heard.