We present an intelligent interactive nightstand mounted on a mobile robot to aid the elderly in their homes using physical, tactile and visual percepts. We show the integration of three different sensing modalities for controlling the navigation of a robot-mounted nightstand within the constrained environment of a general-purpose living room housing a single aging individual in need of assistance and monitoring. A camera mounted on the ceiling of the room gives a top-down view of the obstacles, the person and the nightstand. Pressure sensors mounted beneath the bed stand of the individual provide physical perception of the person's state. A proximity IR sensor on the nightstand acts as a tactile interface, along with a Wii Nunchuck (Nintendo), to control mundane operations on the nightstand. Intelligence from these three modalities is combined to enable path planning for the nightstand to approach the individual. With growing emphasis on assistive technology for aging individuals who are increasingly electing to stay in their homes, we show how ubiquitous intelligence can be brought inside homes to help monitor and provide care to an individual. Our approach goes one step towards achieving pervasive intelligence by seamlessly integrating different sensors embedded in the fabric of the environment.
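A minimal sketch of how such a fusion might be wired together is given below. The sensor interfaces, thresholds and the grid-based planner are all illustrative assumptions for exposition, not the actual system described in the abstract.

```python
# Hypothetical sketch: fuse pressure, IR-proximity and Nunchuck inputs to
# decide when the nightstand should approach, then plan on the occupancy
# grid from the ceiling camera. All names and thresholds are assumptions.
from collections import deque

def should_approach(pressure_reading, ir_distance_cm, nunchuck_request,
                    pressure_threshold=5.0, ir_threshold_cm=20.0):
    """Approach if the person is in bed and either triggers the IR
    proximity gesture or presses the Nunchuck button."""
    person_in_bed = pressure_reading > pressure_threshold
    beckoned = ir_distance_cm < ir_threshold_cm or nunchuck_request
    return person_in_bed and beckoned

def plan_path(grid, start, goal):
    """Breadth-first search on the overhead-camera occupancy grid
    (0 = free, 1 = obstacle); returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and \
               grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```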
In this paper, our focus is on certain applications for mobile robotic networks, where reconfiguration is driven by factors intrinsic to the network rather than changes in the external environment. In particular, we study a version of the coverage problem useful for surveillance applications, where the objective is to position the robots in order to minimize the average distance from a random point in a given environment to the closest robot. This problem has been well studied for omni-directional robots, for which it is known that the optimal configuration for the network is a centroidal Voronoi configuration and that the coverage cost belongs to $\Theta(m^{-1/2})$, where $m$ is the number of robots in the network. In this paper, we study this problem for more realistic models of robots, namely the double integrator (DI) model and the differential drive (DD) model. We observe that the introduction of these motion constraints into the algorithm design problem gives rise to an interesting behavior. For a \emph{sparser} network, the optimal algorithm for these models of robots mimics that for omni-directional robots. We propose novel algorithms whose performances are within a constant factor of the optimal asymptotically (i.e., as $m \to +\infty$). In particular, we prove that the coverage cost for the DI and DD models of robots is of order $m^{-1/3}$. Additionally, we show that, as the network grows, these novel algorithms outperform the conventional algorithm, hence necessitating a reconfiguration of the network in order to maintain optimal quality of service.
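For context, the omni-directional baseline referenced above (a centroidal Voronoi configuration minimizing the average distance to the nearest robot) can be approached with Lloyd-style iterations. The sketch below is only an illustration of that baseline under a Monte Carlo approximation of the environment, not the DI/DD algorithms proposed in the paper; the robot count, sample size and iteration count are arbitrary.

```python
# Illustrative sketch: Lloyd-style iterations toward a centroidal Voronoi
# configuration in the unit square, with the coverage cost (average distance
# to the nearest robot) estimated by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)
m = 16                                # number of robots (illustrative)
robots = rng.random((m, 2))           # initial positions in [0, 1]^2

def lloyd_step(robots, n_samples=20000):
    pts = rng.random((n_samples, 2))                        # sample the environment
    d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
    owner = d.argmin(axis=1)                                # Voronoi cell membership
    new = robots.copy()
    for i in range(len(robots)):
        cell = pts[owner == i]
        if len(cell):
            new[i] = cell.mean(axis=0)                      # move to cell centroid
    cost = d.min(axis=1).mean()                             # estimated coverage cost
    return new, cost

for _ in range(50):
    robots, cost = lloyd_step(robots)
print(f"approximate coverage cost with m={m}: {cost:.4f}")
```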
In this work, we report on the integrated sensorimotor control of the Pisa/IIT SoftHand, an anthropomorphic soft robot hand designed around the principle of adaptive synergies, with the BRL tactile fingertip (TacTip), a soft biomimetic optical tactile sensor based on the human sense of touch. Our focus is how a sense of touch can be used to control an anthropomorphic hand with one degree of actuation, based on an integration that respects the hands mechanical functionality. We consider: (i) closed-loop tactile control to establish a light contact on an unknown held object, based on the structural similarity with an undeformed tactile image; and (ii) controlling the estimated pose of an edge feature of a held object, using a convolutional neural network approach developed for controlling other sensors in the TacTip family. Overall, this gives a foundation to endow soft robotic hands with human-like touch, with implications for autonomous grasping, manipulation, human-robot interaction and prosthetics. Supplemental video: https://youtu.be/ndsxj659bkQ
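A minimal sketch of the light-contact idea in (i) is given below, assuming placeholder functions grab_tactile_image() and command_hand_closure() stand in for the sensor and hand interfaces; the gain, target dissimilarity and tolerance are illustrative, and this is not the authors' implementation.

```python
# Sketch of SSIM-based light-contact control for a one-degree-of-actuation
# hand: close until the tactile image deviates slightly from the undeformed
# reference image. Hardware interfaces are passed in as placeholders.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def light_contact_loop(reference, grab_tactile_image, command_hand_closure,
                       target_dissimilarity=0.05, gain=0.5, max_steps=200):
    closure = 0.0                                   # normalised hand closure in [0, 1]
    for _ in range(max_steps):
        frame = grab_tactile_image()                # current grayscale tactile image
        dissimilarity = 1.0 - ssim(reference, frame,
                                   data_range=frame.max() - frame.min())
        error = target_dissimilarity - dissimilarity
        closure = float(np.clip(closure + gain * error, 0.0, 1.0))
        command_hand_closure(closure)               # single synergy command
        if abs(error) < 0.005:                      # within tolerance of light contact
            break
    return closure
```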
This paper presents the concept of an In situ Fabricator, a mobile robot intended for on-site manufacturing, assembly and digital fabrication. We present an overview of a prototype system and its capabilities, and highlight the importance of high-performance control, estimation and planning algorithms for achieving the desired construction goals. Next, we detail two architectural application scenarios: first, building a full-size undulating brick wall, which required a number of repositioning and autonomous localisation manoeuvres; and second, the Mesh Mould concrete process, which shows that an In situ Fabricator in combination with an innovative digital fabrication tool can be used to enable completely novel building technologies. Subsequently, important limitations and disadvantages of our approach are discussed. Based on these, we identify the need for a new type of robotic actuator that facilitates the design of novel full-scale construction robots. We provide brief insight into the development of this actuator and conclude the paper with an outlook on the next-generation In situ Fabricator, which is currently under development.
Koopman operator theory has served as the basis to extract dynamics for nonlinear system modeling and control across settings, including non-holonomic mobile robot control. There is growing research interest in deriving robustness (and/or safety) guarantees for systems whose dynamics are extracted via the Koopman operator. In this paper, we propose a way to quantify the prediction error caused by noisy measurements when the Koopman operator is approximated via Extended Dynamic Mode Decomposition. We further develop an enhanced robot control strategy to endow a class of data-driven (robotic) systems that rely on Koopman operator theory with robustness, and we show how part of the strategy can be performed offline in an effort to make our algorithm capable of real-time implementation. We perform a parametric study to evaluate the (theoretical) performance of the algorithm using a Van der Pol oscillator, and conduct a series of simulated experiments in Gazebo using a non-holonomic wheeled robot.
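For readers unfamiliar with the approximation step, the sketch below shows a bare-bones Extended Dynamic Mode Decomposition fit from noisy snapshot pairs of a Van der Pol oscillator. The monomial dictionary, noise level and integration scheme are assumptions for illustration and do not reproduce the paper's error-quantification or control strategy.

```python
# Illustrative EDMD sketch: approximate a (finite-dimensional) Koopman
# operator from snapshot pairs (x_k, x_{k+1}) with a monomial dictionary
# and a least-squares fit.
import numpy as np

def dictionary(X):
    """Lift 2-D states into monomials up to degree 2 (plus a constant)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x1 * x2, x2**2])

def edmd(X, Y):
    """Return K such that Psi(X) @ K approximates Psi(Y) in the lifted space."""
    PsiX, PsiY = dictionary(X), dictionary(Y)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

# Example data: Van der Pol trajectory with additive measurement noise.
def vdp_step(x, dt=0.01, mu=1.0):
    dx = np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])
    return x + dt * dx

rng = np.random.default_rng(0)
traj = [np.array([1.0, 0.0])]
for _ in range(2000):
    traj.append(vdp_step(traj[-1]))
traj = np.array(traj) + 0.01 * rng.standard_normal((len(traj), 2))  # noisy measurements
K = edmd(traj[:-1], traj[1:])
print("lifted operator shape:", K.shape)   # (6, 6)
```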
A key component of many model-based planning and control algorithms in robotics is physics prediction, that is, forecasting a sequence of states given an initial state and a sequence of controls. This process is slow and a major computational bottleneck for robotics planning algorithms. Parallel-in-time integration methods can help to leverage parallel computing to accelerate physics predictions and thus planning. The Parareal algorithm iterates between a coarse serial integrator and a fine parallel integrator. A key challenge is to devise a coarse model that is computationally cheap but accurate enough for Parareal to converge quickly. Here, we investigate the use of a deep neural network physics model as a coarse model for Parareal in the context of robotic manipulation. In simulated experiments using the physics engine MuJoCo as the fine propagator, we show that the learned coarse model leads to faster Parareal convergence than a coarse physics-based model. We further show that the learned coarse model makes it possible to apply Parareal to scenarios with multiple objects, where the physics-based coarse model is not applicable. Finally, we conduct experiments on a real robot and show that Parareal predictions are close to real-world physics predictions for robotic pushing of multiple objects. Videos are at https://youtu.be/wCh2o1rf-gA.
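A schematic view of the Parareal iteration is sketched below: a cheap coarse propagator G is corrected toward a fine propagator F, whose evaluations are independent across the time horizon and hence parallelisable. Both propagators here are toy stand-ins (neither MuJoCo nor the learned neural network model), and the dynamics and control sequence are arbitrary illustrations.

```python
# Schematic Parareal iteration: X_{k+1} <- G(X_k_new) + F(X_k_old) - G(X_k_old).
# The F evaluations inside each iteration are independent and could run in
# parallel; they are written serially here for clarity.
import numpy as np

def parareal(x0, controls, F, G, n_iters=3):
    """Return the state trajectory for the given control sequence."""
    N = len(controls)
    X = [x0]
    for k in range(N):                       # initial guess from the coarse model
        X.append(G(X[k], controls[k]))
    for _ in range(n_iters):
        F_vals = [F(X[k], controls[k]) for k in range(N)]   # parallelisable step
        G_old = [G(X[k], controls[k]) for k in range(N)]
        X_new = [x0]
        for k in range(N):                   # serial correction sweep
            X_new.append(G(X_new[k], controls[k]) + F_vals[k] - G_old[k])
        X = X_new
    return np.array(X)

# Toy 1-D pushing dynamics: the "fine" model includes friction, the coarse
# model ignores it. Both are placeholders for illustration only.
def F(x, u):
    pos, vel = x
    return np.array([pos + 0.1 * vel, 0.9 * vel + 0.1 * u])

def G(x, u):
    pos, vel = x
    return np.array([pos + 0.1 * vel, vel + 0.1 * u])

traj = parareal(np.array([0.0, 0.0]), np.ones(20), F, G)
print("final state estimate:", traj[-1])
```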