We present a system enabling a modular robot to autonomously build structures in order to accomplish high-level tasks. Building structures allows the robot to surmount large obstacles, expanding the set of tasks it can perform, and addresses a common weakness of modular robot systems, which often struggle to traverse large obstacles. This paper presents the hardware, perception, and planning tools that comprise our system. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions. Specially designed building blocks enable the robot to create structures that make obstacles traversable. A high-level planner reasons about the task, the robot's locomotion capabilities, and the environment to decide if and where to augment the environment in order to perform the desired task. We validate our system in hardware experiments.
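As a rough illustration of how such an environment characterization step might work, the sketch below labels traversable regions of a height map and lists cells where a single block of known height could connect two disconnected regions. The grid representation, the step-height rule, and all names (label_regions, max_step, block_height) are illustrative assumptions, not the system described above.

```python
# Minimal sketch (assumed, not the system above): label traversable regions of
# a height map and list cells where one block of known height could connect
# two otherwise disconnected regions. All names and rules are illustrative.
import numpy as np
from collections import deque

def label_regions(height, max_step):
    """Flood-fill regions whose adjacent cells differ in height by <= max_step."""
    labels = -np.ones(height.shape, dtype=int)
    next_label = 0
    for start in zip(*np.where(labels < 0)):
        if labels[start] >= 0:
            continue
        labels[start] = next_label
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < height.shape[0] and 0 <= nc < height.shape[1]
                        and labels[nr, nc] < 0
                        and abs(height[nr, nc] - height[r, c]) <= max_step):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
    return labels

def augmentation_candidates(height, labels, max_step, block_height):
    """Cells where a block placed on the lower side would join two regions."""
    rows, cols = height.shape
    candidates = []
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):              # right and down neighbors
                nr, nc = r + dr, c + dc
                if nr >= rows or nc >= cols or labels[r, c] == labels[nr, nc]:
                    continue
                gap = abs(height[r, c] - height[nr, nc])
                if abs(gap - block_height) <= max_step:  # block makes the step climbable
                    low = (r, c) if height[r, c] < height[nr, nc] else (nr, nc)
                    candidates.append((low, labels[r, c], labels[nr, nc]))
    return candidates

# Example: a 0.2 m step splits a corridor; a 0.15 m block placed at cell (0, 1)
# would reduce the step to 0.05 m, within the assumed 0.1 m step capability.
h = np.array([[0.0, 0.0, 0.2]])
lab = label_regions(h, max_step=0.1)
print(augmentation_candidates(h, lab, max_step=0.1, block_height=0.15))
```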
The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage and remains a major motivator for work in the field. We present a modular robot system capable of autonomously completing high-level tasks by reactively reconfiguring to meet the needs of a perceived, a priori unknown environment. The system integrates perception, high-level planning, and modular hardware, and is validated in three hardware demonstrations. Given a high-level task specification, a modular robot autonomously explores an unknown environment, decides when and how to reconfigure, and manipulates objects to complete its task. The system architecture balances distributed mechanical elements with centralized perception, planning, and control. By providing an example of how a modular robot system can be designed to leverage reactive reconfigurability in unknown environments, we have begun to lay the groundwork for modular self-reconfigurable robots to address tasks in the real world.
Quadrupeds are strong candidates for navigating challenging environments because of their agile and dynamic designs. This paper presents a methodology that extends the range of exploration for quadrupedal robots by creating an end-to-end navigation framework that exploits walking and jumping modes. To obtain a dynamic jumping maneuver while avoiding obstacles, dynamically feasible trajectories are optimized offline through collocation-based optimization with imposed safety constraints. This optimization scheme allows the robot to jump through window-shaped obstacles by considering obstacles both in the air and on the ground. The resulting jumping mode is used in an autonomous navigation pipeline that leverages a search-based global planner and a local planner to enable the robot to reach the goal location by walking. A state machine, together with a decision-making strategy, allows the system to switch between walking around obstacles and jumping through them. The proposed framework is experimentally deployed and validated on a quadrupedal robot, a Mini Cheetah, enabling it to autonomously navigate through an environment while avoiding obstacles and jumping over a maximum height of 13 cm to pass through a window-shaped opening in order to reach its goal.
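A minimal sketch of the kind of behavior switching such a state machine performs is given below, assuming a simple cost comparison between detouring and executing a precomputed jump; the Obstacle fields, MAX_JUMP_HEIGHT, and JUMP_COST are hypothetical values, not the decision rule used in the paper.

```python
# Minimal sketch of walk/jump mode switching, assuming a simple cost comparison;
# the Obstacle fields, MAX_JUMP_HEIGHT, and JUMP_COST are hypothetical values,
# not the decision rule used in the paper.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    WALK = auto()
    JUMP = auto()

@dataclass
class Obstacle:
    blocks_path: bool       # does the obstacle intersect the global plan?
    barrier_height: float   # height to clear below the window opening [m]
    detour_cost: float      # extra path length to walk around [m], inf if none

MAX_JUMP_HEIGHT = 0.13      # assumed limit, matching the 13 cm experiment above
JUMP_COST = 1.0             # assumed fixed cost charged for executing a jump

def select_mode(obstacle: Obstacle) -> Mode:
    """Jump when the window is clearable and cheaper than detouring; else walk."""
    if not obstacle.blocks_path:
        return Mode.WALK
    jump_feasible = obstacle.barrier_height <= MAX_JUMP_HEIGHT
    if jump_feasible and JUMP_COST < obstacle.detour_cost:
        return Mode.JUMP
    return Mode.WALK

# Example: a 0.10 m barrier under a window with no walk-around path forces a jump.
print(select_mode(Obstacle(blocks_path=True, barrier_height=0.10,
                           detour_cost=float("inf"))))   # Mode.JUMP
```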
This paper addresses task-allocation problems with uncertainty in situational awareness for distributed autonomous robots (DARs). Uncertainty is propagated through the task-allocation process using the unscented transform, which relies on sigma-point sampling. The approach has great potential to be employed with generic task-allocation schemes, since there is no need to modify an existing task-allocation method developed without considering uncertainty in situational awareness. The proposed framework was tested in a simulated environment in which the decision-maker must optimally allocate multiple locations to multiple mobile flying robots whose positions are random variables with known mean and covariance. The simulation results show that the proposed stochastic task-allocation approach generates an assignment with 30% less overall cost than one that ignores the uncertainty.
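The sketch below shows one way the unscented transform can wrap an unmodified assignment solver, assuming robot positions are Gaussian and the cost is Euclidean travel distance: sigma points of the stacked position vector are passed through the solver and their costs combined with the standard weights. Parameter defaults and the example numbers are illustrative only, not taken from the paper.

```python
# Sketch of unscented-transform uncertainty propagation through an unmodified
# assignment solver; the Euclidean cost model, parameter defaults, and example
# numbers are assumptions for illustration.
import numpy as np
from scipy.linalg import cholesky
from scipy.optimize import linear_sum_assignment

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard 2n+1 sigma points with mean (w_m) and covariance (w_c) weights."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = cholesky((n + lam) * cov, lower=True)     # columns are the offsets
    pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, w_m, w_c

def allocation_cost(robot_xy, task_xy):
    """Unmodified allocator: optimal assignment cost for fixed robot positions."""
    dists = np.linalg.norm(robot_xy[:, None, :] - task_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(dists)
    return dists[rows, cols].sum()

def expected_cost(robot_means, robot_cov, task_xy):
    """Mean and variance of the allocation cost under position uncertainty."""
    pts, w_m, w_c = sigma_points(robot_means.ravel(), robot_cov)
    costs = np.array([allocation_cost(p.reshape(-1, 2), task_xy) for p in pts])
    mean = w_m @ costs
    var = w_c @ (costs - mean) ** 2
    return mean, var

# Example: two robots with uncertain positions, two task locations.
means = np.array([[0.0, 0.0], [4.0, 0.0]])
cov = 0.25 * np.eye(4)                    # covariance of stacked (x1, y1, x2, y2)
tasks = np.array([[0.0, 3.0], [4.0, 3.0]])
print(expected_cost(means, cov, tasks))
```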
Building structures can allow a robot to surmount large obstacles, expanding the set of areas it can reach. This paper presents a planning algorithm to automatically determine what structures a construction-capable robot must build in order to traverse its entire environment. Given an environment, a set of building blocks, and a robot capable of building structures, we seek an optimal set of structures (using a minimum number of building blocks) that could be built to make the entire environment traversable with respect to the robot's movement capabilities. We show that this problem is NP-hard, and present a complete, optimal algorithm that solves it using a branch-and-bound strategy. The algorithm runs in exponential time in the worst case, but solves typical problems with practical speed. In hardware experiments, we show that the algorithm solves 3D maps of real indoor environments in about one minute, and that the structures it selects allow a robot to traverse the entire environment. An accompanying video is available online at https://youtu.be/B9WM557NP44.
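For concreteness, the following is a simplified branch-and-bound sketch for choosing a minimum-block set of candidate structures, where each candidate connects a pair of regions and the bound counts the cheapest structures still needed for connectivity. This is an assumed toy formulation, not the paper's algorithm.

```python
# Simplified branch-and-bound sketch (an assumed toy formulation, not the
# paper's algorithm): each candidate structure connects two regions and costs
# a number of blocks; find a minimum-block set that connects all regions.
import heapq
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Candidate:
    regions: tuple   # (region_a, region_b) the structure would connect
    blocks: int      # building blocks the structure consumes

def components(n_regions, chosen):
    """Union-find count of connected components after building `chosen`."""
    parent = list(range(n_regions))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for cand in chosen:
        a, b = (find(r) for r in cand.regions)
        if a != b:
            parent[a] = b
    return len({find(r) for r in range(n_regions)})

def min_blocks(n_regions, candidates):
    """Best-first branch and bound over include/exclude decisions."""
    candidates = sorted(candidates, key=lambda c: c.blocks)
    best = None
    frontier = [(0, 0, ())]               # (lower bound, next index, chosen so far)
    while frontier:
        bound, idx, chosen = heapq.heappop(frontier)
        if best is not None and bound >= sum(c.blocks for c in best):
            continue                      # prune: cannot beat the incumbent
        if components(n_regions, chosen) == 1:
            best = chosen                 # connected and cheaper than incumbent
            continue
        if idx == len(candidates):
            continue                      # disconnected and no candidates left
        cand = candidates[idx]
        for branch in (chosen + (cand,), chosen):        # include / exclude
            cost = sum(c.blocks for c in branch)
            remaining = components(n_regions, branch) - 1
            # Each missing connection needs at least one more structure, each
            # costing at least cand.blocks (candidates are sorted by cost).
            heapq.heappush(frontier, (cost + remaining * cand.blocks, idx + 1, branch))
    return best

# Example: three regions; two small structures (2 + 3 blocks) beat one of 6.
cands = [Candidate((0, 1), 2), Candidate((1, 2), 3), Candidate((0, 2), 6)]
print(min_blocks(3, cands))
```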
Modular soft robots combine the strengths of two traditionally separate areas of robotics. As modular robots, they can show robustness to individual failure and reconfigurability; as soft robots, they can deform and undergo large shape changes in order to adapt to their environment, and have inherent human safety. However, for sensing and communication these robots also combine the challenges of both: they require solutions that are scalable (low cost and complexity) and efficient (low power) to enable collectives of large numbers of robots, and these solutions must also be able to interface with the high extension ratio elastic bodies of soft robots. In this work, we seek to address these challenges using acoustic signals produced by piezoelectric surface transducers that are cheap, simple, and low power, and that not only integrate with but also leverage the elastic robot skins for signal transmission. Importantly, to further increase scalability, the transducers exhibit multi-functionality made possible by a relatively flat frequency response across the audible and ultrasonic ranges. With minimal hardware, they enable directional contact-based communication, audible-range communication at a distance, and exteroceptive sensing. We demonstrate a subset of the decentralized collective behaviors these functions make possible with multi-robot hardware implementations. The use of acoustic waves in this domain is shown to provide distinct advantages over existing solutions.
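As one hedged illustration of frequency-keyed acoustic messaging that such broadband transducers could support, the sketch below detects which of a few assumed tone frequencies dominates a recorded frame via an FFT peak; the sample rate, symbol-to-tone map, and thresholding are assumptions, not the authors' signal chain.

```python
# Hedged sketch of frequency-keyed acoustic messaging (not the authors' signal
# chain): detect which of a few assumed tone frequencies dominates a recorded
# frame via an FFT peak. Sample rate, tone map, and threshold are assumptions.
import numpy as np

FS = 44_100                                    # assumed sample rate [Hz]
TONE_FREQS = {0: 4_000, 1: 6_000, 2: 8_000}    # assumed symbol-to-tone map [Hz]

def detect_symbol(frame, threshold=10.0):
    """Return the symbol whose tone stands out most above the noise floor."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    noise_floor = np.median(spectrum) + 1e-12
    best, best_power = None, 0.0
    for symbol, f in TONE_FREQS.items():
        power = spectrum[np.argmin(np.abs(freqs - f))]   # magnitude at nearest bin
        if power > threshold * noise_floor and power > best_power:
            best, best_power = symbol, power
    return best

# Example: a synthetic 6 kHz tone decodes to symbol 1.
t = np.arange(2048) / FS
print(detect_symbol(np.sin(2 * np.pi * 6_000 * t)))      # 1
```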