Robotic Sculpting with Collision-free Motion Planning in Voxel Space

Added by Abhinav Jain
Publication date: 2019
Language: English

In this paper, we explore the task of robot sculpting. We propose a search-based planning algorithm to solve the problem of sculpting by material removal with a multi-axis manipulator. We generate collision-free trajectories for the manipulator using best-first search in voxel space, and we show a significant speedup of the algorithm by using octrees to decompose the voxel space. We demonstrate our approach in simulation by sculpting Michelangelo's Statue of David with a multi-axis manipulator, evaluate several metrics of the algorithm, and discuss future goals for the project.
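The abstract's core routine, greedy best-first search over a voxel grid, can be sketched as follows. This is only a minimal illustration under assumed interfaces: the helper names (`is_free`, `heuristic`, `neighbors`) and the Manhattan heuristic are not the authors' implementation.

```python
# Minimal sketch of best-first search over voxel coordinates.
# All helper names are illustrative assumptions.
import heapq

def best_first_search(start, goal, is_free, heuristic, neighbors):
    """Greedy best-first search over integer voxel coordinates.

    start, goal : (x, y, z) voxel indices
    is_free(v)  : True if the tool can occupy voxel v without collision
    heuristic(v): estimated cost-to-go from v to the goal
    neighbors(v): iterable of adjacent voxels (e.g. 6- or 26-connected)
    """
    frontier = [(heuristic(start), start)]
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []                       # reconstruct via parent pointers
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from and is_free(nxt):
                came_from[nxt] = current
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None                             # no collision-free voxel path found

def manhattan_to(goal):
    # Simple lattice heuristic: L1 distance to the goal voxel.
    return lambda v: sum(abs(a - b) for a, b in zip(v, goal))
```

In the same spirit, an octree decomposition can merge homogeneous regions of free or occupied voxels into larger cells, so far fewer states need to be expanded; this is one way to obtain the speedup the abstract reports.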

Related research

Robotic fiber positioner (RFP) arrays are becoming heavily adopted in wide-field, massively multiplexed spectroscopic survey instruments. RFP arrays decrease nightly operational overheads through rapid reconfiguration between fields and exposures. In comparison to similar instruments, SDSS-V has selected a very dense RFP packing scheme in which any point in a field is typically accessible to three or more robots. This design provides flexibility in target assignment, but it makes the task of collision-less trajectory planning especially challenging. We present two multi-agent, distributed control strategies that are highly efficient and computationally inexpensive for determining collision-free paths for RFPs in heavily overlapping workspaces. We demonstrate that a reconfiguration path between two arbitrary robot configurations can be efficiently found if a folded state, in which all robot arms are retracted and aligned in a lattice-like orientation, is inserted between the initial and final states. Although developed for SDSS-V, the approach we describe is generic and thus applicable to a wide range of RFP designs and layouts. Robotic fiber positioner technology continues to advance rapidly, and in the near future ultra-densely packed RFP designs may be feasible. Our algorithms are especially capable of routing paths in very crowded environments, where we see efficient results even in regimes significantly more crowded than the SDSS-V RFP design.
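The folded-state idea can be illustrated with a very small sketch; `retract_all` and `deploy_all` below are hypothetical stand-ins for the distributed controllers the abstract describes, not the SDSS-V code.

```python
# Illustrative only: retract_all / deploy_all are hypothetical stand-ins
# for the distributed controllers described in the abstract.
def reconfigure_via_folded_state(initial, target, retract_all, deploy_all):
    """Route between two arbitrary positioner configurations by passing
    through the folded state (all arms retracted into a lattice-like
    orientation).

    retract_all(config) -> per-robot paths from `config` to the folded state
    deploy_all(config)  -> per-robot paths from the folded state to `config`
    """
    fold_paths = retract_all(initial)      # initial -> folded
    unfold_paths = deploy_all(target)      # folded  -> target
    # Concatenate per robot: retract first, then deploy toward the target.
    return [f + u for f, u in zip(fold_paths, unfold_paths)]
```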
Online generation of collision-free trajectories is of prime importance for autonomous navigation. Dynamic environments, robot motion, and sensing uncertainties add further challenges to collision avoidance systems. This paper presents an approach for collision avoidance in dynamic environments that incorporates robot and obstacle state uncertainties. We derive a tight upper bound on the collision probability between robot and obstacle and formulate it as a motion planning constraint that is solvable in real time. The proposed approach is tested in simulation with mobile robots as well as quadrotors, demonstrating that successful collision avoidance is achieved in real-time applications. We also provide a comparison of our approach with several state-of-the-art methods.
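As a hedged illustration of turning a collision-probability bound into a planning constraint, the sketch below uses the standard half-space (Gaussian tail) bound on the probability that a Gaussian robot-obstacle offset falls inside a combined safety radius. This generic bound is not necessarily the tighter one derived in the paper, and all names are assumptions.

```python
# Generic chance-constraint check via a half-space bound on the collision
# ball; an illustration, not the paper's specific derivation.
import numpy as np
from math import erf, sqrt

def collision_probability_bound(mu_r, Sig_r, mu_o, Sig_o, radius):
    """Upper-bound P(||x_r - x_o|| <= radius) for Gaussian robot and
    obstacle states by enclosing the collision ball in a half-space."""
    mu = np.asarray(mu_r, float) - np.asarray(mu_o, float)   # relative mean
    Sig = np.asarray(Sig_r, float) + np.asarray(Sig_o, float)  # relative covariance
    dist = np.linalg.norm(mu)
    if dist < 1e-9:
        return 1.0                                 # means coincide: assume collision
    a = mu / dist                                  # unit normal toward the robot mean
    sigma = sqrt(float(a @ Sig @ a))               # std. dev. along that normal
    z = (radius - dist) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))        # Gaussian CDF at z

def satisfies_chance_constraint(mu_r, Sig_r, mu_o, Sig_o, radius, delta=0.01):
    # Accept a candidate state only if the bounded collision risk is small.
    return collision_probability_bound(mu_r, Sig_r, mu_o, Sig_o, radius) <= delta
```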
We present a neural network collision checking heuristic, ClearanceNet, and a planning algorithm, CN-RRT. ClearanceNet learns to predict the separation distance (minimum distance between the robot and the workspace) with respect to a workspace. CN-RRT then efficiently computes a motion plan by leveraging three key features of ClearanceNet. First, CN-RRT explores the space by expanding multiple nodes at the same time, processing batches of thousands of collision checks. Second, CN-RRT adaptively relaxes its clearance requirements for more difficult problems. Third, to repair errors, CN-RRT shifts its nodes in the direction of ClearanceNet's gradient and repairs any residual errors with a traditional RRT, thus maintaining theoretical probabilistic completeness guarantees. In configuration spaces with up to 30 degrees of freedom, ClearanceNet achieves an 845x speedup over traditional collision detection methods, while CN-RRT accelerates motion planning by up to 42% over a baseline and finds paths up to 36% more efficient. Experiments on an 11-degree-of-freedom robot in a cluttered environment confirm the method's feasibility on real robots.
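The two mechanisms the abstract highlights, batched clearance prediction and gradient-based node repair, might look roughly like the following PyTorch-style sketch. The model interface (`clearance_net`) and all hyperparameters are assumptions, not the released ClearanceNet or CN-RRT.

```python
# Hypothetical sketch of batched clearance queries and gradient-based
# node repair; not the authors' released code.
import torch

def batch_clearance(clearance_net, configs):
    """Predict separation distance for a whole batch of configurations
    at once (configs: tensor of shape [N, dof])."""
    with torch.no_grad():
        return clearance_net(configs).squeeze(-1)

def repair_nodes(clearance_net, configs, min_clearance=0.05,
                 step=0.01, iters=20):
    """Nudge low-clearance configurations along the gradient of the
    predicted clearance so they move away from predicted obstacles."""
    q = configs.clone().requires_grad_(True)
    for _ in range(iters):
        clearance = clearance_net(q).squeeze(-1)
        bad = clearance < min_clearance
        if not bad.any():
            break
        # Ascend the clearance prediction for the offending nodes only.
        grad, = torch.autograd.grad(clearance[bad].sum(), q)
        with torch.no_grad():
            q[bad] += step * grad[bad]
    return q.detach()
```

Residual nodes that still violate the clearance threshold after repair would then fall back to an exact collision checker inside a traditional RRT, which is how the abstract says probabilistic completeness is preserved.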
We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Of late, TMP for manipulation has attracted significant interest, resulting in a proliferation of different approaches; in contrast, TMP for navigation has received considerably less attention. Autonomous robots operating in real-world, complex scenarios require planning in both the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, a robot has to reason at the highest level, for example about the objects to procure and the regions to navigate to in order to acquire them, while the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. In this paper, we discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in large knowledge-intensive domains, returning a plan that is optimal at the task level. The framework is intended for motion planning under motion and sensing uncertainty, formally known as belief space planning. The underlying methodology is validated in simulation in an office environment, and its scalability is tested in the larger Willow Garage world. A comparison with the work closest to our approach is also provided. We further demonstrate the adaptability of our approach by considering a building-floor navigation domain. Finally, we discuss the limitations of our approach and put forward suggestions for improvements and future work.
We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Autonomous robots operating in real-world, complex scenarios require planning in both the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, a robot has to reason at the highest level, for example about the regions to navigate to, while the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. We discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in indoor domains, returning a plan that is optimal at the task level. Furthermore, our framework is intended for motion planning under motion and sensing uncertainty, formally known as belief space planning. The underlying methodology is validated in a simulated office environment in Gazebo. In addition, we discuss the limitations and provide suggestions for improvements and future work.
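The task-motion interaction described in the two abstracts above can be caricatured as a task planner that consults a motion-level feasibility check before committing to each action. The sketch below is only a rough illustration of that pattern (greedy rather than task-optimal), and every name in it is hypothetical.

```python
# Rough sketch of a motion-planning-aware task planner; all names
# (task_actions, motion_feasible, cost, goal) are hypothetical.
def plan_task_and_motion(state, goal, task_actions, motion_feasible, cost):
    """Greedy task-level planner that defers to a motion-level
    feasibility check before committing to each action."""
    plan = []
    while not goal(state):
        # Keep only task actions whose motions the motion planner accepts.
        feasible = [a for a in task_actions(state) if motion_feasible(state, a)]
        if not feasible:
            return None                    # no task action has an executable motion
        best = min(feasible, key=lambda a: cost(state, a))
        plan.append(best)
        state = best.apply(state)          # advance the abstract task state
    return plan
```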