Trajectory planning is a key piece in the algorithmic architecture of a robot. Trajectory planners typically use iterative optimization schemes to generate smooth trajectories that avoid collisions and are optimal for tracking given the robot's physical specifications. Starting from an initial estimate, the planner iteratively refines the solution so as to satisfy the desired constraints. In this paper, we show that such iterative-optimization-based planners can be vulnerable to adversarial attacks that force the planner either to fail completely or to take significantly longer to find a solution. The key insight is that an adversary in the environment can directly affect the planner's optimization cost function. We demonstrate how the adversary can adjust its own state configuration to induce a poorly conditioned eigenstructure in the objective, leading to failures. We apply our method against two state-of-the-art trajectory planners and demonstrate that an adversary can consistently exploit certain weaknesses of an iterative optimization scheme.
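The central claim, that a poorly conditioned eigenstructure of the planner's objective slows or breaks iterative optimization, can be illustrated with a toy quadratic cost. The sketch below is not the paper's attack: assuming a plain gradient-descent solver and a diagonal Hessian whose condition number stands in for the adversary's effect on the cost, it shows how the iteration count grows sharply (and eventually the solver fails to converge within its budget) as conditioning worsens. The function name, step-size rule, and constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy illustration (not the paper's method): gradient descent on a quadratic
# cost f(x) = 0.5 * x^T H x, where H stands in for the Hessian of a planner's
# objective. As the condition number of H grows (mimicking an adversarially
# skewed cost landscape), the iterations needed to converge grow sharply.

def iterations_to_converge(H, x0, tol=1e-6, max_iters=100_000):
    """Run gradient descent with the classic step size for a quadratic and
    return the iteration count; max_iters signals failure to converge."""
    eigvals = np.linalg.eigvalsh(H)           # ascending eigenvalues
    step = 2.0 / (eigvals[0] + eigvals[-1])   # optimal fixed step for quadratics
    x = x0.copy()
    for k in range(max_iters):
        grad = H @ x
        if np.linalg.norm(grad) < tol:
            return k
        x -= step * grad
    return max_iters

x0 = np.array([1.0, 1.0])
for kappa in [1e1, 1e3, 1e5]:
    H = np.diag([1.0, 1.0 / kappa])           # condition number = kappa
    iters = iterations_to_converge(H, x0)
    print(f"condition number {kappa:.0e}: {iters} iterations")
```

Running this, the well-conditioned case converges in tens of iterations, the moderately conditioned case in thousands, and the worst case exhausts its iteration budget, mirroring the failure modes the attack aims to trigger.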