
Act, Perceive, and Plan in Belief Space for Robot Localization

Published by Michele Colledanchise
Publication date: 2020
Research field: Informatics Engineering
Language: English





In this paper, we outline an interleaved acting and planning technique to rapidly reduce the uncertainty of the estimated robot's pose by perceiving relevant information from the environment, such as recognizing an object or asking someone for directions. Existing localization approaches generally rely on low-level geometric features such as points, lines, and planes. While these approaches provide the desired accuracy, they may require time to converge, especially with incorrect initial guesses. In our approach, a task planner computes a sequence of action and perception tasks to actively obtain relevant information from the robot's perception system. We validate our approach in large state spaces, to show how it scales, and in real environments, to show its applicability on real robots. We prove that our approach is sound, probabilistically complete, and tractable in practical cases.
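To make the interleaved loop concrete, below is a minimal Python sketch of the idea the abstract describes: maintain a discrete belief over candidate poses, plan by choosing the perception task with the lowest expected posterior entropy, then act, perceive, and update. The corridor-of-doors world, the sensor model, and all function names are illustrative assumptions, not the authors' implementation.

```python
import math, random

def entropy(belief):
    # Shannon entropy (bits) of a discrete belief over poses.
    return -sum(p * math.log(p, 2) for p in belief.values() if p > 0)

def bayes_update(belief, likelihood, z):
    # Standard Bayes update with an observation likelihood P(z | pose).
    post = {x: p * likelihood(x, z) for x, p in belief.items()}
    s = sum(post.values()) or 1.0
    return {x: p / s for x, p in post.items()}

def expected_entropy(belief, likelihood):
    # Expected posterior entropy after executing one perception task.
    h = 0.0
    for z in (True, False):
        pz = sum(p * likelihood(x, z) for x, p in belief.items())
        if pz > 0:
            h += pz * entropy(bayes_update(belief, likelihood, z))
    return h

# Candidate poses: doors along a corridor. The perception task "check door d"
# reports True iff the robot stands at door d, with 10% sensor noise.
poses = list(range(8))

def make_likelihood(door):
    def lik(x, z):
        p_true = 0.9 if x == door else 0.1
        return p_true if z else 1.0 - p_true
    return lik

belief = {x: 1.0 / len(poses) for x in poses}
true_pose = random.choice(poses)

while entropy(belief) > 0.5:
    # Plan: pick the perception task with the lowest expected posterior entropy.
    task = min(poses, key=lambda d: expected_entropy(belief, make_likelihood(d)))
    # Act and perceive: simulate the noisy reading, then update the belief.
    z = random.random() < (0.9 if true_pose == task else 0.1)
    belief = bayes_update(belief, make_likelihood(task), z)
    print(f"checked door {task}: observed {z}, entropy {entropy(belief):.2f} bits")
```

In this toy run the planner keeps querying the most informative door until the belief entropy drops below 0.5 bits, mirroring the act-perceive-plan cycle described above.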




Read also

Autonomous robots operating in large knowledge-intensive domains require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, robots have to reason at the highest level, for example about the regions to navigate to or the objects to be picked up and their properties; on the other hand, the feasibility of the respective navigation tasks has to be checked at the controller execution level. Moreover, employing multiple robots offers enhanced performance capabilities over a single robot performing the same task. To this end, we present an integrated multi-robot task-motion planning framework for navigation in knowledge-intensive domains. In particular, we consider a distributed multi-robot setting incorporating mutual observations between the robots. The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology and its limitations are discussed, providing suggestions for improvements and future work. We validate key aspects of our approach in simulation.
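As a toy illustration of the mutual-observation idea (an assumption-laden sketch, not this paper's framework): with 1-D Gaussian pose beliefs, a single relative measurement between two robots, fused with a standard Kalman update on the joint state, tightens and correlates both marginals.

```python
import numpy as np

# Prior belief over the stacked state [x1, x2]: both robots uncertain.
mu = np.array([0.0, 5.0])
P = np.diag([4.0, 4.0])

# Mutual observation model: z = x2 - x1 + v,  v ~ N(0, R).
H = np.array([[-1.0, 1.0]])
R = np.array([[0.1]])
z = np.array([4.7])              # simulated relative measurement

# Standard Kalman update on the joint belief.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
mu = mu + (K @ (z - H @ mu)).ravel()
P = (np.eye(2) - K @ H) @ P

print("posterior means:", mu)
print("posterior cov:\n", P)
```

Both posterior variances drop from 4.0 to roughly 2.0, and the off-diagonal covariance records the inter-robot correlation that a joint belief must track.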
High-definition maps (HD maps) are a key component of most modern self-driving systems due to their valuable semantic and geometric information. Unfortunately, building HD maps has proven hard to scale due to their cost as well as the requirements they impose on the localization system, which has to work everywhere with centimeter-level accuracy. Being able to drive without an HD map would be very beneficial to scale self-driving solutions as well as to increase the failure tolerance of existing ones (e.g., if localization fails or the map is not up-to-date). Towards this goal, we propose MP3, an end-to-end approach to mapless driving where the input is raw sensor data and a high-level command (e.g., turn left at the intersection). MP3 predicts intermediate representations in the form of an online map and the current and future state of dynamic agents, and exploits them in a novel neural motion planner to make interpretable decisions taking into account uncertainty. We show that our approach is significantly safer, more comfortable, and can follow commands better than the baselines in challenging long-term closed-loop simulations, as well as when compared to an expert driver in a large-scale real-world dataset.
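A schematic sketch of the staged interface this abstract describes may help; everything below (the Interpretables container, the perceive stand-in, the cost terms) is hypothetical and is not the MP3 codebase.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Interpretables:
    online_map: np.ndarray        # per-cell drivable-area probability
    agent_occupancy: np.ndarray   # current + future occupancy of dynamic agents

def perceive(lidar_sweep: np.ndarray, command: str) -> Interpretables:
    """Stand-in for the learned backbone: predicts map and agents online."""
    h, w = 100, 100
    return Interpretables(online_map=np.full((h, w), 0.5),
                          agent_occupancy=np.zeros((5, h, w)))  # 5 future steps

def plan(inter: Interpretables, candidates: list) -> np.ndarray:
    """Pick the candidate trajectory with the lowest expected cost, where
    cost penalizes leaving the drivable area and overlapping agents."""
    def cost(traj):
        c = 0.0
        for t, (x, y) in enumerate(traj):
            c += 1.0 - inter.online_map[y, x]                   # off-road risk
            c += 10.0 * inter.agent_occupancy[min(t, 4), y, x]  # collision risk
        return c
    return min(candidates, key=cost)

candidates = [np.array([[50, 50 + t] for t in range(5)]),
              np.array([[50 + t, 50] for t in range(5)])]
best = plan(perceive(np.zeros((1000, 3)), "turn left at the intersection"), candidates)
print(best)
```

The design point the abstract makes is that these intermediate representations can be inspected cell by cell, which is what renders the planner's decisions interpretable.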
In this paper we propose a novel end-to-end learnable network that performs joint perception, prediction and motion planning for self-driving vehicles and produces interpretable intermediate representations. Unlike existing neural motion planners, our motion planning costs are consistent with our perception and prediction estimates. This is achieved by a novel differentiable semantic occupancy representation that is explicitly used as cost by the motion planning process. Our network is learned end-to-end from human demonstrations. The experiments in a large-scale manual-driving dataset and closed-loop simulation show that the proposed model significantly outperforms state-of-the-art planners in imitating the human behaviors while producing much safer trajectories.
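The key mechanism here is that the occupancy predicted by perception enters the planner's cost directly and differentiably. Below is a minimal PyTorch sketch of that general idea; the grid size, trajectory, and sampling scheme are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn.functional as F

# Perception head output: occupancy logits over a 64x64 bird's-eye-view grid.
occ_logits = torch.zeros(1, 1, 64, 64, requires_grad=True)

# A candidate trajectory as (x, y) points in grid coordinates [0, 63].
traj = torch.tensor([[32.0, 30.0], [32.0, 40.0], [32.0, 50.0]])

# Normalize to grid_sample's [-1, 1] convention and sample occupancy along
# the trajectory; the whole chain stays differentiable.
grid = (traj / 63.0 * 2.0 - 1.0).view(1, 1, -1, 2)
occ = torch.sigmoid(occ_logits)
samples = F.grid_sample(occ, grid, align_corners=True)   # shape (1, 1, 1, 3)

cost = samples.sum()          # penalize driving through occupied cells
cost.backward()               # gradients reach the perception output
print(occ_logits.grad.abs().sum() > 0)   # True: cost depends on perception
```

Because the planning loss back-propagates into the occupancy logits, training the planner also shapes the perception output, which is the consistency between costs and estimates that the abstract emphasizes.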
We investigate the problem of autonomous object classification and semantic SLAM, which in general exhibits a tight coupling between classification, metric SLAM and planning under uncertainty. We contribute a unified framework for inference and belief space planning (BSP) that addresses prominent sources of uncertainty in this context: classification aliasing (the classifier cannot distinguish between candidate classes from certain viewpoints), classifier epistemic uncertainty (the classifier receives data far from its training set), and localization uncertainty (camera and object poses are uncertain). Specifically, we develop two methods for maintaining a joint distribution over robot and object poses, and over a posterior class probability vector that considers epistemic uncertainty in a Bayesian fashion. The first approach is Multi-Hybrid (MH), where multiple hybrid beliefs over poses and classes are maintained to approximate the joint belief over poses and posterior class probability. The second approach is Joint Lambda Pose (JLP), where the joint belief is maintained directly using a novel JLP factor. Furthermore, we extend both methods to BSP, planning while reasoning about future posterior epistemic uncertainty indirectly, or directly via a novel information-theoretic reward function. Both inference methods utilize a novel viewpoint-dependent classifier uncertainty model that leverages the coupling between poses and classification scores and predicts the epistemic uncertainty from certain viewpoints. In addition, this model is used to generate predicted measurements during planning. To the best of our knowledge, this is the first work that reasons about classifier epistemic uncertainty within semantic SLAM and BSP.
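As a heavily simplified sketch of the hypothesis-weighting idea behind maintaining multiple hybrid beliefs (the two-class, two-viewpoint world and the score model are invented for illustration; the actual method maintains joint beliefs over continuous robot and object poses):

```python
import math

classes = ["mug", "bowl"]
viewpoints = ["front", "top"]

# P(classifier score s | class c, viewpoint v): from some viewpoints the
# classes alias (near-identical likelihoods), so those views are uninformative.
def score_likelihood(s, c, v):
    aliased = (v == "top")            # mug and bowl look alike from above
    mean = 0.5 if aliased else (0.9 if c == "mug" else 0.2)
    return math.exp(-((s - mean) ** 2) / 0.02)

# Hybrid belief: one weight per (class, viewpoint) hypothesis.
belief = {(c, v): 0.25 for c in classes for v in viewpoints}

def update(belief, s):
    post = {h: w * score_likelihood(s, h[0], h[1]) for h, w in belief.items()}
    z = sum(post.values())
    return {h: w / z for h, w in post.items()}

belief = update(belief, s=0.85)       # a confident "mug"-like score
for h, w in sorted(belief.items(), key=lambda kv: -kv[1]):
    print(h, round(w, 3))
```

From the aliased "top" viewpoint the two classes receive nearly identical likelihoods, so observations taken there barely move the class weights; this is exactly the viewpoint-dependent ambiguity the framework reasons about when choosing future viewpoints.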
Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectations of itself. Being aware of the human teammates' expectations leads to robot behaviors that better align with human expectations, thus facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot cooperation with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to consider interactive settings where human and robot behaviors can influence each other. We term this new measure Interactive Plan Explicability. We compare the joint plan generated with the consideration of this measure using the fast forward planner (FF) with the plan created by FF without such consideration, as well as the plan created with actual human subjects. Results indicate that the explicability score of plans generated by our algorithm is comparable to the human plan, and better than the plan created by FF without considering the measure, implying that the plans created by our algorithm align better with the expected joint plans of the human during execution. This can lead to more efficient collaboration in practice.
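For a rough flavor of what comparing plans by explicability might look like, here is a toy scoring function; it is entirely hypothetical (the paper's measure is derived from models of human expectation, not simple action matching):

```python
# Toy explicability score: fraction of actions that match the plan the
# human expects, position by position.
def explicability(candidate, expected):
    matches = sum(a == b for a, b in zip(candidate, expected))
    return matches / max(len(candidate), len(expected))

expected_plan   = ["goto(door)", "open(door)", "goto(table)", "pick(cup)"]
plan_plain_ff   = ["goto(window)", "goto(door)", "open(door)", "goto(table)", "pick(cup)"]
plan_explicable = ["goto(door)", "open(door)", "goto(table)", "pick(cup)"]

print("FF without the measure:", explicability(plan_plain_ff, expected_plan))
print("FF with the measure:   ", explicability(plan_explicable, expected_plan))
```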