
PerceMon: Online Monitoring for Perception Systems

Added by Anand Balakrishnan
Publication date: 2021
Language: English





Perception algorithms in autonomous vehicles are vital for the vehicle to understand the semantics of its surroundings, including the detection and tracking of objects in the environment. The outputs of these algorithms are in turn used for decision-making in safety-critical scenarios like collision avoidance and automated emergency braking. It is therefore crucial to monitor such perception systems at runtime. However, because the outputs of perception systems are high-level, complex representations, testing and verifying these systems, especially at runtime, is a challenge. In this paper, we present PerceMon, a runtime monitoring tool that can monitor arbitrary specifications in Timed Quality Temporal Logic (TQTL) and its extensions with spatial operators. We integrate the tool with the CARLA autonomous vehicle simulation environment and the ROS middleware platform, and monitor properties of state-of-the-art object detection and tracking algorithms.
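
To make the runtime-monitoring idea concrete, below is a minimal, hypothetical sketch of an online monitor for a simple TQTL-style quality property over streaming object detections. The class and field names (Detection, QualityMonitor, obj_id, score) and the thresholds are illustrative assumptions for this page, not PerceMon's actual API.

    # Hypothetical online monitor for a TQTL-style property:
    # "always, every pedestrian detection has confidence >= min_score,
    #  and a tracked pedestrian is re-detected within max_gap frames."
    from dataclasses import dataclass

    @dataclass
    class Detection:
        obj_id: int     # track identifier assigned by the tracker
        cls: str        # predicted class label, e.g. "pedestrian"
        score: float    # detection confidence in [0, 1]

    class QualityMonitor:
        def __init__(self, min_score=0.7, max_gap=5):
            self.min_score = min_score
            self.max_gap = max_gap
            self.last_seen = {}   # obj_id -> frame of last sighting
            self.frame = -1

        def step(self, detections):
            """Consume one frame; return False once the property fails."""
            self.frame += 1
            for d in detections:
                if d.cls == "pedestrian":
                    if d.score < self.min_score:
                        return False  # low-confidence pedestrian detection
                    self.last_seen[d.obj_id] = self.frame
            # every tracked pedestrian must have been seen recently
            # (a real monitor would also handle objects leaving the scene)
            return all(self.frame - t <= self.max_gap
                       for t in self.last_seen.values())

In the setup the abstract describes, such a monitor would consume the detector's output stream (e.g., detections published over a ROS topic from CARLA) and evaluate the formula incrementally at every frame.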

Related Research

Peter Du, Zhe Huang, Tianqi Liu (2019)
As autonomous systems begin to operate amongst humans, methods for safe interaction must be investigated. We consider an example of a small autonomous vehicle in a pedestrian zone that must safely maneuver around people in a free-form fashion. We investigate two key questions: How can we effectively integrate pedestrian intent estimation into our autonomous stack? Can we develop an online monitoring framework to give formal guarantees on the safety of such human-robot interactions? We present a pedestrian intent estimation framework that can accurately predict future pedestrian trajectories given multiple possible goal locations. We integrate this into a reachability-based online monitoring scheme that formally assesses the safety of these interactions with near-real-time performance (approximately 0.3 seconds). These techniques are integrated on a test vehicle with a complete in-house autonomous stack, demonstrating effective and safe interaction in real-world experiments.
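
The following sketch illustrates the flavor of such a reachability-based safety check under a simple over-approximation: each pedestrian's reachable set over the monitoring horizon is a disc of radius v_max * horizon around the last observed position, and the interaction is flagged unsafe if the vehicle's planned path enters any disc. The parameter values and the disc abstraction are assumptions, not the authors' implementation.

    import math

    def is_interaction_safe(ped_positions, vehicle_path,
                            v_max=1.5, horizon=0.3, margin=0.5):
        """ped_positions: [(x, y)] last observed pedestrian positions [m].
        vehicle_path: [(x, y)] waypoints the vehicle occupies within the
        horizon. v_max [m/s], horizon [s], margin [m] are assumed values."""
        reach_radius = v_max * horizon + margin
        for px, py in ped_positions:
            for vx, vy in vehicle_path:
                if math.hypot(px - vx, py - vy) <= reach_radius:
                    return False  # planned path intersects a reachable disc
        return True
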
Autonomous or teleoperated robots have been playing increasingly important roles in civil applications in recent years. Across the different civil domains where robots can support human operators, one of the areas where they can have the greatest impact is search and rescue (SAR) operations. In particular, multi-robot systems have the potential to significantly improve the efficiency of SAR personnel through faster search for victims, initial assessment and mapping of the environment, real-time monitoring and surveillance of SAR operations, or establishing emergency communication networks, among other possibilities. SAR operations encompass a wide variety of environments and situations, and therefore heterogeneous and collaborative multi-robot systems can provide the most advantages. In this paper, we review and analyze the existing approaches to multi-robot SAR support from an algorithmic perspective, putting an emphasis on the methods enabling collaboration among the robots as well as advanced perception through machine vision and multi-agent active perception. Furthermore, we put these algorithms in the context of the different challenges and constraints that various types of robots (ground, aerial, surface or underwater) encounter in different SAR environments (maritime, urban, wilderness or other post-disaster scenarios). This is, to the best of our knowledge, the first review considering heterogeneous SAR robots across different environments while giving two complementary points of view: control mechanisms and machine perception. Based on our review of the state of the art, we discuss the main open research questions and outline our insights on the current approaches that have the potential to improve the real-world performance of multi-robot SAR systems.
Recent years have witnessed increasing interest in improving the perception performance of LiDARs on autonomous vehicles. While most existing works focus on developing novel model architectures to process point cloud data, we study the problem from an optimal-sensing perspective. To this end, together with a fast evaluation function based on ray tracing within the perception region of a LiDAR configuration, we propose an easy-to-compute, information-theoretic surrogate cost metric based on Probabilistic Occupancy Grids (POG) to optimize LiDAR placement for maximal sensing. We show a correlation between our surrogate function and common object detection performance metrics. We demonstrate the efficacy of our approach by verifying our results in a robust and reproducible data collection and extraction framework based on the CARLA simulator. Our results confirm that sensor placement is an important factor in 3D point-cloud-based object detection and can lead to a performance variation of 10% to 20% on state-of-the-art perception algorithms. We believe this is one of the first studies to use LiDAR placement to improve perception performance.
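
As a rough illustration of such a surrogate, one can score a candidate LiDAR placement by the total binary entropy of the occupancy-grid cells its rays can reach: the more resolvable uncertainty a placement covers, the more informative it is. The grid values and visibility mask below are hypothetical inputs, and the paper's exact cost metric may differ.

    import numpy as np

    def binary_entropy(p):
        # entropy of a Bernoulli cell, clipped to avoid log(0)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def placement_score(occupancy_prob, visible_mask):
        """occupancy_prob: HxW prior occupancy probabilities (POG).
        visible_mask: HxW boolean array from ray tracing one placement."""
        return float(np.sum(binary_entropy(occupancy_prob) * visible_mask))

    # Choosing among candidate placements then reduces to an argmax:
    # best = max(candidates, key=lambda m: placement_score(pog, m))
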
Autonomous vehicles rely on their perception systems to acquire information about their immediate surroundings. It is necessary to detect the presence of other vehicles, pedestrians and other relevant entities. Safety concerns and the need for accurate estimation have led to the introduction of Light Detection and Ranging (LiDAR) systems to complement camera- or radar-based perception systems. This article presents a review of state-of-the-art automotive LiDAR technologies and the perception algorithms used with them. LiDAR systems are introduced first by analyzing their main components, from the laser transmitter to the beam scanning mechanism. The advantages and disadvantages of various solutions, and their current status, are introduced and compared. Then, the perception pipeline for LiDAR data processing is detailed from an autonomous vehicle perspective. Model-driven approaches and emerging deep learning solutions are reviewed. Finally, we provide an overview of the limitations, challenges and trends for automotive LiDARs and perception systems.
In this paper, we consider the dynamic multi-robot distribution problem, in which a heterogeneous group of networked robots is tasked to spread out and simultaneously move towards multiple moving task areas while maintaining connectivity. The heterogeneity of the system is characterized by various categories of units, with each robot carrying a different number of units per category, representing heterogeneous capabilities. Each task area has its own importance and demands a total number of units contributed by all of the robots within it. Moreover, we assume the importance of each task area and the total number of units it requests are initially unknown. The robots first need to explore, i.e., reach those areas, and then be allocated to the tasks so as to fulfill the requirements. The multi-robot distribution problem is formulated as designing controllers that distribute the robots to maximize overall task fulfillment while minimizing traveling costs in the presence of connectivity constraints. We propose a novel connectivity-aware multi-robot redistribution approach that accounts for dynamic task allocation and connectivity maintenance for a heterogeneous robot team. This approach generates sub-optimal robot controllers that minimize the total unfulfilled task requirements, weighted by their importance, while keeping the robots connected at all times. Simulation and numerical results are provided to demonstrate the effectiveness of the proposed approach.
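
Two of the ingredients this abstract describes can be sketched compactly: a connectivity check on the communication graph induced by a fixed radius, and a greedy allocation of robot capacity to tasks weighted by importance. The radius, capacities and greedy rule below are assumptions for illustration, not the authors' controller, which additionally handles motion and dynamic task areas.

    import math
    from collections import deque

    def is_connected(positions, comm_radius):
        """BFS over the graph linking robots within comm_radius of each other."""
        n = len(positions)
        if n == 0:
            return True
        seen, queue = {0}, deque([0])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if j not in seen and math.dist(positions[i], positions[j]) <= comm_radius:
                    seen.add(j)
                    queue.append(j)
        return len(seen) == n

    def greedy_allocate(robot_units, task_demands, task_importance):
        """Assign each robot's units to the most important unmet task."""
        unmet = dict(task_demands)        # task -> units still needed
        assignment = {}
        for robot, units in robot_units.items():
            open_tasks = [t for t, need in unmet.items() if need > 0]
            if not open_tasks:
                break
            t = max(open_tasks, key=lambda k: task_importance[k])
            give = min(units, unmet[t])
            assignment[robot] = (t, give)
            unmet[t] -= give
        return assignment, unmet
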
