
A Study of the Minimum Safe Distance between Human Driven and Driverless Cars Using Safe Distance Model

Added by Tesfaye Yimer
Publication date: 2020
Language: English





When driving, it is vital to maintain the right following distance between vehicles to avoid rear-end collisions. The minimum safe distance depends on many factors; in this study, the safe distance between a human-driven vehicle and a fully autonomous vehicle that stops suddenly under automatic emergency braking was examined based on the human driver's ability to react, the vehicles' braking performance, and the vehicles' speed. For this purpose, a safe distance car-following model was proposed to describe the safe distance between vehicles on a single-lane dry road under conditions where both vehicles travel at a constant speed and the lead autonomous vehicle suddenly stops by automatic emergency braking at an imminent incident. The proposed model was then tested in a MATLAB simulation. The results confirmed the effectiveness of the model and showed the influence of driving speed and inter-vehicle distance on rear-end collisions, compared against the two-second and three-second safe-following-distance rules. The three-second rule is safe to apply at all speed limits, whereas the two-second rule can be used at speed limits up to 45 km/h. According to the simulation results, a noticeable increase in rear-end collisions occurs when a car follows a driverless vehicle using the two-second rule above 45 km/h.
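
As a rough illustration of the kinematics behind such a model, the sketch below compares a stopping-distance-based minimum gap against the two- and three-second rules. It is a minimal sketch assuming illustrative values for reaction time, decelerations, and standstill margin; these are not the paper's calibrated parameters.

```python
# Minimal kinematic sketch: minimum gap so a human-driven follower stops
# behind a lead autonomous vehicle that brakes to a halt under automatic
# emergency braking. All parameter values are illustrative assumptions.

def min_safe_gap(v, t_react=1.5, a_follow=6.0, a_lead=9.0, margin=2.0):
    """Minimum initial gap (m) for both vehicles starting at speed v (m/s).

    t_react  -- assumed human reaction time (s)
    a_follow -- assumed follower braking deceleration (m/s^2)
    a_lead   -- assumed lead-AV emergency deceleration (m/s^2)
    margin   -- assumed standstill safety margin (m)
    """
    d_follow = v * t_react + v**2 / (2.0 * a_follow)  # reaction + braking travel
    d_lead = v**2 / (2.0 * a_lead)                    # AV stops in less distance
    return max(d_follow - d_lead + margin, margin)

for kmh in (30, 45, 60, 90, 120):
    v = kmh / 3.6
    print(f"{kmh:>3} km/h: need {min_safe_gap(v):5.1f} m | "
          f"2-s rule: {2 * v:5.1f} m | 3-s rule: {3 * v:5.1f} m")
```

With these assumed values the two-second gap falls short above roughly 43 km/h, which is consistent in spirit with the paper's 45 km/h threshold; the exact crossover depends entirely on the assumed reaction time and decelerations.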




Related research

Sampling-based methods such as Rapidly-exploring Random Trees (RRTs) have been widely used to generate motion paths for autonomous mobile systems. In this work, we extend time-based RRTs with Control Barrier Functions (CBFs) to generate safe motion plans in dynamic environments with many pedestrians. Our framework builds on a human motion prediction model well suited to narrow indoor environments. We demonstrate our approach on a high-fidelity model of the Toyota Human Support Robot navigating narrow corridors and show in three scenarios that the proposed online method navigates safely in the presence of moving agents with unknown dynamics.
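
To make the mechanism concrete, here is a toy sketch of the kind of check a discrete-time CBF adds when a time-parameterized RRT proposes a new node. The constant-velocity pedestrian model, the barrier choice, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Toy CBF gate for time-based RRT extension: accept a candidate node only if
# the barrier h decays no faster than h(x+) >= (1 - gamma) * h(x) permits.
# Pedestrian model and parameters are assumed for illustration.
import math

GAMMA = 0.2    # assumed CBF decay rate in (0, 1]
R_SAFE = 0.6   # assumed combined robot/pedestrian radius (m)

def predict_pedestrian(p0, vel, t):
    """Constant-velocity stand-in for a learned indoor motion model."""
    return (p0[0] + vel[0] * t, p0[1] + vel[1] * t)

def h(robot_xy, ped_xy):
    """Barrier: positive while the robot is outside the safety radius."""
    return math.hypot(robot_xy[0] - ped_xy[0], robot_xy[1] - ped_xy[1]) - R_SAFE

def cbf_extension_ok(x_parent, t_parent, x_new, t_new, ped0, ped_vel):
    h_parent = h(x_parent, predict_pedestrian(ped0, ped_vel, t_parent))
    h_new = h(x_new, predict_pedestrian(ped0, ped_vel, t_new))
    return h_new >= (1.0 - GAMMA) * h_parent

# A candidate edge heading toward a crossing pedestrian is rejected:
print(cbf_extension_ok((0.0, 0.0), 0.0, (0.5, 0.0), 0.5,
                       ped0=(2.0, -1.0), ped_vel=(0.0, 1.0)))  # False
```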
Human motion prediction is non-trivial in modern industrial settings. Accurate prediction of human motion can not only improve efficiency in human-robot collaboration but also enhance human safety in close proximity to robots. Existing prediction models vary in their parameterization and identification methods. It remains unclear what parameterization a prediction model requires, whether online adaptation of the model is necessary, and whether prediction can improve safety and efficiency during human-robot collaboration. These questions are hard to answer because it is difficult to quantitatively evaluate prediction models in a closed-loop fashion in real human-robot interaction settings. This paper develops a method to evaluate the closed-loop performance of different prediction models. In particular, we compare models with different parameterizations and models with or without online parameter adaptation. Extensive experiments were conducted on a human-robot collaboration platform. The experimental results demonstrated that human motion prediction significantly enhanced collaboration efficiency and human safety. Adaptable prediction models parameterized by neural networks achieved the best performance.
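
As a minimal illustration of why online adaptation matters, the sketch below adapts a linear one-step velocity predictor with a single gradient step per observation; it is deliberately far simpler than the neural-network models the paper evaluates, and every name and value is assumed.

```python
# Linear autoregressive predictor v_{t+1} ~= W v_t with online SGD adaptation.
# A tiny stand-in for the paper's adaptable neural models.
import numpy as np

class AdaptivePredictor:
    def __init__(self, dim=2, lr=0.5):
        self.W = np.eye(dim)  # start from an identity (constant-velocity) model
        self.lr = lr          # assumed adaptation rate

    def predict(self, v):
        return self.W @ v

    def update(self, v_prev, v_obs):
        """One SGD step on the squared one-step prediction error."""
        err = self.W @ v_prev - v_obs
        self.W -= self.lr * np.outer(err, v_prev)

# Hand velocity (m/s) whose pattern changes halfway through the stream
stream = [np.array([1.0, 0.0])] * 100 + [np.array([1.0, 0.5])] * 100
pred = AdaptivePredictor()
for v_prev, v_next in zip(stream, stream[1:]):
    pred.update(v_prev, v_next)
print(pred.predict(np.array([1.0, 0.5])))  # ~[1.0, 0.5] after adaptation
```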
Industrial standards define safety requirements for Human-Robot Collaboration (HRC) in industrial manufacturing. In particular, the standards require real-time monitoring and enforcement of the minimum protective distance between a robot and an operator. In this work, we propose a depth-sensor-based model for workspace monitoring and an interactive Augmented Reality (AR) User Interface (UI) for safe HRC. The AR UI is implemented on two different hardware setups: a projector-mirror system and wearable AR gear (HoloLens). We evaluate the workspace model and UIs on a realistic diesel motor assembly task. The AR-based interactive UIs reduce task completion time by 21-24% and robot idle time by 57-64% compared with a baseline without interaction and workspace sharing. However, subjective evaluations reveal that HoloLens-based AR is not yet suitable for industrial manufacturing, while the projector-mirror setup shows clear improvements in safety and work ergonomics.
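
For context, the protective distance such standards prescribe can be written down compactly: the speed-and-separation-monitoring relation of ISO/TS 15066 sums human and robot travel during the reaction and stopping phases plus intrusion and uncertainty terms. The sketch below is a hedged rendering of that relation with illustrative parameter values; it is not the paper's monitoring pipeline.

```python
# Hedged sketch of the minimum protective separation distance S_p from
# speed-and-separation monitoring (ISO/TS 15066). Values are illustrative;
# the linear-deceleration stopping term is an additional assumption.

def protective_distance(v_human, v_robot, t_react, t_stop,
                        intrusion=0.85, z_human=0.1, z_robot=0.05):
    """Minimum human-robot separation (m) to maintain.

    v_human -- human approach speed (m/s); ISO 13855 assumes 1.6 m/s walking
    v_robot -- robot speed toward the human (m/s)
    t_react -- system reaction time (s)
    t_stop  -- robot stopping time (s)
    intrusion, z_* -- intrusion distance and sensing uncertainties (m), assumed
    """
    s_h = v_human * (t_react + t_stop)  # human travel until the robot is stopped
    s_r = v_robot * t_react             # robot travel before braking begins
    s_s = 0.5 * v_robot * t_stop        # robot travel while braking (assumed linear)
    return s_h + s_r + s_s + intrusion + z_human + z_robot

print(f"{protective_distance(1.6, 0.5, 0.1, 0.3):.2f} m")  # roughly 1.77 m
```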
The need to guarantee safety of collaborative robots limits their performance, in particular, their speed and hence cycle time. The standard ISO/TS 15066 defines the Power and Force Limiting operation mode and prescribes force thresholds that a moving robot is allowed to exert on human body parts during impact, along with a simple formula to obtain maximum allowed speed of the robot in the whole workspace. In this work, we measure the forces exerted by two collaborative manipulators (UR10e and KUKA LBR iiwa) moving downward against an impact measuring device. First, we empirically show that the impact forces can vary by more than 100 percent within the robot workspace. The forces are negatively correlated with the distance from the robot base and the height in the workspace. Second, we present a data-driven model, 3D Collision-Force-Map, predicting impact forces from distance, height, and velocity and demonstrate that it can be trained on a limited number of data points. Third, we analyze the force evolution upon impact and find that clamping never occurs for the UR10e. We show that formulas relating robot mass, velocity, and impact forces from ISO/TS 15066 are insufficient -- leading both to significant underestimation and overestimation and thus to unnecessarily long cycle times or even dangerous applications. We propose an empirical method that can be deployed to quickly determine the optimal speed and position where a task can be safely performed with maximum efficiency.
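
For reference, the simple ISO/TS 15066 relation the paper tests boils down to one formula: the maximum relative speed follows from the body-region force limit, a two-body reduced mass, and an assumed contact stiffness. The sketch below uses illustrative values; the paper's finding is precisely that a workspace-independent speed derived this way can both under- and overestimate the real impact force.

```python
# v_max = F_max / sqrt(mu * k): ISO/TS 15066-style speed limit from a force
# limit F_max, reduced mass mu, and contact stiffness k. Values are assumed.
import math

def iso_ts15066_max_speed(f_max, m_human, m_robot, stiffness):
    """Maximum relative speed (m/s) keeping the impact force under f_max."""
    mu = 1.0 / (1.0 / m_human + 1.0 / m_robot)  # two-body reduced mass (kg)
    return f_max / math.sqrt(mu * stiffness)

# Assumed hand-region values: 140 N force limit, 75 kN/m stiffness,
# 0.6 kg effective human mass, 20 kg effective robot mass
print(f"{iso_ts15066_max_speed(140.0, 0.6, 20.0, 75e3):.2f} m/s")  # ~0.67 m/s
```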
In recent years, reinforcement learning and learning-based control -- as well as the study of their safety, crucial for deployment in real-world robots -- have gained significant traction. However, to adequately gauge the progress and applicability of new results, we need tools to equitably compare the approaches proposed by the controls and reinforcement learning communities. Here, we propose a new open-source benchmark suite called safe-control-gym. Our starting point is OpenAI's Gym API, one of the de facto standards in reinforcement learning research. Yet we highlight the reasons for its limited appeal to control theory researchers -- and safe control in particular -- such as the lack of analytical models and constraint specifications. Thus, we propose to extend this API with (i) the ability to specify (and query) symbolic models and constraints and (ii) simulated disturbances in the control inputs, measurements, and inertial properties. We provide implementations for three dynamic systems -- the cart-pole and the 1D and 2D quadrotors -- and two control tasks -- stabilization and trajectory tracking. To demonstrate our proposal, and in an attempt to bring the research communities closer together, we show how safe-control-gym can be used to quantitatively compare the control performance, data efficiency, and safety of multiple approaches from traditional control, learning-based control, and reinforcement learning.
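
The suite's real interface should be taken from its repository; purely as a hypothetical sketch (not safe-control-gym's actual API), the wrapper below shows the two kinds of extension the abstract describes, a queryable constraint and an injected input disturbance, layered on the classic Gym API.

```python
# Hypothetical constraint-and-disturbance wrapper over the classic (pre-0.26)
# Gym API. Names and thresholds are assumptions for illustration only.
import gym
import numpy as np

class ConstrainedDisturbedEnv(gym.Wrapper):
    def __init__(self, env, theta_limit=0.15, flip_prob=0.05):
        super().__init__(env)
        self.theta_limit = theta_limit  # assumed pole-angle constraint (rad)
        self.flip_prob = flip_prob      # assumed actuation-disturbance rate

    def constraint_values(self, obs):
        """Constraints in g(x) <= 0 form: here, |theta| - theta_limit."""
        return np.array([abs(obs[2]) - self.theta_limit])

    def step(self, action):
        # CartPole-v1 actions are discrete, so emulate an input disturbance
        # by occasionally flipping the commanded action.
        if np.random.rand() < self.flip_prob:
            action = 1 - action
        obs, reward, done, info = self.env.step(action)
        info["constraint_violation"] = bool((self.constraint_values(obs) > 0).any())
        return obs, reward, done, info

env = ConstrainedDisturbedEnv(gym.make("CartPole-v1"))
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(info["constraint_violation"])
```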
