
Increasing performance of electric vehicles in ride-hailing services using deep reinforcement learning

Posted by Ruben Glatt
Publication date: 2019
Language: English





New forms of on-demand transportation such as ride-hailing and connected autonomous vehicles are proliferating, yet they are a challenging use case for electric vehicles (EVs). This paper explores the feasibility of using deep reinforcement learning (DRL) to optimize a driving and charging policy for a ride-hailing EV agent, with the goal of reducing costs and emissions while increasing the transportation service provided. We introduce a data-driven simulation of a ride-hailing EV agent that provides transportation service and charges at congested charging infrastructure. We then formulate a test case for the agent's sequential driving and charging decision-making problem and apply DRL to optimize the agent's decision-making policy. We evaluate the performance against hand-written policies and show that our agent learns to act competitively without any prior knowledge.
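The sequential driving and charging decision problem described above can be viewed as a small Markov decision process. Below is a toy, pure-Python version with tabular Q-learning standing in for the paper's DRL agent; the battery levels, fare, congestion cost, and episode length are invented for illustration and are not taken from the paper's simulation.

```python
import random

# Toy driving/charging MDP: state = battery level 0..3; the agent either
# serves a ride (earns a fare, drains one unit), charges (pays electricity
# and congestion cost, gains one unit), or idles. All numbers are invented.
ACTIONS = ["serve", "charge", "idle"]

def step(battery, action):
    """Return (next_battery, reward) for one decision epoch."""
    if action == "serve":
        if battery == 0:
            return 0, -5.0            # stranded: cannot serve when empty
        return battery - 1, 4.0       # fare revenue
    if action == "charge":
        return min(battery + 1, 3), -1.0   # energy + congestion cost
    return battery, 0.0               # idle

def train(episodes=2000, horizon=20, eps=0.1, alpha=0.2, gamma=0.95):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {(b, a): 0.0 for b in range(4) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        b = 2
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)                     # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(b, x)])   # exploit
            nb, r = step(b, a)
            best_next = max(q[(nb, x)] for x in ACTIONS)
            q[(b, a)] += alpha * (r + gamma * best_next - q[(b, a)])
            b = nb
    return q

q = train()
greedy = {b: max(ACTIONS, key=lambda a: q[(b, a)]) for b in range(4)}
print(greedy)   # learned policy: serve when charged, charge when empty
```

Even this tiny agent reproduces the qualitative behavior the paper optimizes for: it learns when to forgo fares and pay for charging, without any hand-written rules.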




Read also

The rapid growth of ride-hailing platforms has created a highly competitive market where businesses struggle to make profits, demanding the need for better operational strategies. However, real-world experiments are risky and expensive for these platforms as they deal with millions of users daily. Thus, a need arises for a simulated environment where they can predict users' reactions to changes in platform-specific parameters such as trip fares and incentives. Building such a simulation is challenging, as these platforms exist within dynamic environments where thousands of users regularly interact with one another. This paper presents a framework to mimic and predict user (specifically, driver) behaviors in ride-hailing services. We use a data-driven hybrid reinforcement learning and imitation learning approach for this. First, the agent utilizes behavioral cloning to mimic driver behavior using a real-world data set. Next, reinforcement learning is applied on top of the pre-trained agents in a simulated environment, to allow them to adapt to changes in the platform. Our framework provides an ideal playground for ride-hailing platforms to experiment with platform-specific parameters to predict drivers' behavioral patterns.
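The first stage, behavioral cloning, is ordinary supervised learning on logged state-action pairs. A minimal sketch, assuming a discrete driver state and a majority-vote "model" in place of a neural network; the states and actions below are invented placeholders, not the paper's data set.

```python
from collections import Counter, defaultdict

# Logged (driver state, driver action) pairs; invented placeholder data.
log = [
    ("downtown_peak", "accept"), ("downtown_peak", "accept"),
    ("downtown_peak", "reject"),
    ("suburb_offpeak", "reject"), ("suburb_offpeak", "reject"),
    ("suburb_offpeak", "accept"),
]

def behavioral_clone(records):
    """Fit the most frequent logged action per state."""
    counts = defaultdict(Counter)
    for state, action in records:
        counts[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

policy = behavioral_clone(log)
print(policy)   # {'downtown_peak': 'accept', 'suburb_offpeak': 'reject'}
```

The second stage then fine-tunes such pre-trained policies with reinforcement learning inside the simulator, so they can drift away from the logged behavior when platform parameters change.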
Chao Wang, Yi Hou, 2019
Ride-hailing services are growing rapidly and becoming one of the most disruptive technologies in the transportation realm. Accurate prediction of ride-hailing trip demand not only enables cities to better understand people's activity patterns, but also helps ride-hailing companies and drivers make informed decisions to reduce deadheading vehicle miles traveled, traffic congestion, and energy consumption. In this study, a convolutional neural network (CNN)-based deep learning model is proposed for multi-step ride-hailing demand prediction using the trip request data in Chengdu, China, offered by DiDi Chuxing. The CNN model is capable of accurately predicting the ride-hailing pick-up demand at each 1-km by 1-km zone in the city of Chengdu for every 10 minutes. Compared with another deep learning model based on long short-term memory, the CNN model is 30% faster for the training and predicting process. The proposed model can also be easily extended to make multi-step predictions, which would benefit on-demand shared autonomous vehicle applications and fleet operators in terms of supply-demand rebalancing. The prediction error attenuation analysis shows that the accuracy stays acceptable as the model predicts more steps.
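The model's basic building block is a 2-D convolution applied over the zone grid. A dependency-free sketch of one "valid" convolution over a toy demand grid (grid values and kernel are invented; the real model stacks many learned kernels):

```python
def conv2d(grid, kernel):
    """'Valid' 2-D convolution (no padding) of a demand grid with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(grid) - kh + 1):
        row = []
        for j in range(len(grid[0]) - kw + 1):
            row.append(sum(grid[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

demand = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]                         # pick-ups per zone in one 10-min bin
smooth = [[1 / 9] * 3 for _ in range(3)]     # averaging kernel
print(conv2d(demand, smooth))                # ~[[5.0]], the grid's mean demand
```

A learned kernel replaces the fixed averaging weights here; stacking such layers lets the network pick up spatial demand patterns between neighboring zones.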
Hybrid-electric propulsion systems powered by clean energy derived from renewable sources offer a promising approach to decarbonise the world's transportation systems. Effective energy management systems are critical for such systems to achieve optimised operational performance. However, developing an intelligent energy management system for applications such as ships operating in a highly stochastic environment and requiring concurrent control over multiple power sources presents challenges. This article proposes an intelligent energy management framework for hybrid-electric propulsion systems using deep reinforcement learning. In the proposed framework, a Twin-Delayed Deep Deterministic Policy Gradient agent is trained using an extensive volume of historical load profiles to generate a generic energy management strategy. The strategy, i.e. the core of the energy management system, can concurrently control multiple power sources in continuous state and action spaces. The proposed framework is applied to a coastal ferry model with multiple fuel cell clusters and a battery, achieving near-optimal cost performance when applied to novel future voyages.
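The Twin-Delayed Deep Deterministic Policy Gradient (TD3) agent mentioned above owes its name to its two critics: Bellman targets use the smaller of the two target-critic estimates, which damps the value overestimation a single critic tends to accumulate. A minimal numeric sketch of that target computation (all values illustrative):

```python
def td3_target(reward, gamma, q1_next, q2_next, done=False):
    """Clipped double-Q Bellman target used to train both TD3 critics."""
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)

# When the two target critics disagree, the pessimistic estimate wins.
y = td3_target(reward=1.0, gamma=0.99, q1_next=10.0, q2_next=8.0)
print(y)   # 1.0 + 0.99 * 8.0 = 8.92
```

Together with delayed actor updates and target-policy smoothing, this is what lets the agent learn stable continuous control over multiple power sources.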
Real-time vehicle dispatching operations in traditional car-sharing systems is an already computationally challenging scheduling problem. Electrification only exacerbates the computational difficulties as charge level constraints come into play. To overcome this complexity, we employ an online minimum drift plus penalty (MDPP) approach for SAEV systems that (i) does not require a priori knowledge of customer arrival rates to the different parts of the system (i.e. it is practical from a real-world deployment perspective), (ii) ensures the stability of customer waiting times, (iii) ensures that the deviation of dispatch costs from a desirable dispatch cost can be controlled, and (iv) has a computational time-complexity that allows for real-time implementation. Using an agent-based simulator developed for SAEV systems, we test the MDPP approach under two scenarios with real-world calibrated demand and charger distributions: 1) a low-demand scenario with long trips, and 2) a high-demand scenario with short trips. The comparisons with other algorithms under both scenarios show that the proposed online MDPP outperforms all other algorithms in terms of both reduced customer waiting times and vehicle dispatching costs.
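A drift-plus-penalty rule of this kind picks, at each decision epoch, the action minimizing V·cost(a) minus the queue-weighted service it delivers, where the tuning parameter V trades dispatch cost against waiting-time (queue) stability. A toy sketch with invented actions and queue lengths:

```python
def mdpp_choose(actions, queues, V):
    """Greedy minimum drift-plus-penalty action selection."""
    def score(a):
        drift = -sum(q * s for q, s in zip(queues, a["service"]))
        return V * a["cost"] + drift
    return min(actions, key=score)

actions = [
    {"name": "dispatch_near_ev", "cost": 2.0, "service": [1, 0]},
    {"name": "dispatch_far_ev",  "cost": 5.0, "service": [0, 1]},
    {"name": "hold",             "cost": 0.0, "service": [0, 0]},
]
queues = [4, 1]    # customers waiting in two zones

print(mdpp_choose(actions, queues, V=1.0)["name"])    # serves the long queue
print(mdpp_choose(actions, queues, V=10.0)["name"])   # high V: cost dominates
```

Because the rule only needs current queue lengths and per-action costs, it requires no forecast of customer arrival rates, which is what makes it practical online.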
With the advances in Internet of Things technology, electric vehicles (EVs) have become easier to schedule in daily life, which is reshaping the electric load curve. It is important to design efficient charging algorithms to mitigate the negative impact of EV charging on the power grid. This paper investigates an EV charging scheduling problem to reduce the charging cost while shaving the peak charging load, under unknown future information about EVs, such as arrival time, departure time, and charging demand. First, we formulate an EV charging problem to minimize the electricity bill of the EV fleet and study the EV charging problem in an online setting without knowing future information. We develop an actor-critic learning-based smart charging algorithm (SCA) to schedule the EV charging against the uncertainties in EV charging behaviors. The SCA learns an optimal EV charging strategy with continuous charging actions instead of a discrete approximation of charging. We further develop a more computationally efficient customized actor-critic learning charging algorithm (CALC) by reducing the state dimension and thus improving the computational efficiency. Finally, simulation results show that our proposed SCA can reduce EVs' expected cost by 24.03%, 21.49%, and 13.80% compared with the Eagerly Charging Algorithm, the Online Charging Algorithm, and the RL-based Adaptive Energy Management Algorithm, respectively. CALC is more computationally efficient, and its performance is close to that of SCA with only a gap of 5.56% in the cost.
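The objective such an actor-critic agent optimizes can be sketched as a per-step reward combining the electricity bill with a peak-load penalty; the continuous action is the charging power. The function below is an invented illustration of that shape, not the paper's exact formulation:

```python
def charging_reward(charge_kw, price, base_load_kw, peak_kw, peak_penalty=0.5):
    """Negative cost: electricity bill plus a penalty for exceeding the peak."""
    bill = price * charge_kw
    overshoot = max(0.0, base_load_kw + charge_kw - peak_kw)
    return -(bill + peak_penalty * overshoot)

# Charging 10 kW at $0.20/kWh pushes a 95 kW base load 5 kW past a 100 kW peak.
r = charging_reward(charge_kw=10.0, price=0.2, base_load_kw=95.0, peak_kw=100.0)
print(r)   # -(2.0 + 0.5 * 5.0) = -4.5
```

Because `charge_kw` is a real number rather than a choice from a fixed menu of rates, the actor can learn arbitrarily fine charging schedules, which is the advantage of continuous actions over a discrete approximation.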