A Multi-Objective Optimal Control Method for Navigating Connected and Automated Vehicles at Signalized Intersections Based on Reinforcement Learning

The emergence and application of connected and automated vehicles (CAVs) have played a positive role in improving the efficiency of urban transportation and achieving sustainable development. To improve traffic efficiency at signalized intersections in a connected environment while simultaneously reducing energy consumption and ensuring a more comfortable driving experience, this study investigates a flexible, real-time control method for navigating CAVs through signalized intersections using reinforcement learning (RL). First, the control of CAVs at intersections is formulated as a Markov Decision Process (MDP) based on the vehicles' motion states and the intersection environment. A comprehensive reward function is then designed that accounts for energy consumption, efficiency, comfort, and safety. On top of the established environment, a control algorithm for CAVs is built using the twin delayed deep deterministic policy gradient (TD3) algorithm. Finally, a simulation study is conducted in SUMO, with Lankershim Boulevard as the research scenario. Results indicate that the proposed method yields a 13.77% reduction in energy consumption and an 18.26% decrease in travel time, and vehicles controlled by the proposed method exhibit smoother driving trajectories.
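As an illustration of the multi-objective reward described in the abstract, the sketch below combines energy, efficiency, comfort, and safety terms into a single weighted reward for a CAV approaching a signalized intersection. This is a minimal sketch, not the authors' exact formulation: the energy surrogate, term shapes, thresholds, and weights are all assumptions introduced for illustration.

```python
# Hedged sketch of a weighted multi-objective reward for CAV control at a
# signalized intersection. All weights, thresholds, and the energy surrogate
# are illustrative assumptions, not the paper's exact formulation.

def energy_surrogate(v, a):
    """Rough stand-in for instantaneous energy use (assumption: grows with
    speed squared and penalizes positive acceleration)."""
    return 0.01 * v ** 2 + 0.5 * max(a, 0.0) ** 2

def reward(v, a, prev_a, gap, v_limit, dt=0.1,
           w_energy=1.0, w_eff=1.0, w_comf=0.5, w_safe=5.0):
    """Return a scalar reward from the four objectives named in the abstract."""
    r_energy = -energy_surrogate(v, a)        # energy: penalize consumption
    r_eff    = -(v_limit - v) / v_limit       # efficiency: penalize slow travel
    r_comf   = -abs(a - prev_a) / dt          # comfort: penalize jerk
    r_safe   = -1.0 if gap < 2.0 else 0.0     # safety: penalize short headway
    return (w_energy * r_energy + w_eff * r_eff
            + w_comf * r_comf + w_safe * r_safe)

# Example: a vehicle cruising at 12 m/s with mild acceleration and a safe gap.
print(reward(v=12.0, a=0.3, prev_a=0.2, gap=15.0, v_limit=13.9))
```

In a TD3-based setup such as the one described, a reward of this form would be evaluated at every simulation step and returned to the agent alongside the next observation, so the relative weights directly shape the trade-off between travel time, energy use, and ride comfort.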

» Author: Han Jiang

» Reference: doi: 10.3390/app14073124

» Publication Date: 08/04/2024

