Application of Reinforcement Learning in Adaptive Traffic Management
Keywords:
Reinforcement Learning, Deep Q-Network, Adaptive Traffic Management, Urban Mobility, Traffic Optimization

Abstract
Urban mobility systems are increasingly strained by rising population densities, unpredictable traffic fluctuations, and the limitations of traditional fixed-time signal control. Reinforcement Learning (RL) has emerged as a promising paradigm that enables traffic signals to learn adaptive strategies from real-time feedback. This study investigates the design, implementation, and evaluation of an RL-based adaptive traffic management system built around a Deep Q-Network (DQN) agent trained to optimize signal timings under dynamic vehicular loads. A simulated environment was constructed in SUMO (Simulation of Urban MObility) to replicate multi-lane intersections, stochastic vehicle arrivals, lane constraints, and peak-hour surges. The agent's reward structure was formulated to minimize queue lengths, reduce average waiting time, and increase traffic throughput. Comparative evaluation against fixed-time and actuated control systems revealed substantial improvements: average waiting time decreased by 34 percent, queue lengths by 28 percent, and intersection utilization increased by 22 percent. Real-world traffic profiles from Bengaluru and Pune were integrated to enhance environmental realism. The findings indicate that RL-based adaptive systems can outperform conventional controllers and, when integrated with sensor networks and vehicular communication infrastructure, offer scalable, city-wide traffic-management optimization.
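The reward structure described above combines three signals: a penalty on queue length, a penalty on waiting time, and a bonus for throughput. A minimal Python sketch of such a per-step reward is shown below; the function name, the coefficients alpha, beta, and gamma, and their default values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a per-step reward for an adaptive signal agent,
# following the abstract's three objectives: shorter queues, lower average
# waiting time, higher throughput. Coefficients are hypothetical.

def compute_reward(queue_lengths, waiting_times, vehicles_cleared,
                   alpha=1.0, beta=0.5, gamma=2.0):
    """Scalar reward for one control step at an intersection.

    queue_lengths    -- vehicles queued per incoming lane
    waiting_times    -- accumulated waiting time (s) per queued vehicle
    vehicles_cleared -- vehicles that crossed the stop line this step
    """
    total_queue = sum(queue_lengths)
    avg_wait = sum(waiting_times) / len(waiting_times) if waiting_times else 0.0
    # Penalize congestion and delay; reward vehicles actually served.
    return -alpha * total_queue - beta * avg_wait + gamma * vehicles_cleared
```

In a SUMO-based setup, the queue and waiting-time inputs would typically come from the TraCI interface at each decision step; the DQN agent then learns phase selections that maximize the discounted sum of these rewards.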
License
Copyright (c) 2023 VW Applied Sciences

This work is licensed under a Creative Commons Attribution 4.0 International License.