Article type: Research article
With the advancement of autonomous vehicle technologies, decision-making in complex traffic scenarios has become a significant challenge. This study models abnormal driver behaviors—such as sudden lane changes, unusual speeds, and erratic reactions—in the SUMO simulation environment. To improve autonomous vehicles' lane-changing decisions, a Deep Q-Network (DQN) reinforcement learning agent was trained. The simulations included various driver types: normal, aggressive, overly cautious, and unpredictable. Results indicate that the lane-changing success rate increased from 40% in the initial episodes to 80% in the final episodes, while the collision rate dropped from 25% to below 10%. Rewards were defined based on speed (above 20 km/h: +10), lane position (center lane: +15), and collisions (-50). Cumulative rewards showed high variance in the early episodes and stabilized as learning progressed, reflecting the challenges of reinforcement learning in dynamic and unpredictable environments. Analysis reveals that the learning agent's performance remains unstable in unexpected situations, suggesting a need for further optimization. The study also proposes that more advanced methods, such as distributional reinforcement learning or integrating driver behavior prediction models, could improve decision-making. Ultimately, this research highlights the importance of more accurate modeling of real-world traffic conditions and of hybrid approaches to ensure learning stability in autonomous vehicles.
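The reward scheme stated in the abstract (speed above 20 km/h: +10, center lane: +15, collision: -50) can be sketched as a simple reward function. This is a minimal illustration, not the authors' implementation; the `State` structure, field names, and the definition of "center lane" as the middle lane index are assumptions.

```python
# Hedged sketch of the reward terms described in the abstract.
# State fields and the center-lane convention are assumed, not from the paper.
from dataclasses import dataclass

@dataclass
class State:
    speed_kmh: float   # ego vehicle speed in km/h
    lane_index: int    # 0 = rightmost lane (assumed convention)
    num_lanes: int     # total number of lanes on the road
    collided: bool     # collision flag reported by the simulator

def reward(s: State) -> float:
    """Reward terms from the abstract:
    speed above 20 km/h: +10, center lane: +15, collision: -50."""
    if s.collided:
        return -50.0
    r = 0.0
    if s.speed_kmh > 20.0:
        r += 10.0
    if s.lane_index == s.num_lanes // 2:  # center lane (assumed definition)
        r += 15.0
    return r

print(reward(State(30.0, 1, 3, False)))  # speed bonus + center-lane bonus: 25.0
```

In a DQN training loop this function would be evaluated once per simulation step, with the terminal collision penalty ending the episode.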