Document Type

Article

Publication Date

2026

DOI

10.1016/j.trip.2025.101795

Publication Title

Transportation Research Interdisciplinary Perspectives

Volume

36

Pages

101795

Abstract

As electric vehicles (EVs) gain popularity, efficient routing and charging solutions remain challenging due to time-dependent travel variability, sparse charging infrastructure, and heterogeneous user preferences. To address these challenges, this paper introduces a decision-support system that integrates three complementary methods: Temporal Multimodal Multivariate Learning (TMML) for real-time characterization of travel time uncertainty, Time-Dependent Shortest Path (TDSP) for reliability-aware route choice, and Deep Q-Network (DQN) reinforcement learning for adaptive charging decisions in sparse infrastructure environments. TMML updates link-level travel time distributions in real time through Bayesian inference with cluster-based propagation, reducing uncertainties across the network. TDSP leverages these updated distributions to estimate remaining travel time and reliability scores for route planning. DQN learns optimal charging policies by determining when to charge, how much to charge (partial charging at 25%, 50%, 75%, or 100% levels), and which route to take based on battery state, traffic patterns, and available stationary charging stations (SCSs) and mobile charging infrastructure, including Mobile Energy Distributors (MEDs) and Dynamic Inductive Charging (DIC). DQN training uses simulation-based learning from actual traffic patterns of the Washington, DC metropolitan region, allowing the agent to explore charging-route pairs and discover efficient solutions through trial and error. To accommodate heterogeneous user preferences, the system calculates multiple Pareto-optimal solutions that trade off travel time, charging cost, battery safety, and route reliability, enabling users to select alternatives that match their current priorities without specifying preference weights in advance.
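The abstract describes the agent's decision as a pairing of a discrete charging level (25%, 50%, 75%, or 100%, or no charge) with a route choice. A minimal sketch of such a discrete action space with epsilon-greedy selection is shown below; the names `CHARGE_LEVELS`, `N_ROUTES`, and `epsilon_greedy` are illustrative assumptions, not identifiers from the paper.

```python
import random

# Hypothetical discretization of the action space described in the abstract:
# a charging level (0 = skip charging, else a partial-charge percentage)
# paired with one of several candidate routes.
CHARGE_LEVELS = [0, 25, 50, 75, 100]
N_ROUTES = 3  # assumed number of candidate routes per decision point
ACTIONS = [(charge, route) for charge in CHARGE_LEVELS
           for route in range(N_ROUTES)]

def epsilon_greedy(q_values, epsilon, rng):
    """Standard DQN-style exploration: random action with probability
    epsilon, otherwise the action with the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

rng = random.Random(0)
q = [0.0] * len(ACTIONS)          # placeholder Q-values for one state
a = epsilon_greedy(q, 0.1, rng)   # index into ACTIONS
charge_pct, route = ACTIONS[a]
```

In the paper's setting, the Q-values for a state (battery level, time of day, nearby SCS/MED/DIC availability) would come from the trained network; the sketch only shows how the joint charge-and-route action enumeration and exploration step fit together.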

Rights

© 2026 The Authors.

This is an open access article under the Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Data Availability

Article states: "Data will be made available on request."

ORCID

0009-0008-3035-1625 (Ghahfarokhi), 0000-0002-1490-5404 (Park)

Original Publication Citation

Ghahfarokhi, M. F., Park, H., Pandey, V., & Yoon, G. (2026). Adaptive electric vehicle routing and charging with deep reinforcement learning. Transportation Research Interdisciplinary Perspectives, 36, Article 101795. https://doi.org/10.1016/j.trip.2025.101795
