dc.contributor.author | Li, Peisong | |
dc.contributor.author | Wang, Xinheng | |
dc.contributor.author | Li, Changle | |
dc.contributor.author | Iqbal, Muddesar | |
dc.contributor.author | Al-Dulaimi, Anwer | |
dc.contributor.author | I, Chih-Lin | |
dc.contributor.author | Casaseca de la Higuera, Juan Pablo | |
dc.date.accessioned | 2025-10-16T10:14:21Z | |
dc.date.available | 2025-10-16T10:14:21Z | |
dc.date.issued | 2025 | |
dc.identifier.citation | IEEE Transactions on Intelligent Transportation Systems, 2025, pp. 1-30. | es
dc.identifier.issn | 1524-9050 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/78725 | |
dc.description | Scientific Production | es
dc.description.abstract | With the development of intelligent transportation systems, vehicular edge computing (VEC) has played a pivotal role by integrating computation, storage, and analytics closer to vehicles. VEC represents a paradigm shift towards real-time data processing and intelligent decision-making, overcoming challenges associated with latency and resource constraints. In VEC scenarios, the efficient scheduling and allocation of computing resources are fundamental research areas, enabling real-time processing of vehicular tasks. This paper provides a comprehensive review of the latest research on Deep Reinforcement Learning (DRL)-based task scheduling and resource allocation in VEC environments. First, the paper outlines the development of VEC and introduces the core concepts of DRL, highlighting their growing importance in the dynamic VEC landscape. Second, the state-of-the-art research on DRL-based task scheduling and resource allocation is categorized, reviewed, and discussed. Finally, the paper discusses current challenges in the field, offering insights into the promising future of VEC applications within the realm of intelligent transportation systems. | es
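To give a flavour of the DRL-based offloading decisions the abstract refers to, below is a minimal, purely illustrative sketch: a toy tabular Q-learning agent that learns whether a vehicle should process a task locally or offload it to an edge server. The state buckets, latency model, and all parameter values are assumptions made for this example and are not taken from the surveyed paper.

```python
# Illustrative sketch only: toy Q-learning for a hypothetical offload-or-not decision.
import random

ACTIONS = ["local", "offload"]          # 0: compute on vehicle, 1: send to edge
TASK_SIZES = [1, 2, 3]                  # coarse task-size buckets (assumed)
CHANNEL_QUALITY = [0, 1]                # 0: poor vehicle-to-edge link, 1: good link (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

# Q-table indexed by (task_size, channel_quality, action)
Q = {(s, c, a): 0.0
     for s in TASK_SIZES for c in CHANNEL_QUALITY for a in range(len(ACTIONS))}

def latency(task_size, channel, action):
    """Hypothetical latency model: local compute is slow for large tasks,
    offloading is fast only when the wireless link is good."""
    if ACTIONS[action] == "local":
        return 2.0 * task_size
    transmit = task_size * (3.0 if channel == 0 else 0.5)
    edge_compute = 0.5 * task_size
    return transmit + edge_compute

def choose_action(state):
    """Epsilon-greedy policy over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(*state, a)])

for episode in range(5000):
    state = (random.choice(TASK_SIZES), random.choice(CHANNEL_QUALITY))
    action = choose_action(state)
    reward = -latency(*state, action)   # lower latency -> higher reward
    next_state = (random.choice(TASK_SIZES), random.choice(CHANNEL_QUALITY))
    best_next = max(Q[(*next_state, a)] for a in range(len(ACTIONS)))
    # One-step Q-learning update
    Q[(*state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(*state, action)])

# Inspect the learned offloading policy for each state bucket
for s in TASK_SIZES:
    for c in CHANNEL_QUALITY:
        best = max(range(len(ACTIONS)), key=lambda a: Q[(s, c, a)])
        print(f"task={s}, channel={c} -> {ACTIONS[best]}")
```

The surveyed literature typically replaces this tabular agent with deep networks (e.g., DQN or actor-critic methods) and far richer state and action spaces; the sketch is only meant to show the basic agent-environment loop behind DRL-based task scheduling.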
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | IEEE | es |
dc.rights.accessRights | info:eu-repo/semantics/restrictedAccess | es |
dc.subject | Vehicular edge computing | es
dc.subject | Deep reinforcement learning | es
dc.subject | Task scheduling | es
dc.subject | Resource allocation | es
dc.title | Deep Reinforcement Learning-Based Task Scheduling and Resource Allocation for Vehicular Edge Computing: A Survey | es |
dc.type | info:eu-repo/semantics/article | es |
dc.rights.holder | © 2025 IEEE. All rights reserved | es
dc.identifier.doi | 10.1109/TITS.2025.3607910 | es |
dc.relation.publisherversion | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11173255 | es |
dc.identifier.publicationfirstpage | 1 | es |
dc.identifier.publicationlastpage | 30 | es |
dc.identifier.publicationtitle | IEEE Transactions on Intelligent Transportation Systems | es |
dc.peerreviewed | Yes | es
dc.description.project | Prince Sultan Defence Studies and Research Centre (PSDSRC): PID000085_01_04 and PID-000085_01_03 | es |
dc.description.project | National Natural Science Foundation of China: 52175030 and 62231020 | es |
dc.description.project | Innovation Capability Support Program of Shaanxi: 2024RS-CXTD-0 | es |
dc.description.project | Technology Innovation Leading Program of Shaanxi: 2023KXJ-116 | es |
dc.description.project | European Union Horizon 2020 Research and Innovation Program: Marie Sklodowska-Curie (101008297) | es |
dc.identifier.essn | 1558-0016 | es |
dc.type.hasVersion | info:eu-repo/semantics/acceptedVersion | es |