Show simple item record

dc.contributor.author: Ferens Michalek, Mieszko Jan
dc.contributor.author: Hortelano Haro, Diego
dc.contributor.author: Miguel Jiménez, Ignacio de
dc.contributor.author: Durán Barroso, Ramón José
dc.contributor.author: Aguado Manzano, Juan Carlos
dc.contributor.author: Ruiz Pérez, Lidia
dc.contributor.author: Merayo Álvarez, Noemí
dc.contributor.author: Fernández Reguero, Patricia
dc.contributor.author: Lorenzo Toledo, Rubén Mateo
dc.contributor.author: Abril Domingo, Evaristo José
dc.date.accessioned: 2023-10-26T10:05:40Z
dc.date.available: 2023-10-26T10:05:40Z
dc.date.issued: 2022
dc.identifier.citation: 2022 International Balkan Conference on Communications and Networking (BalkanCom), Sarajevo, Bosnia and Herzegovina, 2022, pp. 31-35
dc.identifier.uri: https://uvadoc.uva.es/handle/10324/62366
dc.description: Producción Científica
dc.description.abstract: An observable trend in recent years is the increasing demand for more complex services designed to be used with portable or automotive embedded devices. The problem is that these devices may lack the computational resources necessary to comply with service requirements. To solve it, cloud and edge computing, and in particular, the recent multi-access edge computing (MEC) paradigm, have been proposed. By offloading the processing of computational tasks from devices or vehicles to an external network, a larger amount of computational resources, placed in different locations, becomes accessible. However, this in turn creates the issue of deciding where each task should be executed. In this paper, we model the problem of computation offloading of vehicular applications to solve it using deep reinforcement learning (DRL) and evaluate the performance of different DRL algorithms and heuristics, showing the advantages of the former methods. Moreover, the impact of two scheduling techniques in computing nodes and of two reward strategies in the DRL methods is also analyzed and discussed.
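As a rough illustration of the offloading decision the abstract describes (where to execute each task: on the device, at an edge node, or in the cloud), here is a minimal tabular Q-learning sketch. All node types, latency values, and hyperparameters are hypothetical toy assumptions, not taken from the paper, which uses deep reinforcement learning and a far richer vehicular model.

```python
import random

# Hypothetical single-step offloading environment: a task of a given size
# bucket can run locally, on an edge node, or in the cloud. Latencies (ms)
# are made-up constants standing in for processing plus transmission delay.
LATENCY = {
    0: [10.0, 8.0, 20.0],    # small task: edge is cheapest
    1: [80.0, 30.0, 35.0],   # medium task: edge is cheapest
    2: [300.0, 90.0, 60.0],  # large task: cloud compute wins
}
ACTIONS = ["local", "edge", "cloud"]

def train(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """Tabular Q-learning over one-step offloading decisions, reward = -latency."""
    rng = random.Random(seed)
    q = {s: [0.0] * len(ACTIONS) for s in LATENCY}
    for _ in range(episodes):
        s = rng.choice(list(LATENCY))  # a task of a random size arrives
        if rng.random() < eps:         # epsilon-greedy exploration
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        reward = -LATENCY[s][a]
        # One-step episode (no successor state), so the TD target is the reward.
        q[s][a] += alpha * (reward - q[s][a])
    return q

def policy(q):
    """Greedy offloading policy: pick the highest-valued node per size bucket."""
    return {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])] for s in q}
```

In this toy setting, `policy(train())` maps each size bucket to its lowest-latency node; the paper's DRL agents play the analogous role over a realistic vehicular MEC state space.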
dc.format.extent: 5 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.classification: Deep Reinforcement Learning
dc.subject.classification: Vehicular Applications
dc.subject.classification: Computation Offloading
dc.subject.classification: Edge Computing
dc.title: Deep Reinforcement Learning Applied to Computation Offloading of Vehicular Applications: A Comparison
dc.type: info:eu-repo/semantics/conferenceObject
dc.identifier.doi: 10.1109/BalkanCom55633.2022.9900545
dc.relation.publisherversion: https://ieeexplore.ieee.org/abstract/document/9900545
dc.title.event: 2022 International Balkan Conference on Communications and Networking (BalkanCom)
dc.description.project: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20)
dc.description.project: Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (Project PID2020-112675RB-C42 funded by MCIN/AEI/10.13039/501100011033, and RED2018-102585-T); EU H2020 GA no. 856967.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.type.hasVersion: info:eu-repo/semantics/acceptedVersion


Files in this item


This item appears in the following collection(s)
