Show simple item record

dc.contributor.author  Nnadozie, Emmanuel Chibuikem
dc.contributor.author  Casaseca de la Higuera, Juan Pablo
dc.contributor.author  Iloanusi, Ogechukwu
dc.contributor.author  Ani, Ozoemena
dc.contributor.author  Alberola López, Carlos
dc.date.accessioned  2024-03-14T12:36:17Z
dc.date.available  2024-03-14T12:36:17Z
dc.date.issued  2023
dc.identifier.citation  Multimedia Tools and Applications, 2023.
dc.identifier.issn  1380-7501
dc.identifier.uri  https://uvadoc.uva.es/handle/10324/66697
dc.description  Producción Científica
dc.description.abstract  Deep learning-based object detection models have become a preferred choice for crop detection tasks in crop monitoring activities due to their high accuracy and generalization capabilities. However, their high computational demand and large memory footprint pose a challenge for use on mobile embedded devices deployed in crop monitoring settings. Various approaches have been taken to minimize the computational cost and reduce the size of object detection models, such as channel and layer pruning, detection head searching, backbone optimization, etc. In this work, we approached computational lightening, model compression, and speed improvement by discarding one or more of the three detection scales of the YOLOv5 object detection model. Thus, we derived up to five separate fast and light models, each with only one or two detection scales. To evaluate the new models for a real crop monitoring use case, the models were deployed on NVIDIA Jetson Nano and NVIDIA Jetson Orin devices. The new models achieved up to a 21.4% reduction in giga floating-point operations per second (GFLOPS), a 31.9% reduction in number of parameters, a 30.8% reduction in model size, and a 28.1% increase in inference speed, with only a small average accuracy drop of 3.6%. These new models are suitable for crop detection tasks since the crops are usually of similar sizes due to the high likelihood of being in the same growth stage, thus making it sufficient to detect the crops with just one or two detection scales.
dc.format.mimetype  application/pdf
dc.language.iso  eng
dc.publisher  Springer
dc.rights.accessRights  info:eu-repo/semantics/openAccess
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/
dc.subject.classification  Object detection
dc.subject.classification  Model simplification
dc.subject.classification  Crop monitoring
dc.subject.classification  YOLOv5
dc.subject.classification  Deep learning
dc.title  Simplifying YOLOv5 for deployment in a real crop monitoring setting
dc.type  info:eu-repo/semantics/article
dc.rights.holder  © 2023 The Author(s)
dc.identifier.doi  10.1007/s11042-023-17435-x
dc.relation.publisherversion  https://link.springer.com/article/10.1007/s11042-023-17435-x
dc.identifier.publicationtitle  Multimedia Tools and Applications
dc.peerreviewed  SI
dc.description.project  Tertiary Education Trust Fund - TETFUND NRF 2020 with grant number TETF/ES/DR&D-CE/NRF2020/SETI/88/VOL.1
dc.description.project  Agencia Estatal de Investigación (grants PID2020-115339RB-I00 and CPP2021-008880)
dc.description.project  European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 101008297
dc.description.project  Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under the Operational Programme 2014ES16RFOP009 FEDER 2014-2020 de Castilla y León, Actuación 20007-CL - Apoyo Consorcio BUCLE
dc.relation.projectID  info:eu-repo/grantAgreement/EC/H2020/101008297
dc.identifier.essn  1573-7721
dc.rights  Attribution 4.0 International
dc.type.hasVersion  info:eu-repo/semantics/publishedVersion
dc.subject.unesco  33 Ciencias Tecnológicas
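
The abstract's approach, dropping one or two of YOLOv5's three detection scales, can be illustrated with a back-of-the-envelope sketch (not code from the paper). YOLOv5 predicts at strides 8, 16, and 32, with 3 anchor boxes per grid cell, so each removed scale eliminates that stride's entire prediction grid; the helper below, a hypothetical illustration, counts predictions for a square input:

```python
# Sketch (not the authors' implementation): how many anchor predictions
# a YOLOv5-style detector emits at each subset of its detection scales.
# YOLOv5 uses three output strides (8, 16, 32) with 3 anchors per cell.

ANCHORS_PER_CELL = 3

def predictions(img_size: int, strides=(8, 16, 32)) -> int:
    """Total anchor predictions for a square input at the given strides."""
    return sum((img_size // s) ** 2 * ANCHORS_PER_CELL for s in strides)

full = predictions(640)               # all three scales: 25200
medium_only = predictions(640, (16,)) # single mid-resolution scale: 4800
print(full, medium_only)
```

For similarly sized crops (same growth stage), most objects fall within one scale's anchor range, which is why a single-scale variant can retain accuracy while shedding the other grids' compute.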


Files in this item


This item appears in the following collection(s)
