dc.contributor.author | Nnadozie, Emmanuel Chibuikem | |
dc.contributor.author | Casaseca de la Higuera, Juan Pablo | |
dc.contributor.author | Iloanusi, Ogechukwu | |
dc.contributor.author | Ani, Ozoemena | |
dc.contributor.author | Alberola López, Carlos | |
dc.date.accessioned | 2024-03-14T12:36:17Z | |
dc.date.available | 2024-03-14T12:36:17Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | Multimedia Tools and Applications, 2023. | es |
dc.identifier.issn | 1380-7501 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/66697 | |
dc.description | Producción Científica | es |
dc.description.abstract | Deep learning-based object detection models have become a preferred choice for crop detection tasks in crop monitoring activities due to their high accuracy and generalization capabilities. However, their high computational demand and large memory footprint pose a challenge for use on mobile embedded devices deployed in crop monitoring settings. Various approaches have been taken to minimize the computational cost and reduce the size of object detection models such as channel and layer pruning, detection head searching, backbone optimization, etc. In this work, we approached computational lightening, model compression, and speed improvement by discarding one or more of the three detection scales of the YOLOv5 object detection model. Thus, we derived up to five separate fast and light models, each with only one or two detection scales. To evaluate the new models for a real crop monitoring use case, the models were deployed on NVIDIA Jetson Nano and NVIDIA Jetson Orin devices. The new models achieved up to 21.4% reduction in giga floating-point operations per second (GFLOPS), 31.9% reduction in number of parameters, 30.8% reduction in model size, 28.1% increase in inference speed, with only a small average accuracy drop of 3.6%. These new models are suitable for crop detection tasks since the crops are usually of similar sizes due to the high likelihood of being in the same growth stage, thus, making it sufficient to detect the crops with just one or two detection scales. | es |
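The abstract's core idea, keeping only one or two of YOLOv5's three detection scales (which predict at strides 8, 16, and 32, commonly labeled P3/8, P4/16, and P5/32), can be illustrated by enumerating the candidate reduced models. This is a conceptual sketch, not the authors' implementation: simple combinatorics yields six one- or two-scale candidates, while the paper reports deriving up to five models.

```python
from itertools import combinations

# YOLOv5's three detection scales and their strides (standard YOLOv5 head).
SCALES = ("P3/8", "P4/16", "P5/32")

def reduced_variants(scales=SCALES):
    """Enumerate model variants that keep only one or two detection scales."""
    variants = []
    for k in (1, 2):
        variants.extend(combinations(scales, k))
    return variants

for v in reduced_variants():
    print(v)  # six candidates in total; the paper retains up to five of them
```

In practice, removing a scale means deleting its detection head (and any layers feeding only that head) from the model definition, which is what drives the reported reductions in GFLOPS, parameter count, and model size.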
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | Springer | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject.classification | Object detection | es |
dc.subject.classification | Model simplification | es |
dc.subject.classification | Crop monitoring | es |
dc.subject.classification | YOLOv5 | es |
dc.subject.classification | Deep learning | es |
dc.title | Simplifying YOLOv5 for deployment in a real crop monitoring setting | es |
dc.type | info:eu-repo/semantics/article | es |
dc.rights.holder | © 2023 The Author(s) | es |
dc.identifier.doi | 10.1007/s11042-023-17435-x | es |
dc.relation.publisherversion | https://link.springer.com/article/10.1007/s11042-023-17435-x | es |
dc.identifier.publicationtitle | Multimedia Tools and Applications | es |
dc.peerreviewed | SI | es |
dc.description.project | Tertiary Education Trust Fund - TETFUND NRF 2020 with grant number TETF/ES/DR&D-CE/NRF2020/SETI/88/VOL.1 | es |
dc.description.project | Agencia Estatal de Investigación (grant PID2020-115339RB-I00 and CPP2021-008880) | es |
dc.description.project | European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 101008297 | es |
dc.description.project | Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 de Castilla y León, Action: 20007-CL - Apoyo Consorcio BUCLE | es |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/101008297 | |
dc.identifier.essn | 1573-7721 | es |
dc.rights | Attribution 4.0 International | * |
dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
dc.subject.unesco | 33 Ciencias Tecnológicas | es |