Show simple item record
dc.contributor.author | Veganzones, Miguel | |
dc.contributor.author | Cisnal De La Rica, Ana | |
dc.contributor.author | Fuente López, Eusebio de la | |
dc.contributor.author | Fraile Marinero, Juan Carlos | |
dc.date.accessioned | 2025-01-13T09:19:20Z | |
dc.date.available | 2025-01-13T09:19:20Z | |
dc.date.issued | 2024-12-05 | |
dc.identifier.citation | Applied Sciences, December 2024, vol. 14, n. 23, p. 11357 | es |
dc.identifier.issn | 2076-3417 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/73711 | |
dc.description.abstract | Augmented reality applications involving human interaction with virtual objects often rely on segmentation-based hand detection techniques. Semantic segmentation can then be enhanced with instance-specific information to model complex interactions between objects, but extracting such information typically increases the computational load significantly. This study proposes a training strategy that enables conventional semantic segmentation networks to preserve some instance information during inference. This is accomplished by introducing pixel weight maps into the loss calculation, increasing the importance of boundary pixels between instances. We compare two common fully convolutional network (FCN) architectures, U-Net and ResNet, and fine-tune the fittest to improve segmentation results. Although the resulting model does not reach state-of-the-art segmentation performance on the EgoHands dataset, it preserves some instance information with no computational overhead. As expected, degraded segmentations are a necessary trade-off to preserve boundaries when instances are close together. This strategy allows approximating instance segmentation in real time using non-specialized hardware, obtaining a unique blob for an instance with an intersection over union greater than 50% in 79% of the instances in our test set. By introducing per-pixel weight maps during training, a simple FCN, typically used for semantic segmentation, has shown promising instance segmentation results for lightweight applications. | es |
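The abstract describes weighting boundary pixels more heavily in the loss so a plain semantic segmentation network keeps instances separated. The paper's exact weighting function is not given in the abstract; the sketch below assumes a simplified, U-Net-style distance-based map (weight decays with distance to the nearest boundary between differing instance labels, with hypothetical parameters `w0` and `sigma`) and a plain weighted pixel-wise cross-entropy, using NumPy for illustration only.

```python
import numpy as np

def boundary_weight_map(instance_mask, w0=10.0, sigma=5.0):
    """Per-pixel weight map emphasising pixels near instance boundaries.

    Simplified from the U-Net weighting scheme: instead of using the two
    nearest instances, each pixel is weighted by its distance to the
    nearest boundary between differing instance labels.
    """
    m = instance_mask
    boundary = np.zeros(m.shape, dtype=bool)
    boundary[:-1, :] |= m[:-1, :] != m[1:, :]   # vertical label changes
    boundary[1:, :]  |= m[1:, :] != m[:-1, :]
    boundary[:, :-1] |= m[:, :-1] != m[:, 1:]   # horizontal label changes
    boundary[:, 1:]  |= m[:, 1:] != m[:, :-1]
    ys, xs = np.nonzero(boundary)
    if len(ys) == 0:                            # single instance: flat weights
        return np.ones(m.shape, dtype=float)
    yy, xx = np.indices(m.shape)
    # Brute-force distance from every pixel to the nearest boundary pixel.
    d = np.min(np.hypot(yy[..., None] - ys, xx[..., None] - xs), axis=-1)
    return 1.0 + w0 * np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def weighted_cross_entropy(probs, target, weights, eps=1e-7):
    """Pixel-wise cross-entropy scaled by the weight map.

    probs:   (C, H, W) per-class probabilities (already softmaxed)
    target:  (H, W) integer class labels
    weights: (H, W) per-pixel loss weights
    """
    p_correct = np.take_along_axis(probs, target[None], axis=0)[0]
    ce = -np.log(np.clip(p_correct, eps, 1.0))
    return float(np.sum(weights * ce) / np.sum(weights))
```

Because the map only rescales an existing per-pixel loss, it changes nothing at inference time, which is consistent with the claim of zero computational overhead when deploying the trained network.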
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | MDPI | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject.classification | computer vision | es |
dc.subject.classification | convolutional neural networks | es |
dc.subject.classification | deep learning | es |
dc.subject.classification | hand segmentation | es |
dc.subject.classification | semantic segmentation | es |
dc.title | Training Fully Convolutional Neural Networks for Lightweight, Non-Critical Instance Segmentation Applications | es |
dc.type | info:eu-repo/semantics/article | es |
dc.identifier.doi | 10.3390/app142311357 | es |
dc.relation.publisherversion | https://www.mdpi.com/2076-3417/14/23/11357 | es |
dc.identifier.publicationfirstpage | 11357 | es |
dc.identifier.publicationissue | 23 | es |
dc.identifier.publicationtitle | Applied Sciences | es |
dc.identifier.publicationvolume | 14 | es |
dc.peerreviewed | Yes | es |
dc.identifier.essn | 2076-3417 | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
Files in this item
This item appears in the following collection(s)
The item license is described as Attribution-NonCommercial-NoDerivatives 4.0 International