| dc.contributor.author | Duque Domingo, Jaime | |
| dc.contributor.author | Caccavale, Riccardo | |
| dc.contributor.author | Finzi, Alberto | |
| dc.contributor.author | Zalama Casanova, Eduardo | |
| dc.contributor.author | Gómez García-Bermejo, Jaime | |
| dc.date.accessioned | 2025-11-06T07:35:07Z | |
| dc.date.available | 2025-11-06T07:35:07Z | |
| dc.date.issued | 2025 | |
| dc.identifier.citation | Journal of Intelligent Manufacturing, 2025. | es |
| dc.identifier.issn | 0956-5515 | es |
| dc.identifier.uri | https://uvadoc.uva.es/handle/10324/79353 | |
| dc.description | Scientific Production | es |
| dc.description.abstract | We present a framework that enables a collaborative robot to rapidly replicate structured manipulation tasks demonstrated by a human operator through a single 3D video recording. The system combines object segmentation with hand and gaze tracking to analyze and interpret the video demonstrations. The manipulation task is decomposed into primitive actions that leverage 3D features, including the proximity of the hand trajectory to objects, the speed of the trajectory, and the user’s gaze. In line with the One-Shot Learning paradigm, we introduce a novel object segmentation method called SAM+CP-CVV, ensuring that objects appearing in the demonstration require labeling only once. Segmented manipulation primitives are also associated with object-related data, facilitating the implementation of the corresponding robotic actions. Once these action primitives are extracted and recorded, they can be recombined to generate a structured robotic task ready for execution. This framework is particularly well-suited for flexible manufacturing environments, where operators can rapidly and incrementally instruct collaborative robots through video-demonstrated tasks. We discuss the approach applied to heterogeneous manipulation tasks and show that the proposed method can be transferred to different types of robots and manipulation scenarios. | es |
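The abstract describes decomposing the demonstration into action primitives from 3D cues: the hand trajectory's proximity to segmented objects, its speed, and the user's gaze. As an illustration only, below is a minimal sketch of that kind of cue-based segmentation; the function name, thresholds, and two-label scheme are assumptions for the sketch, not the paper's SAM+CP-CVV pipeline or actual code.

```python
# Hypothetical sketch (not the authors' implementation): label each frame of a
# demonstrated 3D hand trajectory using two of the cues named in the abstract,
# hand-object proximity and hand speed. All names and thresholds are assumed.
import numpy as np

def segment_primitives(hand_xyz, timestamps, object_centroids,
                       near_thresh=0.08, slow_thresh=0.05):
    """Label each frame of a 3D hand trajectory as part of a primitive.

    hand_xyz:         (T, 3) hand positions in meters
    timestamps:       (T,) seconds
    object_centroids: dict name -> (3,) array, segmented object centers
    near_thresh:      hand-object distance (m) counted as "at object"
    slow_thresh:      speed (m/s) below which the hand is "dwelling"
    """
    # Per-frame speed from finite differences, padded back to length T.
    speeds = np.linalg.norm(np.diff(hand_xyz, axis=0), axis=1) / np.diff(timestamps)
    speeds = np.append(speeds, speeds[-1])

    labels = []
    for pos, speed in zip(hand_xyz, speeds):
        # Nearest segmented object to the hand at this frame.
        name, dist = min(
            ((n, np.linalg.norm(pos - c)) for n, c in object_centroids.items()),
            key=lambda item: item[1])
        if dist < near_thresh and speed < slow_thresh:
            labels.append(("manipulate", name))   # dwelling at an object
        else:
            labels.append(("transport", None))    # moving between objects
    return labels
```

Runs of identical labels would then form one primitive segment (e.g., a manipulation at a given object followed by a transport), which, per the abstract, is associated with object data and recombined into an executable robot task.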
| dc.format.mimetype | application/pdf | es |
| dc.language.iso | eng | es |
| dc.publisher | Springer Nature | es |
| dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
| dc.subject | One-shot learning | es |
| dc.subject | Robot learning | es |
| dc.subject | Learning from demonstration | es |
| dc.subject | Activity segmentation | es |
| dc.title | One-shot learning for rapid generation of structured robotic manipulation tasks from 3D video demonstrations | es |
| dc.type | info:eu-repo/semantics/article | es |
| dc.rights.holder | © The Author(s) 2025 | es |
| dc.identifier.doi | 10.1007/s10845-025-02673-7 | es |
| dc.relation.publisherversion | https://link.springer.com/article/10.1007/s10845-025-02673-7 | es |
| dc.identifier.publicationtitle | Journal of Intelligent Manufacturing | es |
| dc.peerreviewed | Yes | es |
| dc.description.project | Ministerio de Ciencia e Innovación (MCIN) / Agencia Estatal de Investigación (AEI): PID2021-123020OB-I00 (MCIN/AEI/10.13039/501100011033/FEDER, UE) | es |
| dc.description.project | Consejería de Familia of the Junta de Castilla y León: EIAROB | es |
| dc.description.project | EU Horizon INVERSE: 101136067 | es |
| dc.description.project | EU Horizon Melody: P2022XALNS | es |
| dc.description.project | EU Horizon euROBIN: 101070596 | |
| dc.description.project | Ministero dell'Università e della Ricerca: PE15 ASI/MUR | |
| dc.description.project | Open access funding provided by FEDER European Funds and the Junta de Castilla y León under the Research and Innovation Strategy for Smart Specialization (RIS3) of Castilla y León 2021-2027. | |
| dc.identifier.essn | 1572-8145 | es |
| dc.rights | Attribution 4.0 International | * |
| dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
| dc.subject.unesco | 1203 Computer Sciences | |
| dc.subject.unesco | 1203.04 Artificial Intelligence | |