Show simple item record
dc.contributor.author | Duque Domingo, Jaime | |
dc.contributor.author | Gómez García-Bermejo, Jaime | |
dc.contributor.author | Zalama Casanova, Eduardo | |
dc.date.accessioned | 2021-09-07T11:27:16Z | |
dc.date.available | 2021-09-07T11:27:16Z | |
dc.date.issued | 2020 | |
dc.identifier.citation | Frontiers in Neurorobotics, 2020, vol. 14, n. 34. 19 p. | es |
dc.identifier.issn | 1662-5218 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/48620 | |
dc.description | Producción Científica | es |
dc.description.abstract | When a robot interacts with a person, gaze control is very important for face-to-face communication. However, when a robot interacts with several people, neurorobotics plays an important role in determining which person to look at and which others to pay attention to. Several factors can influence the decision: who is speaking, who they are speaking to, where people are looking, whether a user wants to attract attention, etc. This article presents a novel method for deciding whom to pay attention to when a robot interacts with several people. The proposed method is based on a competitive network that receives different stimuli (gaze, speech, pose, hoarding of the conversation, habituation, etc.) which compete with each other to determine the focus of attention. The dynamic nature of this neural network allows a smooth transition of the focus of attention in response to a significant change in stimuli. A conversation is created between different participants, replicating human behavior in the robot. The method deals with the problem of several interlocutors appearing in and disappearing from the robot's visual field. A robotic head has been designed and built, and a virtual agent projected on the robot's face display has been integrated with the gaze control. Different experiments have been carried out with this robotic head integrated into a ROS architecture. The work presents the analysis of the method, how the system has been integrated with the robotic head, and the experiments and results obtained. | es |
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | Frontiers | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject.classification | Human-robot interaction | es |
dc.subject.classification | Interacción hombre-robot | es |
dc.subject.classification | Humanoid robots | es |
dc.subject.classification | Robots humanoides | es |
dc.subject.classification | Computer vision | es |
dc.subject.classification | Visión artificial | es |
dc.title | Gaze control of a robotic head for realistic interaction with humans | es |
dc.type | info:eu-repo/semantics/article | es |
dc.rights.holder | © 2020 Frontiers | es |
dc.identifier.doi | 10.3389/fnbot.2020.00034 | es |
dc.relation.publisherversion | https://www.frontiersin.org/articles/10.3389/fnbot.2020.00034/full | es |
dc.peerreviewed | SI | es |
dc.description.project | Ministerio de Ciencia, Innovación y Universidades (grant RTI2018-096652-B-I00) | es |
dc.description.project | Junta de Castilla y León - Fondo Europeo de Desarrollo Regional (grant VA233P18) | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
dc.subject.unesco | 1203.04 Inteligencia Artificial | es |
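The abstract above describes a competitive network in which stimuli (gaze, speech, pose, habituation, etc.) compete to select the focus of attention, with dynamics that yield smooth transitions between interlocutors. As an illustrative sketch only, not the authors' implementation, the idea can be modeled as a winner-take-all dynamic over per-person activations; all weights, parameter values, and function names below are hypothetical assumptions:

```python
# Hedged sketch of a competitive (winner-take-all) attention dynamic.
# Stimulus names, weights, and constants are illustrative assumptions,
# not values taken from the paper.

def update_attention(activations, stimuli, weights, dt=0.1,
                     decay=0.5, inhibition=0.3, habituation=0.1):
    """One Euler step of the competitive dynamic.

    activations: dict person -> current activation level
    stimuli:     dict person -> dict of stimulus name -> value in [0, 1]
    weights:     dict stimulus name -> relative weight (assumed)
    """
    total = sum(activations.values())
    new_act = {}
    for person, act in activations.items():
        # excitatory drive: weighted sum of this person's stimuli
        drive = sum(weights[s] * v for s, v in stimuli[person].items())
        # lateral inhibition from all competitors; habituation slowly
        # penalizes whoever holds attention, enabling turn-taking
        inhib = inhibition * (total - act)
        d_act = -decay * act + drive - inhib - habituation * act
        new_act[person] = max(0.0, act + dt * d_act)
    return new_act

def focus(activations):
    # the person with the highest activation wins the competition
    return max(activations, key=activations.get)
```

Because each activation is updated incrementally rather than recomputed from scratch, the focus of attention drifts smoothly toward a new winner only when the balance of stimuli changes significantly, which matches the smooth-transition behavior the abstract describes.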
Files in this item
This item appears in the following collection(s)
The item license is described as Attribution-NonCommercial-NoDerivatives 4.0 International