Show simple item record

dc.contributor.author: Tapia, Carlos Calvo
dc.contributor.author: Villacorta-Atienza, Jose Antonio
dc.contributor.author: Kastalskiy, Innokentiy
dc.contributor.author: Diez-Hermano, Sergio
dc.contributor.author: Sanchez-Jimenez, Abel
dc.contributor.author: Makarov, Valeri A.
dc.date.accessioned: 2026-02-27T18:44:10Z
dc.date.available: 2026-02-27T18:44:10Z
dc.date.issued: 2018
dc.identifier.citation: International Joint Conference on Neural Networks, October 2018
dc.identifier.uri: https://uvadoc.uva.es/handle/10324/83206
dc.description.abstract: Object handling and manipulation are vital skills for humans and autonomous humanoid robots. The fundamental bases of how our brain solves such tasks remain largely unknown. Here we develop a novel approach that addresses the problem of limb movements in time-evolving situations at an abstract cognitive level. We exploit the concept of generalized cognitive maps constructed in the so-called handspace by a neural network simulating a wave that simultaneously explores different subject actions, independently of the number of objects in the workspace. We show that the approach is scalable to limbs with minimalistic and redundant numbers of degrees of freedom (DOF). It also allows biasing the effort of reaching a target among different DOF.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: IEEE
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.title: Cognitive Neural Network Driving DoF-Scalable Limbs in Time-Evolving Situations
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: 10.1109/IJCNN.2018.8489562
dc.identifier.publicationfirstpage: 1
dc.identifier.publicationlastpage: 7
dc.peerreviewed: SI
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion

