<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-27T20:24:27Z</responseDate><request verb="GetRecord" identifier="oai:uvadoc.uva.es:10324/21061" metadataPrefix="etdms">https://uvadoc.uva.es/oai/request</request><GetRecord><record><header><identifier>oai:uvadoc.uva.es:10324/21061</identifier><datestamp>2021-06-23T11:20:23Z</datestamp><setSpec>com_10324_1168</setSpec><setSpec>com_10324_931</setSpec><setSpec>com_10324_894</setSpec><setSpec>col_10324_1302</setSpec></header><metadata><thesis xmlns="http://www.ndltd.org/standards/metadata/etdms/1.0/" xmlns:doc="http://www.lyncode.com/xoai" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.ndltd.org/standards/metadata/etdms/1.0/ http://www.ndltd.org/standards/metadata/etdms/1.0/etdms.xsd">
<title>Dynamic Facial Emotion Recognition Oriented to HCI Applications</title>
<creator>Marcos Pablos, Samuel</creator>
<creator>Gómez García-Bermejo, Jaime</creator>
<creator>Zalama Casanova, Eduardo</creator>
<creator>López, Joaquín</creator>
<subject>Robots</subject>
<subject>Virtual reality</subject>
<description>Scientific production</description>
<description>As part of a multimodal animated interface previously presented in [38], in this paper we describe a method for the dynamic recognition of facial emotions displayed in low-resolution streaming images. First, we address the detection of Action Units of the Facial Action Coding System using Active Shape Models and Gabor filters. Normalized outputs of the Action Unit recognition step are then used as inputs to a neural network inspired by the architecture of real cognitive systems, consisting of a habituation network plus a competitive network. Both the competitive and the habituation layers use differential equations, thus taking into account the dynamic information of facial expressions over time. Experimental results on live video sequences and on the Cohn-Kanade face database show that the proposed method achieves high recognition hit rates.</description>
<date>2016-11-23</date>
<date>2016-11-23</date>
<date>2015</date>
<type>info:eu-repo/semantics/article</type>
<identifier>Samuel Marcos Pablos, Jaime Gómez García-Bermejo, Eduardo Zalama Casanova, Joaquín López. Dynamic Facial Emotion Recognition Oriented to HCI Applications. Interacting with Computers. 2015, vol. 27, p. 99-119</identifier>
<identifier>0953-5438</identifier>
<identifier>http://uvadoc.uva.es/handle/10324/21061</identifier>
<identifier>10.1093/iwc/iwt057</identifier>
<identifier>99</identifier>
<identifier>2</identifier>
<identifier>119</identifier>
<identifier>Interacting with Computers</identifier>
<identifier>27</identifier>
<language>eng</language>
<relation>https://iwc.oxfordjournals.org/</relation>
<rights>info:eu-repo/semantics/openAccess</rights>
<rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</rights>
<rights>Attribution-NonCommercial-NoDerivatives 4.0 International</rights>
<publisher>Oxford University Press</publisher>
</thesis></metadata></record></GetRecord></OAI-PMH>