Show simple item record

dc.contributor.author: Chaves-Villota, Andrea
dc.contributor.author: Jimenez-Martín, Ana
dc.contributor.author: Jojoa Acosta, Mario Fernando
dc.contributor.author: Bahillo Martínez, Alfonso
dc.contributor.author: García-Domínguez, Juan Jesús
dc.date.accessioned: 2025-11-28T11:03:43Z
dc.date.available: 2025-11-28T11:03:43Z
dc.date.issued: 2026
dc.identifier.citation: Computer Speech & Language, 2026, vol. 96, p. 101873
dc.identifier.issn: 0885-2308
dc.identifier.uri: https://uvadoc.uva.es/handle/10324/80150
dc.description: Producción Científica
dc.description.abstract: Emotion Recognition (ER) has gained significant attention due to its importance in advanced human-machine interaction and its widespread real-world applications. In recent years, research on ER systems has focused on multiple key aspects, including the development of high-quality emotional databases, the selection of robust feature representations, and the implementation of advanced classifiers leveraging AI-based techniques. Despite this progress, ER still faces significant challenges and gaps that must be addressed to develop accurate and reliable systems. To systematically assess these critical aspects, particularly those centered on AI-based techniques, we employed the PRISMA methodology. We include journal and conference papers that provide essential insights into key parameters required for dataset development, involving emotion modeling (categorical or dimensional), the type of speech data (natural, acted, or elicited), the most common modalities integrated with acoustic and linguistic data from speech, and the technologies used. Similarly, following this methodology, we identified the key representative features that serve as critical emotional information sources in both modalities. For the acoustic modality, these included features extracted from the time and frequency domains, while for the linguistic modality, earlier embeddings and the most common transformer models were considered. In addition, Deep Learning (DL) and attention-based methods were analyzed for both. Given the importance of effectively combining these diverse features for improving ER, we then explore fusion techniques based on the level of abstraction. Specifically, we focus on traditional approaches, including feature-, decision-, DL-, and attention-based fusion methods. Next, we provide a comparative analysis to assess the performance of the approaches included in our study. Our findings indicate that for the most commonly used datasets in the literature, IEMOCAP and MELD, the integration of acoustic and linguistic features reached a weighted accuracy (WA) of 85.71% and 63.80%, respectively. Finally, we discuss the main challenges and propose future guidelines that could enhance the performance of ER systems using acoustic and linguistic features from speech.
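As an illustration of the fusion strategies compared in the abstract, the minimal Python sketch below contrasts feature-level (early) fusion with decision-level (late) fusion of acoustic and linguistic utterance representations, and scores both with a simple weighted-accuracy function. The array shapes, the random stand-in classifiers, and the labels are illustrative assumptions only; they are not the models or datasets evaluated in the review.

# Minimal sketch (illustrative only): feature-level vs. decision-level fusion
# of acoustic and linguistic representations for emotion classification.
import numpy as np

rng = np.random.default_rng(0)
n_utterances, n_classes = 8, 4                      # hypothetical batch size and emotion classes
acoustic = rng.normal(size=(n_utterances, 128))     # e.g. pooled spectral/prosodic features
linguistic = rng.normal(size=(n_utterances, 768))   # e.g. pooled transformer text embeddings

def softmax(z):
    # Row-wise softmax with max subtraction for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def linear_head(x, n_out, seed):
    # Stand-in for a trained classifier: a random linear projection plus softmax.
    w = np.random.default_rng(seed).normal(size=(x.shape[1], n_out)) * 0.01
    return softmax(x @ w)

# Feature-level (early) fusion: concatenate modalities, then classify once.
early_probs = linear_head(np.concatenate([acoustic, linguistic], axis=1), n_classes, seed=1)

# Decision-level (late) fusion: classify each modality separately, then average posteriors.
late_probs = 0.5 * linear_head(acoustic, n_classes, seed=2) + \
             0.5 * linear_head(linguistic, n_classes, seed=3)

def weighted_accuracy(y_true, y_pred):
    # Weighted accuracy (WA), here simply the overall fraction of correct predictions.
    return float(np.mean(y_true == y_pred))

y_true = rng.integers(0, n_classes, size=n_utterances)  # placeholder labels
print("early-fusion WA:", weighted_accuracy(y_true, early_probs.argmax(axis=1)))
print("late-fusion  WA:", weighted_accuracy(y_true, late_probs.argmax(axis=1)))

DL- and attention-based fusion, as discussed in the review, would replace the fixed concatenation or averaging above with learned joint layers or learned modality weights.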
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Elsevier Ltd.
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject.classification: Emotion recognition
dc.subject.classification: Speech
dc.subject.classification: Linguistic
dc.subject.classification: Acoustic
dc.subject.classification: Fusion
dc.subject.classification: Deep learning
dc.subject.classification: Machine learning
dc.subject.classification: Low and high-level features
dc.title: Deep feature representations and fusion strategies for speech emotion recognition from acoustic and linguistic modalities: A systematic review
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: 10.1016/j.csl.2025.101873
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S0885230825000981
dc.identifier.publicationfirstpage: 101873
dc.identifier.publicationtitle: Computer Speech & Language
dc.identifier.publicationvolume: 96
dc.peerreviewed: SI
dc.description.project: FrailAlert project SBPLY/21/180501/000216, co-funded by the Junta de Comunidades de Castilla-La Mancha and the European Union through the European Regional Development Fund (ERDF)
dc.description.project: ActiTracker TED2021-130867B-I00, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR
dc.description.project: INDRI (PID2021-122642OB-C41 / AEI/10.13039/501100011033 / FEDER, UE)
dc.description.project: Ministerio de Ciencia e Innovación, project PID2023-146254OB-C41
dc.rights: Attribution 4.0 International
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion

