Please use this identifier to cite or link to this item: https://uvadoc.uva.es/handle/10324/72899
Title
Unraveling motor imagery brain patterns using explainable artificial intelligence based on Shapley values
Author
Document Year
2024
Publisher
Elsevier
Description
Scientific Production
Source Document
Computer Methods and Programs in Biomedicine, 2024, vol. 246, 108048
Abstract
Background and objective. Motor imagery (MI)-based brain-computer interfaces (BCIs) are widely used in rehabilitation due to the close relationship between MI and motor execution (ME). However, the underlying brain mechanisms of MI remain poorly understood. Most MI-BCIs use the sensorimotor rhythms elicited in the primary motor cortex (M1) and somatosensory cortex (S1), which consist of an event-related desynchronization followed by an event-related synchronization. This has resulted in systems that record signals only around M1 and S1. However, MI could involve a more complex network including sensory, association, and motor areas. In this study, we hypothesize that the superior accuracies achieved by new deep learning (DL) models applied to MI decoding rely on capturing a broader MI-related activation of the brain. In parallel with the success of DL, the field of explainable artificial intelligence (XAI) has developed continuously to explain the success of DL networks. The goal of this study is to use XAI in combination with DL to extract information about MI brain activation patterns from non-invasive electroencephalography (EEG) signals.
Methods. We applied an adaptation of Shapley additive explanations (SHAP) to EEGSym, a state-of-the-art DL network with exceptional transfer learning capabilities for inter-subject MI classification. We obtained the SHAP values from two public databases comprising 171 users generating left- and right-hand MI instances with and without real-time feedback.
Results. We found that EEGSym based most of its predictions on the signals of the frontal electrodes, i.e., F7 and F8, and on the first 1500 ms of the analyzed imagination period. We also found that MI involves a broad network based not only on M1 and S1, but also on the prefrontal cortex (PFC) and the posterior parietal cortex (PPC). We further applied this knowledge to select an 8-electrode configuration that reached inter-subject accuracies of 86.5% ± 10.6% on the Physionet dataset and 88.7% ± 7.0% on the Carnegie Mellon University dataset.
Conclusion. Our results demonstrate the potential of combining DL and SHAP-based XAI to unravel the brain network involved in producing MI. Furthermore, SHAP values can help optimize the requirements for out-of-laboratory BCI applications involving real users.
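As an illustration of the Methods, the sketch below shows how per-electrode and per-sample SHAP attributions can be obtained for a trained deep EEG classifier with the public shap library. It is a minimal sketch under assumptions, not the authors' exact adaptation to EEGSym: model, X_background, and X_test are hypothetical placeholders for a trained Keras network and EEG epochs shaped (trials, samples, channels).

import numpy as np
import shap

# Background set: a small sample of training epochs used as the
# reference distribution for the explainer (hypothetical data).
rng = np.random.default_rng(0)
background = X_background[rng.choice(len(X_background), 100, replace=False)]

# GradientExplainer approximates SHAP values for differentiable models.
explainer = shap.GradientExplainer(model, background)

# Attributions share the input shape: one value per time sample and
# electrode (assumed here to be returned as a list, one entry per class).
shap_values = explainer.shap_values(X_test)

# Average absolute attributions over trials and time samples to rank
# electrodes by their mean contribution to the first class.
channel_importance = np.mean(np.abs(shap_values[0]), axis=(0, 1))

Aggregating the absolute attributions in this way is one plausible route from raw SHAP values to the electrode rankings described in the Results, e.g. for selecting a reduced electrode configuration.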
Keywords
Brain-computer interface (BCI)
Motor imagery (MI)
Explainable artificial intelligence (XAI)
Shapley additive explanations (SHAP)
Deep learning (DL)
Sensorimotor rhythms (SMR)
ISSN
0169-2607
Peer Reviewed
Yes
Sponsor
Ministerio de Ciencia e Innovación/FEDER (PDC2021-120775-I00, TED2021-129915B-I00, RTC2019-007350-1, PID2020-115468RB-I00)
Comisión Europea/FEDER (EUROAGE+)
Instituto de Salud Carlos III/FEDER (CIBER-BBN)
Junta de Castilla y León-Consejería de Educación
Publisher's Version
Rights Holder
© 2024 The Author(s)
Language
eng
Version Type
info:eu-repo/semantics/publishedVersion
Rights
openAccess
Appears in Collections
Files in this item
Name:
Size:
1.257 MB
Format:
Adobe PDF
The item license is described as Attribution-NonCommercial 4.0 International