Show simple item record

dc.contributor.author: Barroso-García, Verónica
dc.contributor.author: Vaquerizo-Villar, Fernando
dc.contributor.author: Gutiérrez-Tobal, Gonzalo C.
dc.contributor.author: Dayyat, Ehab
dc.contributor.author: Gozal, David
dc.contributor.author: Leppänen, Timo
dc.contributor.author: Hornero, Roberto
dc.date.accessioned: 2025-12-04T12:00:11Z
dc.date.available: 2025-12-04T12:00:11Z
dc.date.issued: 2025-10-24
dc.identifier.citation: IEEE Journal of Translational Engineering in Health and Medicine, October 2025, vol. 13, 517-531 [es]
dc.identifier.issn: 2168-2372 [es]
dc.identifier.uri: https://uvadoc.uva.es/handle/10324/80306
dc.description: Producción Científica [es]
dc.description.abstract: Objective: Approaches based on single-channel airflow have shown great potential for simplifying pediatric obstructive sleep apnea (OSA) diagnosis. However, analysis has been limited to feature-engineering techniques, restricting the identification of complex respiratory patterns and reducing diagnostic performance in automated models. Here, we propose deep learning and explainable artificial intelligence (XAI) to estimate pediatric OSA severity from airflow while ensuring transparency in automatic decisions. Technology or Method: We used 3,672 overnight airflow recordings from four pediatric datasets. A convolutional neural network (CNN)-based regression model was trained to estimate the apnea-hypopnea index (AHI) and predict OSA severity. We evaluated and compared Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP) to identify the airflow regions on which the CNN focuses for its predictions. Results: The proposed model demonstrated high concordance between the actual and estimated AHI (intraclass correlation coefficient from 0.69 to 0.87 in the test group) and high diagnostic performance: four-class Cohen's kappa between 0.37 and 0.43 and accuracies of 82.03%, 97.09%, and 99.03% for the three OSA severity cutoffs (i.e., 1, 5, and 10 events/h) in the test group. The interpretability analysis with Grad-CAM and SHAP revealed that the CNN accurately identifies apneic events by focusing on their onset and offset. Both techniques provided complementary information about the model's decision-making: while Grad-CAM highlighted respiratory events with abrupt signal changes, SHAP also captured more subtle patterns, including noise. Conclusions: Accordingly, our model can help automatically detect pediatric OSA and offers clinicians an explainable approach that enhances credibility and usability, thus providing a path toward clinical translation in early diagnosis. Clinical Impact: This study presents an interpretable deep-learning tool that uses airflow to accurately detect pediatric obstructive sleep apnea, enabling early, objective diagnosis and supporting clinical decision-making through the identification of relevant respiratory patterns. [es]
dc.format.mimetype: application/pdf [es]
dc.language.iso: spa [es]
dc.publisher: IEEE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC [es]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [es]
dc.subject.classification: Airflow [es]
dc.subject.classification: children [es]
dc.subject.classification: convolutional neural network (CNN) [es]
dc.subject.classification: deep-learning (DL) [es]
dc.subject.classification: explainable artificial intelligence (XAI) [es]
dc.subject.classification: obstructive sleep apnea (OSA) [es]
dc.title: An Explainable Deep-Learning Approach to Detect Pediatric Sleep Apnea From Single-Channel Airflow [es]
dc.type: info:eu-repo/semantics/article [es]
dc.rights.holder: © 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. [es]
dc.identifier.doi: 10.1109/JTEHM.2025.3625388 [es]
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/11216356 [es]
dc.identifier.publicationfirstpage: 517 [es]
dc.identifier.publicationlastpage: 531 [es]
dc.identifier.publicationtitle: IEEE Journal of Translational Engineering in Health and Medicine [es]
dc.identifier.publicationvolume: 13 [es]
dc.peerreviewed: SI [es]
dc.description.project: This work is part of the projects PID2023-148895OB-I00 and CPP2022-009735, funded by MICIU/AEI/10.13039/501100011033, the FSE+, and the European Union "NextGenerationEU"/PRTR. This research was also co-funded by the European Union through the Interreg VI-A Spain-Portugal Program (POCTEP) 2021-2027 (0043_NET4SLEEP_2_E), and by "CIBER—Consorcio Centro de Investigación Biomédica en Red" (CB19/01/00012) through "Instituto de Salud Carlos III (ISCIII)", co-funded with European Regional Development Fund. D. Gozal was supported by "National Institutes of Health (NIH)" grant HL166617. T. Leppänen was supported by research funding from the State Research Funding for university-level health research, Kuopio University Hospital, Wellbeing Service County of North Savo (projects 5041820) and the Research Council of Finland (361199). [es]
dc.identifier.essn: 2168-2372 [es]
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion [es]
dc.subject.unesco: 1203.04 Inteligencia Artificial [es]
dc.subject.unesco: 3325 Tecnología de las Telecomunicaciones [es]
dc.subject.unesco: 3314 Tecnología Médica [es]
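
The abstract above describes a convolutional network that regresses the apnea-hypopnea index (AHI) from a single-channel airflow signal, with Grad-CAM and SHAP used to show which parts of the signal drive each prediction. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only: the AirflowAHIRegressor and grad_cam_1d names, the layer sizes, the 20-minute segment length, and the 25 Hz sampling rate are assumptions made for this example and do not reproduce the architecture, preprocessing, or training setup reported in the paper.

# Illustrative sketch (not the authors' model): a 1-D CNN that maps a
# single-channel airflow segment to a scalar AHI estimate, plus a
# Grad-CAM-style relevance map over the time axis of the last conv layer.
import torch
import torch.nn as nn

class AirflowAHIRegressor(nn.Module):
    def __init__(self, n_filters=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size=25, stride=2, padding=12),
            nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=25, stride=2, padding=12),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)   # global average pooling over time
        self.head = nn.Linear(n_filters, 1)   # scalar AHI estimate (events/h)

    def forward(self, x):                     # x: (batch, 1, n_samples)
        self.activations = self.features(x)   # kept for the Grad-CAM step
        return self.head(self.pool(self.activations).squeeze(-1)).squeeze(-1)

def grad_cam_1d(model, segment):
    """Grad-CAM for a regression output: channel weights are the time-averaged
    gradients of the prediction w.r.t. the last convolutional feature maps."""
    ahi_hat = model(segment)
    grads, = torch.autograd.grad(ahi_hat.sum(), model.activations)
    weights = grads.mean(dim=-1, keepdim=True)                  # (batch, channels, 1)
    cam = torch.relu((weights * model.activations).sum(dim=1))  # (batch, time')
    return ahi_hat, cam

if __name__ == "__main__":
    # Dummy 20-minute airflow segment at an assumed 25 Hz sampling rate.
    model = AirflowAHIRegressor()
    airflow = torch.randn(1, 1, 20 * 60 * 25)
    ahi_hat, cam = grad_cam_1d(model, airflow)
    print(ahi_hat.item(), cam.shape)

High values in cam mark the airflow regions the network relied on for its AHI estimate, which is the kind of relevance map the abstract reports comparing against SHAP values to verify that the model focuses on the onset and offset of respiratory events.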

