Please use this identifier to cite or link to this item: https://uvadoc.uva.es/handle/10324/74130
Title
Deep learning for obstructive sleep apnea diagnosis based on single channel oximetry
Document Year
2023
Publisher
Springer Nature
Description
Scientific Output
Source Document
Nature Communications, 2023, vol. 14, p. 4881
Abstract
Obstructive sleep apnea (OSA) is a serious medical condition with a high prevalence, although diagnosis remains a challenge. Existing home sleep tests may provide acceptable diagnostic performance but have shown several limitations. In this retrospective study, we used 12,923 polysomnography recordings from six independent databases to develop and evaluate a deep learning model, called OxiNet, for the estimation of the apnea-hypopnea index from the oximetry signal. We evaluated OxiNet performance across ethnicity, age, sex, and comorbidity. OxiNet missed 0.2% of all test-set moderate-to-severe OSA patients, against 21% for the best benchmark.
ISSN
2041-1723
Peer Reviewed
Yes
Sponsor
J.A.B. and J.L. acknowledge the financial support of Israel PBC-VATAT and of the Technion Center for Machine Learning and Intelligent Systems (MLIS). D.Á. is supported by a "Ramón y Cajal" grant (RYC2019-028566-I) from the "Ministerio de Ciencia e Innovación - Agencia Estatal de Investigación", co-funded by the European Social Fund, and in part by Sociedad Española de Neumología y Cirugía Torácica (SEPAR) under project 649/2018 and by Sociedad Española de Sueño (SES) under the project "Beca de Investigación SES 2019". In addition, D.Á. has been partially supported by "CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBERBBN)" through "Instituto de Salud Carlos III", co-funded with FEDER funds.
Publisher's Version
Rights Holder
Levy L et al.
Language
eng
Version Type
info:eu-repo/semantics/publishedVersion
Rights
openAccess
Appears in Collections
Files in this item
Size:
3.336 MB
Format:
Adobe PDF
Description:
Published version (Open Access)
The item license is described as Attribution-NonCommercial-NoDerivatives 4.0 International