Show simple item record
Metadata field | Value | Language
dc.contributor.author | Stuermer, Leandro | |
dc.contributor.author | Braga Vieira, Sabrina | |
dc.contributor.author | Martín Herranz, Raúl | |
dc.date.accessioned | 2025-02-26T13:36:20Z | |
dc.date.available | 2025-02-26T13:36:20Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Ophthalmic and Physiological Optics, 2025, vol. 45, n. 2, p. 437-449 | es |
dc.identifier.issn | 0275-5408 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/75141 | |
dc.description | Producción Científica | es |
dc.description.abstract | Purpose: To propose a novel artificial intelligence (AI)-based virtual assistant trained on tabular clinical data that can provide decision-making support in primary eye care practice and optometry education programmes. Method: Anonymised clinical data from 1125 complete optometric examinations (2250 eyes; 63% women, 37% men) were used to train different machine learning algorithm models to predict eye examination classification (refractive, binocular vision dysfunction, ocular disorder or any combination of these three options). After modelling, adjustment, mining and preprocessing (one-hot encoding and SMOTE techniques), 75 input (preliminary data, history, oculomotor test and ocular examinations) and three output (refractive, binocular vision status and eye disease) features were defined. The data were split into training (80%) and test (20%) sets. Five machine learning algorithms were trained, and the best algorithms were subjected to fivefold cross-validation. Model performance was evaluated for accuracy, precision, sensitivity, F1 score and specificity. Results: The random forest algorithm was the best for classifying eye examination results with a performance >95.2% (based on 35 input features from preliminary data and history), to propose a subclassification of ocular disorders with a performance >98.1% (based on 65 features from preliminary data, history and ocular examinations) and to differentiate binocular vision dysfunctions with a performance >99.7% (based on 30 features from preliminary data and oculomotor tests). These models were integrated into a responsive web application, available in three languages, allowing intuitive access to the AI models via conventional clinical terms. Conclusions: An AI-based virtual assistant that performed well in predicting patient classification, eye disorders or binocular vision dysfunction has been developed with potential use in primary eye care practice and education programmes. | es |
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | Wiley | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject.classification | artificial intelligence | es |
dc.subject.classification | clinical decision support | es |
dc.subject.classification | machine learning | es |
dc.subject.classification | optometry | es |
dc.subject.classification | virtual assistant | es |
dc.title | Artificial intelligence virtual assistants in primary eye care practice | es |
dc.type | info:eu-repo/semantics/article | es |
dc.rights.holder | © 2024 The Author(s) | es |
dc.identifier.doi | 10.1111/opo.13435 | es |
dc.relation.publisherversion | https://onlinelibrary.wiley.com/doi/10.1111/opo.13435 | es |
dc.identifier.publicationfirstpage | 437 | es |
dc.identifier.publicationissue | 2 | es |
dc.identifier.publicationlastpage | 449 | es |
dc.identifier.publicationtitle | Ophthalmic and Physiological Optics | es |
dc.identifier.publicationvolume | 45 | es |
dc.peerreviewed | SI | es |
dc.identifier.essn | 1475-1313 | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
dc.subject.unesco | 2209.15 Optometry | es
dc.subject.unesco | 1203.04 Artificial Intelligence | es
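The abstract above describes a tabular machine-learning workflow: one-hot encoding, SMOTE class balancing, an 80/20 train-test split, a random forest selected from five candidate algorithms, fivefold cross-validation and evaluation by accuracy, precision, sensitivity, F1 score and specificity. The following is only a minimal Python sketch of that kind of pipeline, not the authors' implementation: the file name, column names and hyperparameters are hypothetical, and a single classification target is used for simplicity, whereas the paper trains separate models for examination classification, ocular-disorder subclassification and binocular vision dysfunction.

# Minimal sketch of the pipeline described in the abstract (assumptions noted below).
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical tabular export of anonymised optometric examinations;
# "classification" stands in for the refractive / binocular / ocular-disorder label.
data = pd.read_csv("eye_examinations.csv")
X = pd.get_dummies(data.drop(columns=["classification"]))  # one-hot encode categorical inputs
y = data["classification"]

# 80/20 train-test split, as reported in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# SMOTE oversampling to balance the training classes before fitting.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Random forest was reported as the best-performing of the five algorithms tried.
model = RandomForestClassifier(random_state=42)
scores = cross_val_score(model, X_bal, y_bal, cv=5, scoring="accuracy")  # fivefold cross-validation
model.fit(X_bal, y_bal)

print("5-fold CV accuracy:", scores.mean())
# classification_report gives precision, recall (sensitivity) and F1 per class on the held-out set.
print(classification_report(y_test, model.predict(X_test)))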
Files in this item
This item appears in the following collection(s)
