Show simple item record
dc.contributor.author | Romero Oraa, Roberto | |
dc.contributor.author | Herrero Tudela, María | |
dc.contributor.author | López Gálvez, María Isabel | |
dc.contributor.author | Hornero Sánchez, Roberto | |
dc.contributor.author | García Gadañón, María | |
dc.date.accessioned | 2024-12-20T11:44:07Z | |
dc.date.available | 2024-12-20T11:44:07Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Computer Methods and Programs in Biomedicine, 2024, vol. 249, 108160 | es |
dc.identifier.issn | 0169-2607 | es |
dc.identifier.uri | https://uvadoc.uva.es/handle/10324/72951 | |
dc.description | Producción Científica | es |
dc.description.abstract | Background and objective: Early detection and grading of Diabetic Retinopathy (DR) is essential to determine an adequate treatment and prevent severe vision loss. However, the manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the limited availability of human graders. Current automatic approaches for DR grading attempt the joint detection of all signs at the same time. However, classification can be optimized if red lesions and bright lesions are processed independently, since the task is divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention of the dark structures from the bright structures of the retina. As the main contribution, this approach allowed us to generate independent interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. Methods: Our approach is based on a novel attention mechanism which focuses separately on the dark and the bright structures of the retina by performing a prior image decomposition. This mechanism can be seen as an XAI approach which generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning-related techniques, such as data augmentation, transfer learning and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. Results: The Kaggle DR detection dataset was used for method development and validation. The proposed approach achieved 83.7% accuracy and a Quadratic Weighted Kappa of 0.78 when classifying DR among 5 severity degrees, which outperforms several state-of-the-art approaches. Nevertheless, the main result of this work is the generated attention maps, which reveal the pathological regions of the image, distinguishing red lesions from bright lesions. These maps provide explainability to the model predictions. Conclusions: Our results suggest that our framework is effective for automatically grading DR. The separate attention approach has proven useful for optimizing the classification. Moreover, the obtained attention maps facilitate visual interpretation for clinicians. Therefore, the proposed method could be a diagnostic aid for the early detection and grading of DR. | es
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.publisher | Elsevier | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject.classification | Diabetic retinopathy grading | es |
dc.subject.classification | Fundus images | es |
dc.subject.classification | Deep learning | es |
dc.subject.classification | Attention mechanism | es |
dc.subject.classification | Explainable artificial intelligence | es |
dc.title | Attention-based deep learning framework for automatic fundus image processing to aid in diabetic retinopathy grading | es |
dc.type | info:eu-repo/semantics/article | es |
dc.rights.holder | © 2024 The Authors | es |
dc.identifier.doi | 10.1016/j.cmpb.2024.108160 | es |
dc.relation.publisherversion | https://www.sciencedirect.com/science/article/pii/S0169260724001561 | es |
dc.identifier.publicationfirstpage | 108160 | es |
dc.identifier.publicationtitle | Computer Methods and Programs in Biomedicine | es |
dc.identifier.publicationvolume | 249 | es |
dc.peerreviewed | SI | es |
dc.description.project | Ministerio de Ciencia e Innovación (PID2020-115468RB-I00) | es |
dc.description.project | Ministerio de Ciencia e Innovación/AEI/Unión Europea-Next Generation EU (TED2021-131913B-I00) | es |
dc.description.project | Universidad de Valladolid (PIF-UVa) | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.type.hasVersion | info:eu-repo/semantics/publishedVersion | es |
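As a rough illustration of the methods summarized in the abstract above (an Xception backbone used as a feature extractor via transfer learning, and a focal loss to handle class imbalance across the 5 DR severity grades), the following minimal Keras sketch shows how such a classifier could be assembled. This is an assumption-based sketch, not the authors' implementation: the input size, gamma/alpha values and optimizer settings are illustrative only, and the attention mechanism and image quality assessment stage described in the paper are not reproduced here.

```python
# Minimal sketch (assumed, not the authors' code) of a 5-class DR-grading classifier:
# Xception as a frozen feature extractor (transfer learning) plus a focal loss
# to mitigate class imbalance, as outlined in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_CLASSES = 5               # DR severity grades 0-4
INPUT_SHAPE = (299, 299, 3)   # assumed input size (Xception's default)

def build_model():
    # Pre-trained Xception backbone used as a feature extractor.
    backbone = Xception(include_top=False, weights="imagenet",
                        input_shape=INPUT_SHAPE, pooling="avg")
    backbone.trainable = False  # frozen first; fine-tuning would unfreeze it later
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = backbone(inputs, training=False)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    # Focal loss down-weights well-classified examples; gamma/alpha are
    # illustrative defaults, not values taken from the paper.
    def loss(y_true, y_pred):           # y_true is assumed one-hot encoded
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        cross_entropy = -y_true * tf.math.log(y_pred)
        weight = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * cross_entropy, axis=-1)
    return loss

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=categorical_focal_loss(),
              metrics=["accuracy"])
```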
Files in this item
This item appears in the following collection(s)
The item license is described as Attribution-NonCommercial-NoDerivatives 4.0 International