RT info:eu-repo/semantics/masterThesis
T1 The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency
A1 Fernández Álvarez, Raúl
A2 Universidad de Valladolid. Escuela de Ingeniería Informática de Valladolid
K1 XAI
K1 Human-caused fires
K1 Machine learning
AB An explainable machine learning model is a requirement for trust. Without it, the human operator cannot form a correct mental model and will distrust and reject the machine learning model. Nobody will ever trust a system that exhibits apparently erratic behaviour. The development of eXplainable AI (XAI) techniques tries to uncover how a model works internally and the reasons why it makes some predictions and not others. But the ultimate objective is to use these techniques to guide the training and deployment of fair automated decision systems that support human agency and are beneficial to humanity. In addition, automated decision systems based on Machine Learning models are being used for an increasing number of purposes. However, the use of black-box models and massive quantities of data to train them makes the deployed models inscrutable. Consequently, predictions made by systems integrating these models might provoke rejection by their users when they make seemingly arbitrary predictions. Moreover, the risk is compounded by the use of models in high-risk environments or in situations where the predictions might have serious consequences.
YR 2020
FD 2020
LK http://uvadoc.uva.es/handle/10324/44458
UL http://uvadoc.uva.es/handle/10324/44458
LA eng
NO Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
DS UVaDOC
RD 17-jul-2024