dc.contributor.advisor | Díaz Gómez, Fernando | es |
dc.contributor.author | Fernández Álvarez, Raúl | |
dc.contributor.editor | Universidad de Valladolid. Escuela de Ingeniería Informática de Valladolid | es |
dc.date.accessioned | 2020-12-18T16:48:03Z | |
dc.date.available | 2020-12-18T16:48:03Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | http://uvadoc.uva.es/handle/10324/44458 | |
dc.description.abstract | An explainable machine learning model is a requirement for trust. Without
it, the human operator cannot form a correct mental model and will distrust
and reject the machine learning model. Nobody will ever trust a system
that exhibits apparently erratic behaviour.
eXplainable AI (XAI) techniques try to uncover how a model works
internally and why it makes some predictions and not others. The
ultimate objective, however, is to use these techniques to guide the
training and deployment of fair automated decision systems that
support human agency and are beneficial to humanity.
In addition, automated decision systems based on machine learning
models are being used for an increasing number of purposes. However,
the use of black-box models and of massive quantities of training data
makes the deployed models inscrutable. Consequently, systems
integrating these models might provoke rejection by their users when
they make seemingly arbitrary predictions. The risk is compounded
by the use of such models in high-risk environments or in situations
where the predictions might have serious consequences. | es |
dc.description.sponsorship | Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos) | es |
dc.format.mimetype | application/pdf | es |
dc.language.iso | eng | es |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject.classification | XAI | es |
dc.subject.classification | Human-caused fires | es |
dc.subject.classification | Machine learning | es |
dc.title | The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency | es |
dc.type | info:eu-repo/semantics/masterThesis | es |
dc.description.degree | Máster en Ingeniería Informática | es |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 Internacional | * |