    Citation

    Please use this identifier to cite or link to this item: http://uvadoc.uva.es/handle/10324/44458

    Title
    The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency
    Author
    Fernández Álvarez, Raúl
    Director or Supervisor
    Díaz Gómez, Fernando
    Publisher
    Universidad de Valladolid. Escuela de Ingeniería Informática de Valladolid
    Year
    2020
    Degree
    Máster en Ingeniería Informática
    Abstract
    An explainable machine learning model is a requirement for trust. Without it, the human operator cannot form a correct mental model and will distrust and reject the machine learning model. Nobody will ever trust a system that exhibits apparently erratic behaviour. eXplainable AI (XAI) techniques try to uncover how a model works internally and why it makes some predictions and not others. But the ultimate objective is to use these techniques to guide the training and deployment of fair automated decision systems that support human agency and are beneficial to humanity. In addition, automated decision systems based on Machine Learning models are being used for an increasing number of purposes. However, the use of black-box models and massive quantities of data to train them makes the deployed models inscrutable. Consequently, predictions made by systems integrating these models might provoke rejection by their users when those predictions seem arbitrary. Moreover, the risk is compounded by the use of such models in high-risk environments or in situations where the predictions might have serious consequences.
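    The abstract refers to XAI techniques that uncover how a model works internally. As a minimal sketch of one such model-agnostic technique (permutation feature importance via scikit-learn), the following Python example is illustrative only: the synthetic dataset and the RandomForest model are assumptions, not the pipeline used in the thesis.

    # A minimal sketch of one model-agnostic XAI technique: permutation
    # feature importance. The synthetic data and RandomForest model are
    # illustrative assumptions, not the method used in the thesis.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; the thesis's real domain (human-caused
    # fires) would supply its own features and labels.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test score:
    # large drops mark features the black-box model actually relies on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, mean in enumerate(result.importances_mean):
        print(f"feature {i}: importance {mean:.3f}")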
    Keywords
    XAI
    Human-caused fires
    Machine learning
    Department
    Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
    Language
    eng
    URI
    http://uvadoc.uva.es/handle/10324/44458
    Rights
    openAccess
    Collections
    • Trabajos Fin de Máster UVa [7034]
    Files in this item
    Name:
    TFM-G1311.pdf
    Size:
    72.57 MB
    Format:
    Adobe PDF
    Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 Internacional.
