


    Citation
    Please use this identifier to cite or link to this item: http://uvadoc.uva.es/handle/10324/44458

    Title
    The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency
    Author
    Fernández Álvarez, Raúl
    Director or Tutor
    Díaz Gómez, Fernando
    Publisher
    Universidad de Valladolid. Escuela de Ingeniería Informática de Valladolid
    Year
    2020
    Degree
    Máster en Ingeniería Informática
    Abstract
    An explainable machine learning model is a requirement for trust. Without it, the human operator cannot form a correct mental model and will distrust and reject the machine learning model. Nobody will ever trust a system that exhibits apparently erratic behaviour. eXplainable AI (XAI) techniques try to uncover how a model works internally and why it makes some predictions and not others. The ultimate objective, however, is to use these techniques to guide the training and deployment of fair automated decision systems that support human agency and are beneficial to humanity. Automated decision systems based on machine learning models are being used for an increasing number of purposes, yet the use of black-box models and of massive quantities of training data makes the deployed models inscrutable. Consequently, predictions made by systems integrating these models may be rejected by their users when those predictions seem arbitrary. The risk is compounded when such models are used in high-risk environments or in situations where their predictions may have serious consequences.
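    As a concrete illustration of the kind of technique the abstract describes (not a method taken from the thesis itself), the sketch below applies permutation feature importance, a common model-agnostic XAI method: each feature is shuffled in turn, and the resulting drop in test score shows which inputs the model's predictions actually depend on. The dataset, model, and parameters are illustrative assumptions standing in for any tabular classification task.

        # A minimal sketch of one model-agnostic XAI technique: permutation
        # feature importance. Dataset and model are illustrative placeholders,
        # not the ones used in the thesis.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Train an opaque ("black-box") model.
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Shuffle each feature and measure how much the test score drops:
        # large drops mark features the model relies on for its predictions.
        result = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
        for name, mean in sorted(zip(X.columns, result.importances_mean),
                                 key=lambda t: -t[1])[:5]:
            print(f"{name}: {mean:.3f}")

    Explanations of this kind give the human operator material from which to build the correct mental model the abstract calls for.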
    Keywords
    XAI
    Human-caused fires
    Machine learning
    Department
    Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
    Language
    eng
    URI
    http://uvadoc.uva.es/handle/10324/44458
    Rights
    openAccess
    Appears in collections
    • Trabajos Fin de Máster UVa [7002]
    File(s) in this item
    Name: TFM-G1311.pdf
    Size: 72.57 MB
    Format: Adobe PDF
    Attribution-NonCommercial-NoDerivatives 4.0 International
    Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.
