    Citation

    Please use this identifier to cite or link to this item: http://uvadoc.uva.es/handle/10324/44458

    Title
    The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency
    Author
    Fernández Álvarez, Raúl
    Director or Tutor
    Díaz Gómez, Fernando
    Publisher
    Universidad de Valladolid. Escuela de Ingeniería Informática de Valladolid
    Year of the Document
    2020
    Degree
    Máster en Ingeniería Informática
    Abstract
    An explainable machine learning model is a requirement for trust. Without it, the human operator cannot form a correct mental model and will distrust and reject the machine learning model; nobody will ever trust a system that exhibits apparently erratic behaviour. eXplainable AI (XAI) techniques try to uncover how a model works internally and why it makes some predictions and not others. The ultimate objective is to use these techniques to guide the training and deployment of fair automated decision systems that support human agency and benefit humanity. Automated decision systems based on machine learning models are being used for an increasing number of purposes, but the black-box models and the massive quantities of data used to train them make the deployed systems inscrutable. Consequently, predictions made by systems integrating these models may be rejected by their users when those predictions seem arbitrary. The risk is compounded when such models are used in high-risk environments or in situations where their predictions can have serious consequences.
    Keywords
    XAI
    Human-caused fires
    Machine learning
    Department
    Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
    Language
    eng
    URI
    http://uvadoc.uva.es/handle/10324/44458
    Rights
    openAccess
    Appears in collections
    • Trabajos Fin de Máster UVa [7002]
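
    The XAI techniques the abstract refers to aim to reveal which inputs drive a black-box model's predictions. As a minimal, hypothetical sketch of that idea, the Python snippet below uses scikit-learn's permutation importance as a stand-in for such explanation methods; the synthetic data and the random-forest model are illustrative assumptions and are not taken from the thesis itself.

    # Minimal sketch of a model-agnostic explanation, assuming scikit-learn.
    # Permutation importance stands in for an XAI technique: shuffling a
    # feature the model relies on should degrade held-out accuracy.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data (hypothetical; the thesis concerns human-caused fires).
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque "black-box" model of the kind the abstract describes.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Rank features by how much shuffling each one hurts test accuracy.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")

    Feature attributions like these are only one ingredient of explainability; the abstract's larger point is that such signals should guide how automated decision systems are trained and deployed.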
    Files in this item
    Name:
    TFM-G1311.pdf
    Size:
    72.57 MB
    Format:
    Adobe PDF
    Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.
