Publication Date:
2023
Abstract:
The ability to select the relevant portion of the input is a key feature for limiting the sensory stream and focusing on its most informative parts. The transformer architecture is among the best-performing deep neural network architectures, largely thanks to its attention mechanism. Attention spots relevant connections between portions of an image and highlights them. Since the model is complex, it is not easy to determine what these connections are and which areas matter most. We discuss a technique to visualize these areas and highlight the regions most relevant to label attribution.
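The full paper is not part of this record, but the attention maps the abstract refers to are built on standard scaled dot-product attention, whose weight rows can be read as relevance maps over image patches. A minimal sketch of that idea (shapes, names, and the self-attention setup are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over the key axis (row-wise).
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))    # 16 image patches, 8-dim embeddings (toy data)
W = attention_weights(patches, patches)  # self-attention: each patch attends to all

# Each row of W sums to 1: the row for a query patch is a distribution
# over all patches, and can be reshaped to the patch grid (here 4x4)
# and overlaid on the image to highlight the regions that influence it.
relevance_map = W[0].reshape(4, 4)
```

In a real vision transformer, such maps are typically averaged across heads and propagated through the layers before being upsampled onto the input image.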
CRIS Type:
04.01 Contribution in Conference Proceedings
Keywords:
Deep Neural Networks; Transformers; XAI; Attention
Authors:
Vella, Filippo; Rizzo, Riccardo
Published in: