Fairness auditing, explanation and debiasing in linguistic data and language models

Conference Proceedings Contribution
Publication Date:
2023
Abstract:
This research proposal is framed within an interdisciplinary exploration of the socio-cultural implications that AI exerts on individuals and groups. The focus concerns contexts where models can amplify discrimination through algorithmic biases, e.g., in recommendation and ranking systems or abusive language detection classifiers, and the debiasing of their automated decisions so that they become beneficial and just for everyone. To address these issues, the main objective of the proposed research project is to develop a framework for performing fairness auditing and debiasing of both classifiers and datasets, starting with, but not limited to, abusive language detection, and thus broadening the approach toward other NLP tasks. Ultimately, by questioning the effectiveness of adjusting and debiasing existing resources, the project aims to develop truly inclusive, fair, and explainable models by design.
CRIS Type:
04.01 Conference Proceedings Contribution
Keywords:
Responsible NLP; Explainability; Interpretability; Fairness
Authors:
MARCHIORI MANERBA, Marta
Link to full record:
https://iris.cnr.it/handle/20.500.14243/452076
Link to full text:
https://iris.cnr.it//retrieve/handle/20.500.14243/452076/136537/prod_490206-doc_204217.pdf
Book Title:
xAI-2023 - LB-D-DC xAI-2023 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings
Published in:
CEUR WORKSHOP PROCEEDINGS Series

General Information

URL

https://ceur-ws.org/Vol-3554/