Methods, models and tools for improving the quality of textual annotations

Academic Article
Publication Date:
2022
Abstract:
In multilingual textual archives, the availability of textual annotations, that is, keywords associated with texts either manually or automatically, is worth exploiting to improve the user experience and to support navigation, search and visualization. Tools for this exploitation therefore need to be studied and developed. The paper aims to define models and tools for handling textual annotations, in our case the keywords of a scientific library. Against an NLP background, machine learning and deep learning approaches are presented that allow the quality of the keywords to be increased in both supervised and unsupervised ways. The different steps of the pipeline are addressed, and different solutions are analyzed, implemented, evaluated and compared, using statistical methods, machine learning and artificial neural networks as appropriate; where possible, off-the-shelf solutions are also compared. The models are trained on datasets that are already available or created ad hoc to share common characteristics with the starting dataset. The results obtained are presented, discussed and compared with each other.
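The paper's keyword listed among its terms, "syntactic similarity", refers to string-level closeness between keyword variants. As an illustrative sketch only (not the authors' actual pipeline, which also covers word embeddings, sequence2sequence models and LSTMs), the near-duplicate keyword merging step could look like the following, using the standard-library `difflib` ratio as a stand-in similarity measure; the function names and the 0.85 threshold are assumptions for this example:

```python
from difflib import SequenceMatcher

def syntactic_similarity(a: str, b: str) -> float:
    # Similarity in [0, 1] based on longest matching subsequences (difflib ratio).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def merge_near_duplicates(keywords, threshold=0.85):
    # Greedily keep a keyword only if it is not near-identical
    # to a keyword that has already been kept.
    kept = []
    for kw in keywords:
        if all(syntactic_similarity(kw, k) < threshold for k in kept):
            kept.append(kw)
    return kept

raw = ["neural networks", "neural network", "machine learning",
       "machine-learning", "word embedding models"]
print(merge_near_duplicates(raw))
# → ['neural networks', 'machine learning', 'word embedding models']
```

A real pipeline of the kind the abstract describes would replace the character-level ratio with a semantic relatedness score from trained embedding models, so that, for example, synonymous keywords in different languages could also be merged.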
IRIS type:
01.01 Journal article (Articolo in rivista)
Keywords:
multilingual datasets; word embedding models; sequence2sequence; LSTM; neural networks; machine learning algorithms; semantic relatedness; syntactic similarity
List of contributors:
Gagliardi, Isabella; Artese, Maria Teresa
Authors of the University:
Artese, Maria Teresa
Gagliardi, Isabella
Handle:
https://iris.cnr.it/handle/20.500.14243/456759
Published in:
Modelling
Journal
URL:
https://www.mdpi.com/2673-3951/3/2/15