Publication date:
2022
Abstract:
We propose a semi-supervised learning strategy for deep Convolutional Neural Networks (CNNs) in which an unsupervised pre-training stage, performed using biologically inspired Hebbian learning algorithms, is followed by supervised end-to-end backprop fine-tuning. We explored two Hebbian learning rules for the unsupervised pre-training stage: soft-Winner-Takes-All (soft-WTA) and nonlinear Hebbian Principal Component Analysis (HPCA). Our approach was applied in sample-efficiency scenarios, where the amount of available labeled training samples is very limited and unsupervised pre-training is therefore beneficial. We performed experiments on the CIFAR10, CIFAR100, and Tiny ImageNet datasets. Our results show that Hebbian pre-training outperforms Variational Auto-Encoder (VAE) pre-training in almost all cases, with HPCA generally performing better than soft-WTA.
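The abstract names soft-WTA as one of the two Hebbian rules used for pre-training. As an illustration only, the following is a minimal sketch of a generic soft-WTA competitive Hebbian update for a single fully connected layer, not the authors' implementation: units compete via a softmax over their similarity to the input, and each unit's weight vector moves toward the input in proportion to its competition score. The function name, learning rate, and temperature are illustrative assumptions.

```python
import numpy as np

def soft_wta_update(W, x, lr=0.01, temperature=0.1):
    """One soft-WTA Hebbian step (illustrative sketch, not the chapter's code).

    W: (units, features) weight matrix; x: (features,) input vector.
    Each unit's weights move toward x, weighted by a softmax over
    similarities, so the best-matching unit learns the most.
    """
    sims = W @ x                                   # similarity of each unit to the input
    y = np.exp((sims - sims.max()) / temperature)  # numerically stable softmax
    y /= y.sum()                                   # soft competition scores, sum to 1
    return W + lr * y[:, None] * (x[None, :] - W)  # move each unit toward x

# Hypothetical usage: random weights, one unsupervised update step.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
W_new = soft_wta_update(W, x)
```

Because the update is a convex step toward the input, the unit most similar to `x` is pulled strictly closer to it, which is the clustering-like behavior that makes such rules useful for label-free pre-training.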
CRIS type:
02.01 Book contribution (Chapter or Essay)
Keywords:
Hebbian learning; Deep learning; Semi-supervised; Sample efficiency; Neural networks; Bio-inspired
Authors:
Amato, Giuseppe; Gennaro, Claudio; Falchi, Fabrizio
Link to the full record:
Link to the full text:
Book title:
Machine Learning, Optimization, and Data Science