Analyzing Forward Robustness of Feedforward Deep Neural Networks with LeakyReLU Activation Function Through Symbolic Propagation
Conference Paper
Publication Date:
2020
abstract:
The robustness of feedforward Deep Neural Networks (DNNs) is a relevant property to study, since it establishes whether the classification performed by a DNN is vulnerable to small perturbations of the provided input; several verification approaches have been developed to assess this degree of robustness. Recently, an approach based on symbolic computations was introduced to evaluate forward robustness, designed for the ReLU activation function. In this paper, that symbolic approach is generalized to the widely adopted LeakyReLU activation function. A preliminary numerical campaign, briefly discussed in the paper, shows interesting results.
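As an illustrative sketch only (not the paper's actual construction), propagating bounds on a perturbed input through an affine layer followed by LeakyReLU can be done with interval arithmetic; since LeakyReLU is monotone increasing, the output bounds are obtained by applying it to the pre-activation bounds. All function names and the choice of plain interval (rather than fully symbolic) propagation are assumptions for illustration:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # LeakyReLU: identity for x >= 0, slope alpha for x < 0
    return np.where(x >= 0, x, alpha * x)

def propagate_interval(W, b, lo, hi, alpha=0.01):
    """Sound interval bounds through one layer W @ x + b followed by LeakyReLU.

    lo, hi are element-wise bounds on the input x (lo <= x <= hi).
    Illustrative sketch, not the paper's symbolic method.
    """
    # Split weights into positive and negative parts to get tight affine bounds
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = Wp @ lo + Wn @ hi + b
    pre_hi = Wp @ hi + Wn @ lo + b
    # LeakyReLU is monotone increasing, so bounds map through it directly
    return leaky_relu(pre_lo, alpha), leaky_relu(pre_hi, alpha)
```

For example, with `W = [[1, -1]]`, `b = [0]`, and the input box `[-1, 1] x [-1, 1]`, the pre-activation range is `[-2, 2]`, which LeakyReLU (with `alpha = 0.01`) maps to `[-0.02, 2]`.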
IRIS type:
04.01 Contribution in conference proceedings
Keywords:
Deep Neural Network; LeakyReLU; Robustness
List of contributors:
Masetti, Giulio; DI GIANDOMENICO, Felicita
Book title:
ECML PKDD 2020 Workshops