DTA

Digital Archive of Electronic Theses and Final Papers

Thesis etd-12182023-124620

Thesis type
Doctorate
Author
BRAU, FABIO
URN
etd-12182023-124620
Title
Methods for Certifiable Robustness of Deep Neural Networks
Scientific disciplinary sector
ING-INF/05
Course of study
Istituto di Tecnologie della Comunicazione, dell'Informazione e della Percezione - PHD IN EMERGING DIGITAL TECHNOLOGIES
Committee
Supervisor: Prof. BUTTAZZO, GIORGIO CARLO
Tutor: Prof. BIONDI, ALESSANDRO
Chair: Prof. CUCINOTTA, TOMMASO
Member: Prof. COCOCCIONI, MARCO
Member: Prof. ROLI, FABIO
Member: YOUCHENG SUN
Keywords
  • robust neural networks
  • trustworthy AI
  • adversarial examples
Defense date
15/03/2024
Availability
partial
Abstract
This Ph.D. thesis addresses the critical issue of providing formal, certifiable guarantees on the robustness of deep neural networks against input perturbations, exploring novel methodologies across five chapters. First, it introduces the problem of online estimation of the minimal adversarial perturbation (MAP), approached from a geometric perspective. Second, it studies certifiably robust models built from Lipschitz-bounded neural networks. Finally, in pursuit of the same goal, a novel family of classifiers, signed distance classifiers (SDCs), is proposed and compared with the related family of Lipschitz-bounded models.

In detail, the thesis proposes two root-finding-based strategies for estimating the MAP and provides theoretical results on the quality of the estimate. These results can be leveraged for an online estimation of the robustness of a classifier at a given input that lies close enough to the classification boundary. Indeed, the approximate MAP obtained with the proposed approaches is less computationally expensive than that obtained with state-of-the-art methods, enabling fast estimation of the classifier's robustness at a given sample. Furthermore, the quality of the estimate is linked to a model-dependent value, named the boundary proximity index, which encapsulates the regularity of the decision boundary.

Subsequently, the thesis addresses the challenge of designing 1-Lipschitz neural networks, a tangible and effective route to certifiably robust classifiers. This work includes an extensive comparison of the current state-of-the-art methods for designing 1-Lipschitz DNNs, and goes further to offer practical suggestions and guidelines for the use of 1-Lipschitz layers, improving their effectiveness for the deployment of these models in safety-critical systems.

Finally, a new family of classifiers, named signed distance classifiers (SDCs), is discussed. A signed distance classifier outputs not only the predicted class of a given input $x$ but also the distance of $x$ from the classification boundary. The thesis provides a theoretical characterization of SDCs and proposes a tailored network architecture, named UGNN, which, to the best of our knowledge, represents the first practical approximation of an SDC.
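
To make the root-finding perspective concrete, the following minimal sketch (Python with NumPy; a generic bisection illustration with assumed names such as margin and boundary_crossing, not the thesis's actual strategies or their theoretical guarantees) searches along a fixed direction for the point where the classification margin changes sign; the norm of the resulting perturbation upper-bounds the MAP norm.

    # Hypothetical sketch: bisection along a direction d for the boundary
    # crossing of the classification margin. The crossing point x + t*d is
    # an adversarial example, so ||t*d|| upper-bounds the MAP norm at x.
    import numpy as np

    def margin(f, x, y):
        """Signed margin: positive iff the classifier f assigns x to class y."""
        logits = f(x)
        return logits[y] - np.delete(logits, y).max()

    def boundary_crossing(f, x, y, d, t_max=10.0, iters=50):
        """Smallest t in (0, t_max] with margin(x + t*d) <= 0, or None."""
        lo, hi = 0.0, t_max
        if margin(f, x + hi * d, y) > 0:
            return None  # the boundary is not crossed along d within t_max
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if margin(f, x + mid * d, y) > 0:
                lo = mid
            else:
                hi = mid
        return hi

    # Toy linear two-class model, whose exact MAP is known in closed form.
    w, b = np.array([1.0, -2.0]), 0.5
    f = lambda x: np.array([w @ x + b, 0.0])
    x0, y0 = np.array([1.0, 0.0]), 0
    d = -w / np.linalg.norm(w)           # steepest descent of the margin
    t = boundary_crossing(f, x0, y0, d)  # upper bound on the MAP norm
    print(t, abs(w @ x0 + b) / np.linalg.norm(w))  # ~0.6708 for both

For the toy linear model the bisection result matches the closed-form distance $|w \cdot x + b| / \|w\|$, which is the exact MAP of a linear classifier.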
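
The Lipschitz route can likewise be illustrated with a short sketch (assuming PyTorch; a textbook spectral-normalization construction, not any of the specific 1-Lipschitz layer designs compared in the thesis, nor the UGNN architecture): if every layer is 1-Lipschitz, the whole network is 1-Lipschitz, and the output margin at $x$ divided by $\sqrt{2}$ is a certified $\ell_2$ radius. Note that this margin only lower-bounds the distance to the boundary, whereas an SDC would output that distance exactly.

    # Sketch of the Lipschitz route: spectrally normalized linear layers
    # (spectral norm forced to at most 1) composed with 1-Lipschitz
    # activations give a 1-Lipschitz network in the l2 norm.
    import torch
    import torch.nn as nn

    class SpectralLinear(nn.Module):
        """Linear layer whose weight is rescaled to spectral norm <= 1."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
            self.bias = nn.Parameter(torch.zeros(d_out))

        def forward(self, x):
            sigma = torch.linalg.matrix_norm(self.weight, ord=2)
            w = self.weight / torch.clamp(sigma, min=1.0)  # shrink only
            return x @ w.T + self.bias

    # ReLU is 1-Lipschitz, so the composition stays 1-Lipschitz.
    net = nn.Sequential(SpectralLinear(2, 64), nn.ReLU(), SpectralLinear(64, 3))

    x = torch.randn(1, 2)
    top2 = net(x).topk(2, dim=1).values
    margin = (top2[:, 0] - top2[:, 1]).item()
    # For a 1-Lipschitz network, no l2 perturbation of norm below
    # margin / sqrt(2) can change the predicted class.
    print("certified l2 radius:", margin / 2 ** 0.5)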

In conclusion, the thesis provides a comprehensive overview of three main directions for achieving certifiable robustness of deep neural networks, representing a modest yet significant stride towards their application in safety-critical systems.