Thesis etd-12182023-123627
Thesis type
Doctorate
Author
ROSSOLINI, GIULIO
URN
etd-12182023-123627
Title
Towards Trustworthy AI: Understanding the Impact of AI Threats and Countermeasures
Scientific disciplinary sector
ING-INF/05
Course of study
Istituto di Tecnologie della Comunicazione, dell'Informazione e della Percezione - PHD IN EMERGING DIGITAL TECHNOLOGIES
Committee
Supervisor: Prof. BUTTAZZO, GIORGIO CARLO
Tutor: Prof. BIONDI, ALESSANDRO
Chair: Prof. CUCINOTTA, TOMMASO
Member: Prof. COCOCCIONI, MARCO
Member: Dr. PINTOR, MAURA
Keywords
- DNN Testing
- Adversarial Attacks
- Adversarial Defenses
- Real-World Robustness
Defense date
15/03/2024
Availability
Partial
Abstract
The rapid advancements in AI, particularly in deep neural networks (DNNs), have confronted the research community with complex safety and security challenges, which must be carefully addressed to ensure the correct integration of AI algorithms into human-centric systems.
AI threats can range from intentionally crafted samples, such as adversarial perturbations or real-world adversarial objects, to unexpected out-of-distribution samples. The presence of these threats raises numerous questions about the security vulnerabilities and safety requirements of the models and applications under analysis. Accordingly, it is crucial to thoroughly understand and design testing methodologies and mitigation strategies, taking into account the specific aspects and requirements of each application scenario.
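For illustration only, the sketch below shows the simplest form of an intentionally crafted sample: a one-step sign-gradient (FGSM-style) perturbation in PyTorch. This is a minimal, generic example and not a method from the thesis; `model`, `x`, and `y` are assumed to be a classifier, an input batch with pixels in [0, 1], and its labels.

```python
# Minimal FGSM-style sketch (hypothetical model and inputs, not the thesis's method).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Return x plus a one-step sign-gradient perturbation of budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clamp back to the valid image range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```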
This thesis delves into the domain of AI threats and countermeasures, with a specific focus on computer vision applications in safety-critical environments like cyber-physical systems, autonomous robots, and self-driving cars.
The main research areas explored in the thesis, within the context of trustworthy AI, include DNN testing and the design of novel real-world attacks and defenses in complex outdoor scenarios.
First, the thesis critically examines the landscape of DNN testing, with a particular focus on coverage criteria, a concept adapted from software engineering. In this context, we introduce a framework that uses coverage criteria to monitor the behavior of neural networks at run time, offering a novel methodological perspective on leveraging testing techniques to assess model behavior. Through an analysis of state-of-the-art coverage-testing approaches and the results obtained, the thesis also outlines directions for future research.
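To make the idea of coverage-based run-time monitoring concrete, here is a minimal neuron-coverage sketch built on PyTorch forward hooks. It is an assumed illustration of how such a monitor could look, not the framework proposed in the thesis; `CoverageMonitor`, the activation threshold, and the layer selection are all hypothetical choices.

```python
# Illustrative neuron-coverage monitor (hypothetical API, not the thesis framework).
import torch

class CoverageMonitor:
    """Tracks which units exceed an activation threshold during inference."""
    def __init__(self, model, layer_names, threshold=0.5):
        self.threshold = threshold
        self.seen = {}  # layer name -> boolean mask of units activated so far
        for name, module in model.named_modules():
            if name in layer_names:
                module.register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(module, inputs, output):
            # A unit counts as covered if any sample in the batch activates it.
            active = (output.detach() > self.threshold).flatten(1).any(0)
            prev = self.seen.get(name, torch.zeros_like(active))
            self.seen[name] = prev | active
        return hook

    def coverage(self):
        """Fraction of monitored units activated at least once."""
        total = sum(m.numel() for m in self.seen.values())
        hit = sum(int(m.sum()) for m in self.seen.values())
        return hit / max(total, 1)
```

At run time, a monitor of this kind could flag inputs that activate previously unseen units as potentially out-of-distribution.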
Then, in the realm of real-world adversarial attacks, the thesis reviews the literature on attack and defense strategies, highlighting gaps in the analysis of certain computer vision tasks and applications. The review also underscores an insufficient awareness of the practical implications of state-of-the-art defense mechanisms when applied in safety-critical scenarios.
Following these observations, the work focuses on developing real-world attacks against semantic segmentation tasks, providing a clear interpretation of the spatial robustness of DNNs.
The proposed attack methodology relies on novel optimization approaches and on the use of driving simulators. Subsequently, the thesis presents an in-depth study of novel interpretable defense mechanisms grounded in provable robustness analysis.
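For intuition, the sketch below outlines the generic structure of a patch-style attack loop against a segmentation model: a patch is pasted into each scene through a binary mask and optimized to raise the per-pixel loss. The whole setup (`optimize_patch`, patch size, mask, loss sign) is a hypothetical simplification and does not reproduce the optimization approaches or the simulator-based pipeline described in the thesis.

```python
# Generic patch-attack loop for semantic segmentation (simplified, assumed setup).
import torch
import torch.nn.functional as F

def optimize_patch(model, loader, mask, steps=200, lr=0.01):
    """Optimize a patch pasted into each scene via a float {0,1} mask."""
    patch = torch.rand(3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for x, y in loader:  # scenes (B,3,H,W) and per-pixel labels (B,H,W)
            # Composite the clamped patch into the scene where mask == 1.
            x_adv = x * (1 - mask) + patch.clamp(0, 1) * mask
            logits = model(x_adv)  # (B, num_classes, H, W)
            # Minimize the negative loss, i.e. maximize segmentation error.
            loss = -F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```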
In conclusion, this thesis examines AI threats from different perspectives, merging theoretical discussions with practical applications. It aims to expand and review the existing literature, stimulate further research, and enhance the understanding of AI safety and security threats.
Files
There is 1 file restricted at the author's request.