dc.contributor.advisor |
Nassar, Mohamed |
dc.contributor.author |
Hajj Ibrahim, Sara |
dc.date.accessioned |
2021-08-09T16:59:09Z |
dc.date.available |
2021-08-09T16:59:09Z |
dc.date.issued |
2021-08-09
dc.date.submitted |
2021-08-09
dc.identifier.uri |
http://hdl.handle.net/10938/22937 |
dc.description.abstract |
Deep learning is a type of machine learning that adopts a deep hierarchy of concepts. Deep learning classifiers link the most basic version of concepts at the input layer to the most abstract version at the output layer, also known as a class or label. However, once trained over a finite set of classes, a deep learning model has no way to say that a given input belongs to none of the classes and simply cannot be linked to any of them. Correctly invalidating the prediction of unrelated classes is a challenging problem that has been tackled in many ways in the literature.
Novelty detection gives deep learning the ability to output "do not know" for novel/unseen classes. Still, little attention has been given to the security aspects of novelty detection. In this thesis, we study in particular the case of abstraction-based novelty detection in deep learning, and we show that it is not robust against adversarial attacks.
We formulate three types of attacks against novelty detection: (1) passing a valid sample as invalid, (2) passing an invalid sample as valid, and (3) passing an adversarial sample as valid. We experiment with different optimisers for solving our formulated attacks (1) and (2) on multiple neural network architectures. For attack (3), we show the feasibility of an adversarial sample that fools the deep learning classifier into outputting a wrong class. We chain existing adversarial attacks with our proposed optimisation attack to bypass the novelty detection monitoring at the same time.
In other words, we show that we can break the security of novelty detection, and we call for further research on novelty detection from a defender's point of view. We adapt a suitable defense mechanism against such attacks and assess its performance.
The thesis suggests that more attention should be paid to making novelty detection systems secure against attacks, especially in critical decision-making systems based on artificial intelligence and machine learning, such as self-driving cars, automated medicine, and cybersecurity. To our knowledge, our work is the first to address the security limitations of novelty detection in deep learning. |
dc.language.iso |
en |
dc.subject |
Machine Learning |
dc.subject |
Deep Learning |
dc.subject |
Novelty Detection |
dc.subject |
Novelty Detection Security |
dc.subject |
Optimisation-based Attacks |
dc.subject |
Adversarial Attacks |
dc.subject |
Denoising Auto-Encoders |
dc.title |
Security of Abstraction Based Novelty Detection in Deep Learning |
dc.type |
Thesis |
dc.contributor.department |
Department of Computer Science |
dc.contributor.faculty |
Faculty of Arts and Sciences |
dc.contributor.institution |
American University of Beirut |
dc.contributor.commembers |
Elbassuoni, Shady |
dc.contributor.commembers |
Safa, Haidar |
dc.contributor.degree |
MS |
dc.contributor.AUBidnumber |
202024179 |