Date of Award
Fall 12-2022
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Computer Science
Committee Director
Cong Wang
Committee Director
Ravi Mukkamala
Committee Member
Rui Ning
Committee Member
Lusi Li
Committee Member
Chunsheng Xin
Abstract
With the explosive growth of images on the internet, image retrieval based on deep hashing has attracted significant attention from both the research and industry communities. Empowered by deep neural networks (DNNs), deep hashing enables fast and accurate image retrieval on large-scale data. However, like other deep learning techniques, deep hashing remains vulnerable to specially crafted inputs, called adversarial examples. By adding imperceptible perturbations to inputs, adversarial examples fool DNNs into making wrong decisions. The existence of adversarial examples not only raises security concerns for real-world deep learning applications, but also provides us with a technique to confront malicious applications.
In this dissertation, we investigate privacy and security concerns related to adversarial examples in deep hashing image retrieval systems. Starting with a privacy concern, we take the users' side to protect private information in images, which adversaries can extract by retrieving similar images from image retrieval systems. Existing image processing-based privacy-preserving methods suffer from a trade-off between efficacy and usability. We propose a method that introduces imperceptible adversarial perturbations into original images to prevent them from being retrieved. Users upload the protected adversarial images instead of the originals, preserving privacy while maintaining usability. We then shift to security concerns and act as attackers, proactively submitting adversarial images to retrieval systems. These adversarial examples are embedded close to specific targets, so that users' retrieval results contain our unrelated adversarial images; e.g., a user queries with a "Husky dog" image but retrieves adversarial "dog food" images in the result. A transferability-based attack is proposed for black-box models. We improve black-box transferability by using random noise as a proxy in optimization, achieving a state-of-the-art success rate. Finally, we take the retrieval system's side to mitigate the security concerns of adversarial attacks in deep hashing image retrieval. We propose a method that detects adversarial examples at inference time. By studying behaviors unique to adversarial examples in deep hashing image retrieval, we construct detection criteria from these behaviors. The proposed method detects most adversarial examples with minimal overhead.
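The core mechanism the abstract relies on, a small signed perturbation that shifts a hashing model's binary code while staying imperceptible, can be illustrated with a toy example. The sketch below is not the dissertation's actual attack or model: it stands in a linear projection followed by sign() for a deep hashing network and takes a single FGSM-style step (sign of the gradient, bounded in L-infinity norm) that pushes the input's continuous projection away from its original hash code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep hashing model: a 64-dim "image" is projected
# to 16 real values, then binarized with sign(). W is random for the demo.
W = rng.standard_normal((16, 64))

def hash_code(x):
    return np.sign(W @ x)

x = rng.standard_normal(64)        # the "original image"
original = hash_code(x)

# Untargeted FGSM-style step: the agreement between the original code and
# the continuous projection is  original . (W @ x); its gradient w.r.t. x
# is W.T @ original. Stepping against the sign of that gradient lowers the
# agreement, flipping low-margin hash bits, while each pixel moves by at
# most eps (the imperceptibility budget).
eps = 0.5
grad = W.T @ original
x_adv = x - eps * np.sign(grad)

flipped = int(np.sum(hash_code(x_adv) != original))
print(f"bits flipped: {flipped} / {len(original)}")
print(f"max per-pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

Real attacks on deep hashing iterate this step with a much smaller budget and a nonlinear network, but the trade-off is the same: the perturbation stays bounded per pixel while the binary code, and hence the retrieval result, changes.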
Rights
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
Copyright, 2023, by Yanru Xiao, All Rights Reserved.
DOI
10.25777/w13h-2w96
ISBN
9798371976369
Recommended Citation
Xiao, Yanru.
"Towards Privacy and Security Concerns of Adversarial Examples in Deep Hashing Image Retrieval"
(2022). Doctor of Philosophy (PhD), Dissertation, Computer Science, Old Dominion University, DOI: 10.25777/w13h-2w96
https://digitalcommons.odu.edu/computerscience_etds/137