Date of Award

5-12-2024

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical Engineering and Computer Science

Advisor(s)

Senem Velipasalar

Abstract

With the advancement of hardware and algorithms, Deep Neural Networks (DNNs) have become ubiquitous in various real-world applications, some of which are highly sensitive to security and privacy issues. These applications include perception systems in autonomous vehicles, indoor surveillance systems, and medical image processing. Recent years have seen an increased focus on understanding the vulnerabilities of DNNs and on developing various adversarial attack and defense methods. This thesis delves into the detection of adversarial examples and the use of adversarial examples for data privacy protection.

Adversarial examples pose significant risks to the deployment of DNN applications, prompting researchers to develop defense methods against such attacks. However, attack methodologies often outpace defensive strategies. One proposed solution is to detect and reject adversarial examples in real-world applications. This thesis first introduces two methods for detecting adversarial examples: the first detects examples generated within an L∞ budget by exploiting their texture characteristics, while the second targets autonomous driving scenarios by developing a novel distance metric.

In the autonomous driving scenario, when adversarial examples are detected, it may be necessary to hand control of the vehicle over to the driver. In this situation, it is critical to autonomously monitor the driver's behavior, i.e., whether they are distracted, to assess their readiness to take over. Motivated by this, this thesis points out issues with the evaluation approaches of some previous works and describes a method for monitoring driver behavior in naturalistic driving scenarios using images from a lower-resolution camera mounted on the forward windshield next to the rear-view mirror. In our setting, the camera does not directly face the driver but provides an oblique view, making driver head-pose estimation more challenging.

Furthermore, this thesis presents a novel approach that uses adversarial examples for data privacy. DNNs are extensively applied to real-world tasks in which privacy and data protection are critical, and unprotected image data can be exploited to infer personal or contextual information. Existing privacy-preservation methods, such as encryption, generate perturbed images that are unrecognizable even to humans. Adversarial attack approaches, in contrast, develop universal attacks that prohibit automated inference even for authorized stakeholders. This thesis presents a first-of-its-kind approach that tackles an unexplored, practical privacy-preservation use case by generating human-perceivable images that maintain accurate inference by an authorized model while evading unauthorized black-box models with similar or dissimilar objectives. This approach is referred to as E-MUSEUM (Exclusive Model authorization for USer data protection by Evading Unauthorized Models). We show that the generated images successfully maintain the accuracy of an authorized model while degrading the accuracy of unauthorized black-box models.
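For concreteness, the following is a minimal sketch of how the L∞-bounded adversarial examples targeted by the first detector are typically generated, using the standard projected gradient descent (PGD) attack; this illustrates the threat model, not the detection method itself, and the budget parameters (eps, alpha, steps) are illustrative assumptions rather than values from the thesis.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an adversarial example for image `x` (pixels in [0, 1]) within
    an L-infinity ball of radius `eps` via iterated signed-gradient ascent."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # loss the attacker increases
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # step up the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the L_inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # keep a valid image
    return x_adv.detach()
```

Because the perturbation is bounded coordinate-wise by eps, it tends to manifest as high-frequency texture rather than structural change, which is what makes texture-based detection of such examples plausible.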
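The abstract does not give the E-MUSEUM formulation itself, but the dual objective it describes (keep an authorized model accurate while evading unauthorized models) can be sketched as follows, under the simplifying assumption that a white-box surrogate stands in for the unauthorized black-box models; all function names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn.functional as F

def protect_image(x, y, authorized, surrogate, eps=8/255, alpha=1/255,
                  steps=40, lam=1.0):
    """Perturb `x` within an L-infinity ball so that the frozen `authorized`
    classifier still predicts label `y`, while a `surrogate` (standing in
    for unauthorized black-box models) is pushed away from `y`."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_p = torch.clamp(x + delta, 0.0, 1.0)
        keep = F.cross_entropy(authorized(x_p), y)   # stay correct for the authorized model
        evade = F.cross_entropy(surrogate(x_p), y)   # become wrong for the surrogate
        loss = keep - lam * evade                    # minimize keep, maximize evade
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= alpha * grad.sign()             # descend the combined loss
            delta.clamp_(-eps, eps)                  # respect the perceptibility budget
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```

The small L∞ budget is what keeps the protected image human-perceivable, in contrast to encryption-style methods that render the image unrecognizable.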

Access

Open Access
