Date of Award

Spring 5-23-2021

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical Engineering and Computer Science

Advisor(s)

Velipasalar, Senem

Keywords

adversarial attack, adversarial examples, feature squeeze, person re-identification

Subject Categories

Electrical and Computer Engineering | Engineering

Abstract

Person re-identification (ReID) is the task of retrieving the same person, across different camera views or in the same camera view captured at a different time, given a query person of interest. There has been great interest and significant progress in person ReID, which is important for security and wide-area surveillance applications as well as human-computer interaction systems. In order to continuously track targets across multiple cameras with disjoint views, it is essential to re-identify the same target across different cameras. This is a challenging task for several reasons, including changes in illumination and target appearance, and variations in camera viewpoint and camera intrinsic parameters. The brightness transfer function (BTF) was introduced for inter-camera color calibration and to improve the performance of person ReID approaches. In this dissertation, we first present a new method to better model the appearance variation across disjoint camera views. We propose building a codebook of BTFs, composed of the most representative BTFs for a camera pair. We also propose an ordering and trimming criterion, based on the occurrence percentage of codeword triplets, to avoid exhaustively using all combinations of codewords across color channels and to improve computational efficiency. Then, unlike most existing work, we focus on a crowd-sourcing scenario to find and follow person(s) of interest in the collected images/videos. We propose a novel approach combining R-CNN-based person detection with a GPU implementation of color histogram- and SURF-based re-identification. Moreover, GeoTags are extracted from the EXIF data of videos captured by smartphones and are displayed on a map together with the timestamps.
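To make the color-calibration step concrete, the following is a minimal sketch of how a single brightness transfer function could be estimated between two camera views by cumulative-histogram matching; the function name, bin count, and value range are illustrative assumptions rather than the dissertation's exact formulation.

```python
# Illustrative sketch (not the dissertation's exact code): estimate a brightness
# transfer function (BTF) between two cameras by cumulative-histogram matching.
import numpy as np

def estimate_btf(values_cam_a, values_cam_b, n_bins=256):
    """Map each brightness level observed in camera A to the level in camera B
    whose cumulative frequency matches it (applied per color channel)."""
    hist_a, _ = np.histogram(values_cam_a, bins=n_bins, range=(0, n_bins))
    hist_b, _ = np.histogram(values_cam_b, bins=n_bins, range=(0, n_bins))
    cdf_a = np.cumsum(hist_a) / max(hist_a.sum(), 1)
    cdf_b = np.cumsum(hist_b) / max(hist_b.sum(), 1)
    # For each level in camera A, pick the smallest level in camera B whose
    # cumulative frequency is at least as large (standard histogram matching).
    btf = np.searchsorted(cdf_b, cdf_a, side="left").clip(0, n_bins - 1)
    return btf  # btf[v] is the corresponding brightness value in camera B
```

A codebook in the spirit described above could then be formed by collecting many such per-pair BTFs and keeping only the most representative ones, for example the centers obtained by clustering them.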

With the recent advances in deep neural networks (DNNs), the state-of-the-art performance of person ReID has improved significantly. However, recent work in adversarial machine learning has shown the vulnerability of DNNs to adversarial examples: carefully crafted images that are similar to original/benign images but can deceive neural network models. Neural network-based ReID approaches inherit the vulnerabilities of DNNs. We present an effective and generalizable attack model that generates adversarial images of people and causes a very significant drop in the performance of existing state-of-the-art person re-identification models. The results demonstrate the extreme vulnerability of existing models to adversarial examples, and draw attention to the potential security risks this poses for video surveillance. Our proposed attack is developed by decreasing the dispersion of an internal feature map of a neural network. We compare our proposed attack with other state-of-the-art attack models on different person re-identification approaches, using four commonly used benchmark datasets. The experimental results show that our proposed attack outperforms the state-of-the-art attack models on the best-performing person re-identification approaches by a large margin, and produces the largest drop in mean average precision values.
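As a rough illustration of the dispersion-based idea, below is a minimal PyTorch sketch of an attack that iteratively perturbs the input to reduce the standard deviation of an intermediate feature map under an L-infinity budget; the truncated network `feature_extractor`, the choice of layer, and the step sizes are assumptions for illustration, not the dissertation's exact settings.

```python
# Illustrative sketch (assumed hyperparameters): reduce the dispersion (std) of
# an internal feature map with sign-gradient steps under an L-infinity budget.
import torch

def dispersion_reduction_attack(feature_extractor, image, eps=8/255, alpha=2/255, steps=40):
    """feature_extractor: network truncated at an internal layer, returning a
    feature map; image: batched tensor in [0, 1]."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = feature_extractor(adv).std()   # dispersion of the feature map
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()          # descend to shrink dispersion
            adv = image + (adv - image).clamp(-eps, eps) # project onto the eps-ball
            adv = adv.clamp(0, 1).detach()               # keep a valid image
    return adv
```

Because the objective depends only on an internal feature map rather than on task-specific labels, such an attack transfers across ReID models, which is consistent with the generalizability claimed above.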

We then propose a new method to effectively detect adversarial examples presented to a person ReID network. The proposed method utilizes parts-based feature squeezing, applying two types of squeezing to segmented body parts to better detect adversarial examples. We perform extensive experiments on three major datasets with different attacks, and compare the detection performance of the proposed body part-based approach with that of a method that is not parts-based. Experimental results show that the proposed method can effectively detect adversarial examples, and has the potential to avoid the significant decreases in person ReID performance caused by adversarial examples.
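The detection idea can be sketched as follows, assuming a hypothetical `reid_model` that returns an embedding, a precomputed body-part mask, and two common squeezers (bit-depth reduction and median smoothing); the helper names and threshold are illustrative and not the dissertation's exact procedure.

```python
# Illustrative sketch (assumed helpers and threshold): flag an input as
# adversarial if squeezing any body part shifts its ReID embedding too much.
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(img, bits=4):
    """First squeezer: quantize pixel values (img in [0, 1]) to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def is_adversarial(img, parts_mask, reid_model, threshold=1.25):
    """img: HxWx3 float array in [0, 1]; parts_mask: HxW integer part labels."""
    # Second squeezer: light median smoothing on the bit-depth-reduced image.
    squeezed = median_filter(reduce_bit_depth(img), size=(2, 2, 1))
    for part_id in np.unique(parts_mask):
        m = (parts_mask == part_id)[..., None]
        emb_orig = reid_model(np.where(m, img, 0.0))
        emb_sq = reid_model(np.where(m, squeezed, 0.0))
        # A large per-part embedding shift suggests an adversarial perturbation.
        if np.linalg.norm(emb_orig - emb_sq) > threshold:
            return True
    return False
```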

Access

Open Access
