Date of Award

8-22-2025

Date Published

September 2025

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical Engineering and Computer Science

Advisor(s)

Mustafa Gursoy

Second Advisor

Senem Velipasalar

Keywords

adversarial learning;federated learning;reinforcement learning;wireless networks

Abstract

Recent years have seen rapid advances in machine learning applications, such as deep reinforcement learning (DRL) for optimizing wireless resource allocation and federated learning (FL) for privacy-preserving, distributed model training. In these systems, machine learning agents learn and control resources at the edge, yet they remain significantly vulnerable to adversarial threats. In a DRL-based mobile communication scenario, an intelligent attacker can selectively jam specific channels and severely degrade victim users' performance. In an FL system, a privacy adversary who eavesdrops on shared gradient information can reconstruct users' private data. This dissertation studies both types of threats and explores corresponding defense strategies.

In DRL-based systems, intelligent jammers can strategically disrupt selected communication channels and significantly degrade performance, even with limited jamming resources. To mitigate these threats to wireless services, we analyze the vulnerabilities of DRL-based resource allocation agents and propose a Nash-equilibrium-based policy ensemble as a defense against potential jamming attacks. The proposed method outperforms both single-policy agents and agents with existing policy ensembles, in adversarial environments with jammers as well as in non-adversarial settings.

In FL, gradient information shared during training can leak sensitive user data, posing serious challenges to the security and privacy of machine learning-enabled wireless and edge systems. While optimization-based inference attacks often fail to make full use of the information carried by gradients, we develop maximum knowledge orthogonality reconstruction (MKOR), an analytical inference attack that, under the assumption of a malicious server, fully exploits the potential of privacy attacks. Our analysis reveals that inference attacks can reconstruct input data only from a batch whose size does not exceed the output dimension of the model. Given this observation, we propose a defense against arbitrary inference attacks that disconnects certain links between clients in decentralized FL (DFL). Specifically, we propose a cyclic topology design as an effective trade-off between training performance and robustness to an inference attacker, since potential attackers can then access only highly mixed, large batches.

In summary, deep learning agents are vulnerable to carefully crafted attacks, and developing robust solutions requires a deep understanding of these threats. This dissertation focuses on preserving performance under such adversarial conditions.
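
A note for readers: the following sketch illustrates one standard way a Nash-equilibrium-based policy ensemble can be instantiated, not the dissertation's exact formulation. The defender's mixing probabilities over a set of pretrained policies are computed as the maximin (mixed-strategy Nash equilibrium) solution of a zero-sum game against the jammer, solved as a linear program. The function name maximin_mixture and the payoff values are hypothetical, introduced only for illustration.

    import numpy as np
    from scipy.optimize import linprog

    def maximin_mixture(payoff):
        """Mixed-strategy maximin solution of a zero-sum game via LP.

        payoff[i, j] = defender's expected return when ensemble policy i
        faces jammer strategy j (entries are illustrative estimates).
        """
        m, n = payoff.shape
        # Variables: [p_1, ..., p_m, v]; minimize -v, i.e. maximize the
        # guaranteed value v of the defender's mixture.
        c = np.zeros(m + 1)
        c[-1] = -1.0
        # For every jammer strategy j: v - sum_i payoff[i, j] * p_i <= 0.
        A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
        b_ub = np.zeros(n)
        # Mixing probabilities sum to one; v is unconstrained.
        A_eq = np.append(np.ones(m), 0.0)[None, :]
        b_eq = [1.0]
        bounds = [(0.0, 1.0)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        return res.x[:m]  # NE mixing probabilities over the ensemble

    # Illustrative use: sample one pretrained policy per episode.
    payoff = np.array([[1.0, 0.2],
                       [0.3, 0.9]])
    probs = maximin_mixture(payoff)
    policy_index = np.random.default_rng(0).choice(len(probs), p=probs)

Because the defender randomizes according to the equilibrium mixture, no fixed jamming strategy can exploit a single predictable policy, which is the intuition behind ensemble defenses of this kind.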
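
The batch-size bound stated above can be made concrete with the standard fully connected layer argument; the notation below (W, b, delta_i) is ours, introduced for illustration. For a final layer with weights $W \in \mathbb{R}^{K \times d}$ and bias $b \in \mathbb{R}^{K}$ over a batch of size $B$, the shared gradients are

    \frac{\partial \mathcal{L}}{\partial W} = \frac{1}{B} \sum_{i=1}^{B} \delta_i x_i^{\top},
    \qquad
    \frac{\partial \mathcal{L}}{\partial b} = \frac{1}{B} \sum_{i=1}^{B} \delta_i,
    \qquad \text{where } \delta_i = \frac{\partial \mathcal{L}_i}{\partial z_i} \in \mathbb{R}^{K}.

Since the weight gradient is a sum of B rank-one terms, it carries at most min(B, K) independent directions; when B <= K the individual inputs x_i can generically be disentangled, while for B > K they are irrecoverably mixed, which matches the reconstruction bound in the abstract.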
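
The cyclic-topology defense can likewise be pictured with a minimal decentralized-FL mixing step, sketched below under simplifying assumptions: models are flat numpy arrays, aggregation is uniform averaging, and local training is omitted. The names ring_topology and dfl_round are hypothetical.

    import numpy as np

    def ring_topology(num_clients):
        """Cyclic (ring) mixing graph: each client exchanges models
        with its two ring neighbors only."""
        return {i: [(i - 1) % num_clients, (i + 1) % num_clients]
                for i in range(num_clients)}

    def dfl_round(models, topology):
        """One decentralized-FL mixing round: each client averages its
        own model with its neighbors' models (local SGD omitted)."""
        return [np.mean([models[i]] + [models[j] for j in topology[i]],
                        axis=0)
                for i in topology]

    # Illustrative use: after several rounds, each model blends updates
    # from many clients around the ring.
    models = [np.random.default_rng(i).normal(size=4) for i in range(6)]
    topology = ring_topology(len(models))
    for _ in range(3):
        models = dfl_round(models, topology)

After a few such rounds, the model observed on any single link is an average over many clients' updates, so an eavesdropper effectively sees only a large, highly mixed batch rather than any one client's gradient.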

Access

Open Access
