Date of Award
12-24-2025
Date Published
January 2026
Degree Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Electrical Engineering and Computer Science
Advisor(s)
Garrett Katz
Second Advisor
Jaime Banks
Keywords
Reinforcement Learning
Subject Categories
Computer Sciences | Physical Sciences and Mathematics
Abstract
Evolution-inspired algorithms have proven effective for complex optimization problems but suffer from computational inefficiency due to their reliance on random variation operators. This is problematic in domains where fitness evaluation depends on expensive procedures such as training a neural network or running a robot, either in simulation or on hardware. This dissertation presents novel approaches for evolutionary robotics that replace stochastic evolutionary operations with learned, adaptive strategies using reinforcement learning (RL), significantly improving search efficiency while maintaining population diversity.

The first contribution is a voxel-based evolutionary framework for generating adversarial objects that challenge robotic grasping systems. By evolving objects with controlled similarity metrics and graspability fitness functions, we systematically identify failure modes in traditional kinematics-based manipulation algorithms. Our experiments demonstrate that the object generator produces increasingly difficult test cases that expose weaknesses in an existing grasping algorithm.

The second major contribution is an RL-guided evolutionary algorithm (EA), in which an RL agent learns to generate offspring by observing parent genomes and population statistics. Using Simple Policy Optimization with a multi-component reward function balancing improvement, ranking, diversity, and fitness, the agent transforms evolutionary search from a purely stochastic process into a directed optimization approach. Empirical evaluation on four common EA benchmark functions (Sphere, Rastrigin, Ackley, Rosenbrock) shows superior convergence rates and solution quality compared to a standard evolutionary algorithm, while reducing wasted fitness evaluations.

The dissertation culminates in a unified adversarial co-learning framework in which an RL-based grasping agent and an RL-guided evolutionary object generator engage in a min-max game. Both systems optimize the same reward signal in opposite directions: the object generator maximizes the grasp failure rate, while the grasping agent minimizes it. This tightly coupled optimization creates an arms race that drives both systems toward greater capability, producing robust manipulation policies and comprehensive test suites simultaneously.

These contributions demonstrate that learning-based guidance can fundamentally improve the efficiency of evolutionary computation, on optimization benchmarks as well as on real-world robotic systems. The work establishes a foundation for future research in adaptive evolutionary algorithms and adversarial training paradigms.
Access
Open Access
Recommended Citation
Akshay, Unknown, "ADAPTIVE GENERATION IN EVOLUTIONARY ROBOTICS: FROM ADVERSARIAL OBJECTS TO GUIDED OPTIMIZATION" (2025). Dissertations - ALL. 2241.
https://surface.syr.edu/etd/2241
