FURI | Fall 2024

Synthesizing Interpretable Agents for Cybersecurity Contexts with Code Evolution of Augmenting Topologies


The objective is to create a model for evolving code that is more interpretable than conventional neural networks and can be applied to the same reinforcement learning tasks. The researchers have synthesized functions for test problems using basic integer operations, demonstrating parity with existing neural-net-based solutions on simple tasks at a small scale. Next steps include scaling to larger reinforcement learning tasks and adding capabilities such as external function calls and control flow; interfacing with external functions would make cybersecurity-related tasks tractable.
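To illustrate the general idea of evolving interpretable code from basic integer operations, the sketch below shows a minimal evolutionary loop that searches for a short sequence of arithmetic steps matching a target function. This is an assumed, simplified illustration, not the project's actual Code Evolution of Augmenting Topologies implementation; all names (such as `evolve`, `OPS`, and the example target `3x + 2`) are hypothetical placeholders.

```python
# Minimal sketch: evolve small programs built from basic integer operations
# so that they reproduce a target function on sample inputs. Illustrative only.
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_program(length=3):
    """A program is a readable list of (operation, constant) steps applied to a running value."""
    return [(random.choice(list(OPS)), random.randint(-5, 5)) for _ in range(length)]

def run(program, x):
    value = x
    for op, operand in program:
        value = OPS[op](value, operand)
    return value

def fitness(program, target, inputs):
    # Lower is better: total absolute error against the target function.
    return sum(abs(run(program, x) - target(x)) for x in inputs)

def mutate(program):
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(list(OPS)), random.randint(-5, 5))
    return child

def evolve(target, inputs, population_size=50, generations=200):
    population = [random_program() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, target, inputs))
        if fitness(population[0], target, inputs) == 0:
            break
        # Keep the best half and refill with mutated copies of survivors.
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return population[0]

if __name__ == "__main__":
    # Hypothetical test problem: synthesize a program equivalent to f(x) = 3x + 2.
    inputs = list(range(-10, 11))
    best = evolve(lambda x: 3 * x + 2, inputs)
    print("best program:", best, "error:", fitness(best, lambda x: 3 * x + 2, inputs))
```

Unlike a trained neural network's weight matrices, the evolved result here is a short, human-readable list of operations, which is the sense in which evolved code can be more interpretable than conventional neural nets.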

Student researcher

Alexander Ng

Computer science

Hometown: Los Altos, California, United States

Graduation date: Spring 2025