Alexander Ng
Computer science
Hometown: Los Altos, California, United States
Graduation date: Spring 2025
Additional details: Honors student
FURI | Fall 2024
Synthesizing Interpretable Agents for Cybersecurity Contexts with Code Evolution of Augmenting Topologies
The objective is to create a model for evolving code that is more interpretable than conventional neural networks and can be used for the same reinforcement learning tasks. Researchers have been able to synthesize functions for test problems involving basic integer operations, demonstrating small-scale parity with existing neural-network-based solutions on simple tasks. Next steps are moving on to larger reinforcement learning tasks and implementing additional capabilities such as external function calls and control flow. Interfacing with external functions would allow the system to tackle cybersecurity-related tasks.
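To make "synthesizing functions for test problems with basic integer operations" concrete, below is a minimal sketch of the general idea: a short integer-operation program evolved toward a target function by repeated mutation and selection. The operation set, target function, and simple (1+1) evolutionary loop are illustrative assumptions, not the project's actual method, but they show why the result is interpretable: the evolved artifact is ordinary code that a person can read and audit.

```rust
// A minimal, self-contained sketch of code evolution: a short integer-operation
// program is evolved toward a target function with a (1+1) mutation loop.
// The operation set, target, and parameters are illustrative assumptions,
// not the project's actual algorithm.

#[derive(Clone, Copy, Debug)]
enum Op {
    AddInput,      // acc += x
    MulInput,      // acc *= x
    AddConst(i64), // acc += c
}

// A candidate "program" is a short list of operations applied to an
// accumulator that starts at the input value.
type Program = Vec<Op>;

fn run(prog: &Program, x: i64) -> i64 {
    prog.iter().fold(x, |acc, op| match op {
        Op::AddInput => acc.wrapping_add(x),
        Op::MulInput => acc.wrapping_mul(x),
        Op::AddConst(c) => acc.wrapping_add(*c),
    })
}

// Fitness = total error against the target f(x) = 3x + 2 over a few test
// inputs; zero means the evolved program matches the target exactly.
fn fitness(prog: &Program) -> i64 {
    (0..10).map(|x| (run(prog, x) - (3 * x + 2)).abs()).sum()
}

// Tiny deterministic pseudo-random generator so the example needs no crates.
fn next_rand(state: &mut u64) -> u64 {
    *state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
    *state >> 33
}

fn random_op(state: &mut u64) -> Op {
    match next_rand(state) % 3 {
        0 => Op::AddInput,
        1 => Op::MulInput,
        _ => Op::AddConst((next_rand(state) % 9) as i64 - 4),
    }
}

fn main() {
    let mut state = 42u64;
    // Start from a random four-instruction program and repeatedly try point
    // mutations, keeping a child whenever it is at least as fit as the parent.
    let mut best: Program = (0..4).map(|_| random_op(&mut state)).collect();
    for _ in 0..10_000 {
        let mut child = best.clone();
        let i = (next_rand(&mut state) % child.len() as u64) as usize;
        child[i] = random_op(&mut state);
        if fitness(&child) <= fitness(&best) {
            best = child;
        }
        if fitness(&best) == 0 {
            break;
        }
    }
    println!("best program: {:?}, error: {}", best, fitness(&best));
}
```

Unlike the weights of a neural network, the winning candidate here is an explicit sequence of operations, which is the interpretability advantage the project aims for.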
Mentor: Stephanie Forrest
Featured project | Fall 2024
Alexander Ng is interested in computer security and using automation to make tasks easier. Combining these interests, the computer science senior is working on research to advance artificial intelligence, or AI, capabilities in security contexts. With faculty mentor Stephanie Forrest, a professor of computer science and engineering, Ng’s FURI research project involves creating an AI model that generates understandable code. Working under the theory that you can’t secure what you can’t understand, Ng seeks to create explainable models better suited for cybersecurity-related tasks.
What made you want to get involved in FURI, and why did you choose the project you’re working on?
I had a project idea I wanted to research, and this program gave me the opportunity to develop it. I’ve long been interested in evolutionary algorithms and have seen neuro-evolutionary techniques used to great effect on reinforcement learning tasks.
I’ve also been watching recent trends in artificial intelligence and was dismayed to see that while models balloon in scale and complexity, our understanding of them lags further and further behind; we do not understand the machines we’ve created. This limits AI’s applicability in another area of interest: cybersecurity.
In critical and security infrastructure, we must limit our use of AI-driven agents to those we actually understand. Bugs in code can be fixed, but bugs in a machine learning model with millions of parameters under fuzzy edge conditions are troublesome, to say the least. But what if our AI agents could be the code itself? This is the idea I’ve sought to explore: using genetic programming with neuro-evolutionary techniques to create more interpretable agents.
How will your research project impact the world?
We’re building this with applicability in mind. Plenty of academic projects with great ideas die out without ever being adopted into real use, often simply because of implementation choices. We chose a widely adopted, modern technology, WebAssembly, as our program format, and we’re developing the system in Rust, a performant and popular language.
Hopefully, we validate our hypothesis and can demonstrate the power behind our proposed model. If it works, then given a problem with a well-defined fitness function, we have the potential to generate agents to handle arbitrary problems across numerous domains. For instance, in the space of antivirus and intrusion detection, instead of purely hand-crafting filters or using an opaque neural net, one would be able to use this method to generate an agent that could scan one’s system for malicious patterns. Programmers could then fine-tune the model as code.
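As a purely hypothetical illustration of "a problem with a well-defined fitness function" in the intrusion-detection space, a fitness function might simply count how many labeled samples a candidate agent classifies correctly. The Agent trait, detect method, and byte-signature example below are placeholders introduced for this sketch, not the project's actual interface.

```rust
// Hypothetical sketch of a fitness function for evolving a malicious-pattern
// detector: score each candidate agent by how many labeled samples it
// classifies correctly. The `Agent` trait and `SignatureAgent` below are
// illustrative placeholders, not the project's real interface.

struct Sample {
    bytes: Vec<u8>,
    malicious: bool, // ground-truth label for this sample
}

trait Agent {
    // An evolved program that flags input bytes as malicious or benign.
    fn detect(&self, bytes: &[u8]) -> bool;
}

// Fitness = number of labeled samples classified correctly; an evolutionary
// loop would favor candidates with higher scores.
fn fitness<A: Agent>(agent: &A, samples: &[Sample]) -> usize {
    samples
        .iter()
        .filter(|s| agent.detect(&s.bytes) == s.malicious)
        .count()
}

// A toy hand-written agent standing in for an evolved one: it flags any
// sample containing the two-byte pattern 0xDE 0xAD.
struct SignatureAgent;

impl Agent for SignatureAgent {
    fn detect(&self, bytes: &[u8]) -> bool {
        bytes.windows(2).any(|w| w[0] == 0xDE && w[1] == 0xAD)
    }
}

fn main() {
    let samples = vec![
        Sample { bytes: vec![0x00, 0xDE, 0xAD, 0x01], malicious: true },
        Sample { bytes: vec![0x10, 0x20, 0x30], malicious: false },
    ];
    println!("fitness: {} / {}", fitness(&SignatureAgent, &samples), samples.len());
}
```

Because the winning agent would itself be code, the "fine-tune the model as code" step described above would amount to editing an ordinary program.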
Have there been any surprises in your research?
Exploratory research is full of surprises. You try something, and it doesn’t do as well as you’d hoped. You try something else that you’d written off, and it does way better. You don’t know until you try, and that’s the interesting part!
What is the best advice you’ve gotten from your faculty mentor?
Start small; fail early. Validate the ideas that work, filter out the ones that don’t and iterate.
I have a tendency to become overambitious and try to optimize prematurely, but in reality, research works best when experiments yield results quickly. The quick turnaround allows you to home in on a successful idea and know which direction to go. Knowing when to build things to last versus building them quickly is a great skill to learn.
Why should other students get involved in FURI?
If you’re interested in doing research, then join a lab and get your feet wet. Many professors — or even grad students — have ideas they don’t have time to commit to researching themselves but would be happy to mentor you in exploring. If you have a specific idea you can pitch, even better! It’s a great experience.