Isaac Liao

I am a first-year PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Albert Gu. I recently completed my Master's degree under Max Tegmark in the Tegmark AI Safety Group at MIT, researching mechanistic interpretability. I double majored in Computer Science and Physics at MIT, and as an undergraduate I did research on meta-learned optimization in the lab of Marin Soljačić. Within machine learning, my interests include the minimum description length principle, variational inference, hypernetworks, meta-learning, optimization, and sparsity.

In my leisure time, I enjoy skating, game AI programming, and music. I won the Battlecode AI Programming Competition in 2022. I was a silver medalist at the International Physics Olympiad (IPhO) in 2019 and received an honorable mention at IPhO 2018.

Email  /  Resume  /  Google Scholar  /  ORCID

profile photo

Machine Learning Research

Not All Language Model Features Are Linear
Joshua Engels, Isaac Liao, Eric J. Michaud, Wes Gurnee, Max Tegmark
arXiv, 2024

When large language models perform modular addition, they represent the numbers as points on a circle.
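A toy illustration of this picture (a hedged sketch with an arbitrary modulus, not the paper's probing code): place each residue on the unit circle, and modular addition becomes angle addition.

```python
import numpy as np

p = 7  # modulus; an illustrative choice, not from the paper

def embed(k: int) -> np.ndarray:
    """Place residue k on the unit circle as (cos, sin)."""
    theta = 2 * np.pi * k / p
    return np.array([np.cos(theta), np.sin(theta)])

def add_on_circle(a: int, b: int) -> int:
    """Recover (a + b) mod p from the angles of the circular embeddings."""
    angle = np.arctan2(*embed(a)[::-1]) + np.arctan2(*embed(b)[::-1])
    return round(angle * p / (2 * np.pi)) % p

# Angle addition reproduces modular addition for every pair of residues.
assert all(add_on_circle(a, b) == (a + b) % p for a in range(p) for b in range(p))
```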

Generating Interpretable Networks using Hypernetworks
Isaac Liao, Ziming Liu, Max Tegmark
arXiv, 2023

When we generatively model a neural network's weights, the generated weights tend to be arranged in structured, interpretable ways.
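A minimal sketch of the hypernetwork setup, with made-up layer sizes rather than anything from the paper: a small generator network maps a latent code to the full weight vector of a target MLP.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
LATENT_DIM, IN_DIM, HIDDEN, OUT_DIM = 16, 4, 8, 2
N_TARGET_PARAMS = IN_DIM * HIDDEN + HIDDEN + HIDDEN * OUT_DIM + OUT_DIM

class HyperNetwork(nn.Module):
    """Maps a latent code z to the weights of a small target MLP, then runs it."""
    def __init__(self):
        super().__init__()
        self.generator = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_TARGET_PARAMS)
        )

    def forward(self, z, x):
        flat = self.generator(z)
        # Slice the flat vector into the target network's weights and biases.
        i = 0
        w1 = flat[i:i + IN_DIM * HIDDEN].view(HIDDEN, IN_DIM); i += IN_DIM * HIDDEN
        b1 = flat[i:i + HIDDEN]; i += HIDDEN
        w2 = flat[i:i + HIDDEN * OUT_DIM].view(OUT_DIM, HIDDEN); i += HIDDEN * OUT_DIM
        b2 = flat[i:i + OUT_DIM]
        h = torch.relu(x @ w1.T + b1)
        return h @ w2.T + b2

hyper = HyperNetwork()
z = torch.randn(LATENT_DIM)
x = torch.randn(5, IN_DIM)
y = hyper(z, x)  # predictions made with the generated weights; shape (5, OUT_DIM)
```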

Learning to Optimize Quasi-Newton Methods
Isaac Liao, Rumen R. Dangovski, Jakob N. Foerster, Marin Soljačić
TMLR, 2023

We feed gradients into a linear neural network that outputs optimization steps, and we train this network to optimize well while the optimization itself is running.
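A rough sketch of the idea on a toy quadratic, not the paper's actual algorithm: a learnable linear map turns gradients into steps, and that map is itself trained online on the optimizee's loss.

```python
import torch

dim = 10
w = torch.randn(dim, requires_grad=True)   # optimizee parameters
P = torch.eye(dim, requires_grad=True)     # learned linear map (a crude "inverse Hessian")
meta_opt = torch.optim.Adam([P], lr=1e-2)

def loss_fn(w):
    # Toy quadratic objective standing in for the real task.
    A = torch.diag(torch.linspace(1.0, 10.0, dim))
    return 0.5 * w @ A @ w

for step in range(100):
    loss = loss_fn(w)
    (grad,) = torch.autograd.grad(loss, w, create_graph=True)
    step_vec = P @ grad                     # linear network: gradient in, step out
    meta_loss = loss_fn(w - step_vec)       # meta-objective: loss after the proposed step
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()                         # update P during the optimization
    with torch.no_grad():
        w -= P @ grad                       # apply the step to the optimizee
```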

Opening the AI Black Box: Program Synthesis via Mechanistic Interpretability
Eric J. Michaud, Isaac Liao, Vedang Lad, Ziming Liu, Anish Mudide, Chloe Loughridge, Zifan Carl Guo, Tara Rezaei Kheirkhah, Mateja Vukelić, Max Tegmark
arXiv, 2024

We automatically convert RNNs into equivalent, interpretable Python code for model verification.

Research-like Class Projects

Bayesian Recommender Systems
Isaac Liao
6.7830 Bayesian Modeling and Inference

We improve targeted movie recommendation accuracy by 2% by adding Bayesian uncertainty estimates to SVD-based large-scale matrix completion algorithms.
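A hedged sketch of the general approach on synthetic data, not the project's code: probabilistic matrix factorization, i.e. SVD-style completion where Gaussian priors on the latent factors supply the Bayesian regularization (dimensions and hyperparameters below are made up).

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, rank = 50, 40, 5
sigma, sigma_prior = 0.5, 1.0

# Synthetic ratings with 80% of entries missing.
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_movies, rank))
R = U_true @ V_true.T + sigma * rng.normal(size=(n_users, n_movies))
mask = rng.random((n_users, n_movies)) < 0.2

U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_movies, rank))
lam = (sigma / sigma_prior) ** 2  # prior strength from the noise/prior variances

for _ in range(20):  # MAP estimation by alternating ridge regressions
    for i in range(n_users):
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank), Vi.T @ R[i, mask[i]])
    for j in range(n_movies):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank), Uj.T @ R[mask[:, j], j])

rmse = np.sqrt(np.mean((U @ V.T - U_true @ V_true.T)[~mask] ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```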

Parameter-Efficient Approximation by Exploitation of Sparsity
Isaac Liao
6.7910 Statistical Learning Theory

We construct a sparse neural network that can imitate almost any other neural network architecture with roughly the same number of parameters.

Differential Entropy Codes for Trained Image Compression
Isaac Liao
6.819 Advances in Computer Vision

We rederive much of the framework behind VAEs and Bayesian neural networks through the lens of image compression.
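A rough sketch of that viewpoint with made-up numbers rather than anything from the project: a VAE's training objective reads as a total code length, rate (KL to the prior) plus distortion (reconstruction cost), both measured in bits.

```python
import numpy as np

# Illustrative values for a diagonal-Gaussian posterior q(z|x) vs. a standard-normal prior p(z).
mu = np.array([0.3, -1.2, 0.7])
log_var = np.array([-0.5, 0.1, -1.0])

# Rate: KL(q || p) in nats, converted to bits (the cost of encoding the latent z).
kl_nats = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
rate_bits = kl_nats / np.log(2)

# Distortion: negative log-likelihood of the pixels under a stand-in Gaussian decoder.
x = np.array([0.2, 0.8, 0.5])
x_hat = np.array([0.25, 0.7, 0.55])
sigma = 0.1
nll_nats = 0.5 * np.sum((x - x_hat) ** 2 / sigma**2 + np.log(2 * np.pi * sigma**2))
distortion_bits = nll_nats / np.log(2)

print(f"total code length (bits): {rate_bits + distortion_bits:.2f}")
```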

A Perturbative Approach to Random Matrix Spectra
Isaac Liao
8.06 Quantum Physics III

We rederive the eigenvalue spectrum of random (Gaussian) Hermitian matrices using a perturbative approach.
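A quick numerical sanity check of the result (an illustrative sketch, not the perturbative derivation itself): eigenvalues of a large Gaussian Hermitian matrix follow the Wigner semicircle law.

```python
import numpy as np

# Sample a Gaussian Hermitian (GUE-style) matrix and compare its rescaled eigenvalue
# histogram to the Wigner semicircle density sqrt(4 - x^2) / (2*pi); sizes are arbitrary.
rng = np.random.default_rng(0)
n = 1000
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                    # Hermitian matrix
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)   # rescale so the bulk lies in [-2, 2]

hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print("max deviation from semicircle:", np.abs(hist - semicircle).max())
```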

Education Research

Utility Teaching Assistant for MIT 8.01 Classical Mechanics I

We use large language models to generate physics problems to help teach ~700 students. This work led to the publication "Streamlining Physics Problem Generation to Support Physics Teachers in Using Generative Artificial Intelligence" (Shams El-Adawy, Isaac Liao, Vedang Lad, Mohamed Abdelhafez, Peter Dourmashkin) in The Physics Teacher, 2024.

Miscellaneous

Swarm Intelligence for MIT Battlecode AI Programming Competition

I placed seventh worldwide in the Battlecode 2021 Game AI programming competition, and this is my strategy report.


This website was produced from a template made by Jon Barron.