Isaac Liao
I am a first-year PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Albert Gu. I recently completed my Master's degree under Max Tegmark in the Tegmark AI Safety Group at MIT, where I researched mechanistic interpretability. I double majored in Computer Science and Physics at MIT and did undergraduate research on meta-learned optimization in the lab of Marin Soljačić. Within machine learning, my interests include the minimum description length principle, variational inference, hypernetworks, meta-learning, optimization, and sparsity.
In my leisure time, I enjoy skating, game AI programming, and music. I won the Battlecode AI Programming Competition in 2022, earned a silver medal at the International Physics Olympiad (IPhO) in 2019, and received an honorable mention at IPhO 2018.
Email / Resume / Google Scholar / ORCID
Machine Learning Research
Not All Language Model Features Are Linear
Joshua Engels,
Isaac Liao,
Eric J. Michaud,
Wes Gurnee,
Max Tegmark
arXiv, 2024
When large language models do modular addition, the numbers are stored in a circle.
Generating Interpretable Networks using Hypernetworks
Isaac Liao,
Ziming Liu,
Max Tegmark
arXiv, 2023
When we generatively model a neural network's weights, we tend to generate weights that are arranged in interpretable, structured ways.
Learning to Optimize Quasi-Newton Methods
Isaac Liao,
Rumen R. Dangovski,
Jakob N. Foerster,
Marin Soljačić
TMLR, 2023
We can feed gradients into a linear neural network, get an optimization step as output, and train the network to optimize while the optimization itself is running.
Opening the AI Black Box: Program Synthesis via Mechanistic Interpretability
Eric J. Michaud,
Isaac Liao,
Vedang Lad,
Ziming Liu,
Anish Mudide,
Chloe Loughridge,
Zifan Carl Guo,
Tara Rezaei Kheirkhah,
Mateja Vukelić,
Max Tegmark
arXiv, 2024
We automatically convert RNNs into equivalent, interpretable Python code for model verification.
Research-like Class Projects