Aaron Mueller

Zuckerman postdoctoral fellow working with Yonatan Belinkov and David Bau on interpretability and robustness in language models. Incoming Assistant Professor at Boston University Computer Science (Fall 2025).

Email: λ@northeastern.edu, where λ=aa.mueller

About

I am an incoming Assistant Professor at Boston University Computer Science, starting in Fall 2025. I will be hiring PhD students to work on NLP, interpretability, evaluation, and efficient language modeling. Apply to BU CS by December 15 if you'd like to work with me! See the Prospective Students page for more information.

I am interested in evaluating and improving the robustness of NLP systems. My work spans causal and mechanistic interpretability methods; evaluations of language models inspired by linguistic principles and findings in cognitive science; and building more sample-efficient language models.

I completed my Ph.D. in Computer Science at the Center for Language and Speech Processing at Johns Hopkins University under the supervision of Tal Linzen and Mark Dredze. My dissertation analyzed the behaviors and mechanisms underlying emergent syntactic abilities in neural language models. My Ph.D. studies were supported by a National Science Foundation Graduate Research Fellowship.

I completed my B.S. in Computer Science and B.S. in Linguistics at the University of Kentucky, where I was a Gaines Fellow and Patterson Scholar. My thesis, which focused on neural machine translation for low-resource French dialects, was advised by Ramakanth Kavuluru and Mark Richard Lauersdorf.


News

2024/07: New preprint! Counterfactuals are everywhere in mech interp, but they have key issues that will bias our results if we're not careful.

2024/07: New preprint! NNsight and NDIF are tools for democratizing access to and control over the internals of large foundation models.

2024/07: New preprint on the benefits of human-scale language modeling.

2024/07: Invited talks at Saarland University and EPFL.

2024/06: Invited talk at Maastricht University.

2024/06: Presented a paper at NAACL.

2024/04: Invited talk at UCSB.

2024/03: New preprint! We propose sparse feature circuits to discover and edit mechanisms of LM behavior.

2024/03: Invited talk at Nokia Bell Labs.

2024/02: Invited talks at Brown University and the University of Pittsburgh.

2024/01: Our paper on function vectors was accepted to ICLR.

2023/12: The Inverse Scaling Prize was featured in TMLR.

2023/11: New preprint: in-context learning yields different behaviors on in-distribution (ID) vs. out-of-distribution (OOD) examples.

2023/07: Our paper received an Outstanding Paper Award.

2023/07: 4 papers at ACL. See you in Toronto!

2023/05: The BabyLM Challenge was featured in the New York Times.

2023/01: Organizing the BabyLM Challenge.

2022/12: Invited talks at Bar-Ilan University and the Technion.

2022/12: Presented a paper at CoNLL.