I am a Zuckerman postdoctoral fellow working with Yonatan Belinkov and David Bau on interpretability and robustness in (large) language models.
Email: λ@northeastern.edu, where λ=aa.mueller
I am interested in evaluating and improving the robustness of NLP systems. My work spans causal, behavioral, and mechanistic interpretability methods; targeted model editing; and evaluations of the linguistic abilities and inductive biases of pre-trained language models.
I completed my Ph.D. in Computer Science at the Center for Language and Speech Processing at Johns Hopkins University under the supervision of Tal Linzen and Mark Dredze. My dissertation analyzed the behaviors and mechanisms underlying emergent syntactic abilities in neural language models. My Ph.D. studies were supported by a National Science Foundation Graduate Research Fellowship.
I completed my B.S. in Computer Science and B.S. in Linguistics at the University of Kentucky, where I was a Gaines Fellow and Patterson Scholar. My thesis, which focused on neural machine translation for low-resource French dialects, was advised by Ramakanth Kavuluru and Mark Richard Lauersdorf.
New preprint: in-context learning (ICL) yields different behaviors on in-distribution vs. out-of-distribution examples
New preprint: my first with David Bau's group!
Our paper received an outstanding paper award!
Four papers at ACL 2023. See you in Toronto!
The BabyLM Challenge was featured in the New York Times
Organizing the BabyLM Challenge
Invited talks at Bar-Ilan University and the Technion
Presented a paper at CoNLL
Interned at Meta Research in Menlo Park
Presented two papers at ACL 2022 (one in the main conference, one in Findings)
Invited talk at the National Research Council of Canada