I am a 4th year PhD student at MIT CSAIL advised by Jacob Andreas and Julie Shah. I’ve spent wonderful summers at the Boston Dynamics AI Institute, MIT-IBM Watson AI Lab, Facebook AI Research (FAIR), and before grad school, two years as an AI Resident at Microsoft Research. I did my undergrad at Yale, where I got my start in research with Brian Scassellati and read a lot of dead philosophers.
I’m interested in building agents that learn representations from rich human knowledge, whether directly (e.g. from users) or through priors (e.g. from LMs). Currently, I’m thinking a lot about how to use pretrained models together with human feedback to interactively learn aligned representations and rewards.
A history buff at heart, I care deeply about working with non-academic communities to create safe, ethical, and equitable AI. I currently serve as a Special Government Employee for the Defense Innovation Unit (DIU). In a previous life, I worked at the White House Office of Science and Technology Policy (OSTP), National Institute of Standards and Technology (NIST), and Schmidt Futures. I also serve on the advisory board of the Yale Jackson School of Global Affairs, where I co-teach a course on AI for policymakers.
I love being outdoors, even in the brutal Boston winters. A current goal is to run a sub-3:00 marathon (this is how I’m doing). Reach out to chat about research, policy, or running! Preferred subject line: Your cat is dope.
email | cv | google scholar | twitter | linkedin
Ph.D. Computer Science, 2023 -
Massachusetts Institute of Technology
M.S. Computer Science, 2023
Massachusetts Institute of Technology
B.S. Cognitive Science, 2018
Yale University
B.A. Global Affairs, 2018
Yale University
[Mar 2024] Attending HRI! Our papers Aligning Human and Robot Representations and Preference-Conditioned Language-Guided Abstraction will be presented at the main conference.
[Feb 2024] I am giving talks at the MILA Robot Learning Seminar and UC Berkeley.
[Feb 2024] I will be visiting Constellation, an AI safety research center, for two weeks!
[Jan 2024] Our paper Learning with Language-Guided State Abstractions was accepted to ICLR 2024.
[Dec 2023] Attending NeurIPS! I’m presenting Human-Guided Complexity-Controlled Abstractions at the main conference and co-organizing the Goal-Conditioned Reinforcement Learning Workshop.
[Nov 2023] Our paper Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback was accepted to TMLR.