Hi! I’m Albert.

I’m a computational linguistics PhD and philosophy MA from Brown University, where I was advised by Prof. Ellie Pavlick. Quick update: I’m now a senior research scientist at Google DeepMind, where I’m one of the main RL ICs on Gemini RL. I apologize that I probably won’t be able to keep this website up to date.

I used to study representations in democracies (almost did a PhD in political science), but then I did a PhD on representations of language and cognition, focusing on two directions:

  1. Instruction-tuned models. I first- or co-authored papers introducing several large language models (T0, Flan-T5/PaLM, and BLOOM), with a focus on zero-shot generalization to unseen tasks that goes beyond statistical pattern matching.

  2. Finding where in pretraining and instruction-tuning corpora models acquire zero-shot and few-shot abilities, and understanding how models generalize at test time and match human behaviors in some profound ways (e.g., Webson et al., 2023).

Anonymous Feedback

Did I do something inconsiderate, say something inappropriate, or make a mistake, big or small? Or did I do something good that I should amplify in the future? I would be very grateful if you could let me know through this anonymous form.


With my beloved PhD advisor, Ellie Pavlick, after my thesis defense.

With my beloved students, Qinan Yu and Alyssa Loo, after their first co-first-authored paper. Qinan was accepted into Stanford’s PhD program, and Alyssa was hired by Google DeepMind (Blueshift team).