I’m a PhD student in the Stanford NLP Group, advised by Dan Jurafsky. My research interests are broadly in model interpretability, fairness, and evaluation. In particular, my work studies the theoretical limitations of methods used to evaluate NLP models and the practical consequences these limitations may have. Previously, I received a B.Sc. and M.Sc. in Computer Science from the University of Toronto, where I was a BMO National Scholar and John H. Moss Scholar. I’ve also worked at Google AI and collaborated with members of Facebook AI Research. In 2019, I received a Best Paper award at the ACL Workshop on Representation Learning for NLP (RepL4NLP).