Each year, PhD students from around the world apply for the Facebook Fellowship, a program designed to encourage and support promising doctoral students who are engaged in innovative and relevant research in areas related to computer science and engineering. Fellowship recipients receive tuition funding for up to two years to conduct their research at their respective universities, independently of Facebook. To learn about award details, eligibility, and more, visit the program page below.
As a continuation of our Fellowship spotlight series, we’re highlighting 2020 Facebook Fellow Xinyun Chen.
Xinyun is a PhD student at UC Berkeley working with Professor Dawn Song and is expected to graduate in 2022. Her research explores the intersection of deep learning, programming languages, and security, with a focus on neural program synthesis and adversarial machine learning (ML).
It was a research opportunity at the National Institute of Informatics in Japan that inspired Xinyun Chen to explore ML. There, she designed and implemented an object detection system for drones, which was how she developed a passion for deep learning — a passion that places her at the forefront of deep learning research today.
Now, as a PhD student at UC Berkeley, “[m]y research addresses the grand challenges of increasing the accessibility of programming to general users, and enhancing the security and trustworthiness of ML models,” she says. “It is a complicated process, teaching a computer to think.”
Standing at the cutting edge of AI research in neural program synthesis, Xinyun has developed deep learning techniques to synthesize accurate and complex programs. “I have shown that our approaches could automatically generate programs from natural language descriptions, test cases, etc.,” she says. Her goal for future research is not only to develop new deep learning techniques that improve performance on program synthesis, but also to draw inspiration from program synthesis techniques to achieve better generalization across a broad range of tasks.
“In other words, I am not creating AI that will take people’s jobs,” she says with a laugh. “Machines still have a long way to go. They are still very basic, and it is a very step-by-step process.”
But Xinyun’s work is advancing her field year after year, exploring how equipping a neural network with a symbolic module enables it to synthesize programs for a variety of tasks. So far, she reports, “our neural-symbolic model is more capable of compositional reasoning and performs better under certain distribution shifts.” This is promising, especially considering that she is working to develop neural networks that are more robust than existing models while also probing the vulnerabilities of those models as part of her research on adversarial ML.
The future of her research lies in designing better pretraining and search techniques for program synthesis. Her work currently demonstrates promising performance over strong baselines, and she is excited to continue it at top AI research institutes.
To learn more about Xinyun Chen, visit her Fellowship profile.