On February 24, Facebook launched a request for proposals (RFP) on sample-efficient sequential Bayesian decision making, which closes on April 21. With this RFP, the Facebook Core Data Science (CDS) team hopes to deepen its ties to the academic research community by seeking out innovative ideas and applications of Bayesian optimization that further advance the field. To provide an inside look from the team behind the RFP, we reached out to Eytan Bakshy and Max Balandat, who are leading the effort within CDS.
Bakshy leads the Adaptive Experimentation team, which seeks to improve the throughput of experimentation with the help of machine learning and statistics. Balandat supports the team’s efforts on modeling and optimization, with a primary focus on probabilistic models and Bayesian optimization. In this Q&A, Bakshy and Balandat contextualize the RFP by sharing more about how their team’s work relates to the areas of interest for the call.
Q: What’s the goal of this RFP?
A: Primarily, we are keen to learn more about all the great research going on in this area. At the same time, we are able to share a number of really interesting real-world use cases that we hope can inspire additional applied research and increase interest and research activity in sample-efficient sequential Bayesian decision making. Lastly, we aim to further strengthen our ties to academia and our collaborations with academics who are at the forefront of this area.
We are both excited to dive in and learn about creative applications and approaches to Bayesian optimization that researchers come up with in their proposals.
Q: What inspired you to launch this RFP?
A: We publish quite a bit in top-tier AI/ML venues, and all our papers are informed by very practical problems we face every day in our work. The need to explore large design spaces via experiments with a limited budget is widespread across Facebook, Instagram, and Facebook Reality Labs. Much of our team’s work focuses on applied problems that support the company, alongside use-inspired basic research, but it is clear that there are plenty of ideas out there that can advance the area of sample-efficient sequential decision making, including Bayesian optimization and related techniques.
In academia, it can sometimes be challenging to understand what exactly the most relevant and impactful “real-world” problems are. Conversely, academics may have an easier time taking a step back, looking at the bigger picture, and doing more exploratory research. With this RFP, we hope to help bridge this gap and foster increased collaboration and cross-pollination between industry and academia.
Q: What is Bayesian optimization and how is it applied at Facebook?
A: Bayesian optimization is a set of methodologies for exploring large design spaces on a limited budget. While Bayesian optimization is frequently used for hyperparameter optimization in machine learning (AutoML), our team’s work was originally motivated by the use of online experiments (A/B tests) to optimize software and recommender systems.
Since then, the scope of applications for Bayesian optimization has expanded tremendously, ranging from the design of next-generation AR/VR hardware, to bridging the gap between simulations and real-world experiments, to efforts to provide affordable connectivity in developing countries.
The main idea behind Bayesian optimization is to fit a probabilistic surrogate model to the “black-box” function one is trying to optimize, and then use this model to decide at which parameters to evaluate the function next. Doing so allows for a principled way of trading off exploration (reducing uncertainty about the function) against exploitation (focusing on promising regions of the parameter space). As described earlier, we apply this approach to a large variety of problems in different domains at Facebook.
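To make this loop concrete, here is a minimal sketch in plain NumPy on a hypothetical 1-D toy problem (not production code): fit a Gaussian process surrogate to a few observations of a black-box function, then maximize an upper-confidence-bound acquisition over a grid to pick the next evaluation point. The kernel, lengthscale, and acquisition parameter below are illustrative choices.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Gaussian process posterior mean and std. dev. at the test points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.clip(var, 0.0, None))

def black_box(x):
    # Stand-in for an expensive experiment; unknown to the optimizer.
    return -(x - 0.7) ** 2

# A handful of observed (parameter, outcome) pairs.
x_train = np.array([0.1, 0.4, 0.9])
y_train = black_box(x_train)

# Surrogate model over a candidate grid.
grid = np.linspace(0.0, 1.0, 201)
mean, std = gp_posterior(x_train, y_train, grid)

# Upper-confidence-bound acquisition: high where the model predicts good
# outcomes (exploitation) or is still uncertain (exploration).
beta = 2.0
ucb = mean + beta * std
x_next = grid[np.argmax(ucb)]  # parameter to evaluate next
```

In a real sequential loop, one would evaluate the black box at `x_next`, append the result to the training data, and repeat until the evaluation budget is exhausted.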
Q: What is BoTorch, and how does it relate to the RFP?
A: The Adaptive Experimentation team has been investing in methodological development and tooling for Bayesian optimization for over five years. A few years ago, we found that our tooling at the time was slowing down researchers’ ability to generate new ideas and engineers’ ability to scale out Bayesian optimization use cases.
Related reading: BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization
To address these problems, we developed BoTorch, a framework for Bayesian optimization research, and Ax, a robust platform for adaptive experimentation. BoTorch follows the same modular design philosophy as PyTorch, which makes it very easy for users to swap out or rearrange individual components in order to customize all aspects of their algorithm, thereby empowering researchers to do state-of-the-art research on modern Bayesian optimization methods. By exploiting modern parallel computing paradigms on both CPUs and GPUs, it is also fast.
BoTorch has really changed the way we approach Bayesian optimization research and has accelerated our ability to tackle new problems. With the RFP, we hope to attract more widespread interest in this area and raise awareness of our open source tools.
Q: Where can people stay updated and learn more?
A: We actively engage with researchers on Twitter, so follow @eytan and @maxbalandat for the latest research, and always feel free to reach out to us via Twitter, email, or GitHub Issues if you have any questions or ideas.
You can find the latest and greatest of what we are working on in our open source projects, BoTorch and Ax. It also helps to keep an eye out for our papers in machine learning conferences, such as NeurIPS, ICML, and AISTATS.
Applications for the RFP on sample-efficient sequential Bayesian decision making close on April 21, 2021, and winners will be announced the following month. To receive updates about new research award opportunities and deadline notifications, subscribe to our RFP email list.