Using the recently deployed Facebook Blood Donation tool, we conduct the first large-scale algorithmic matching of blood donors with donation opportunities. In both simulations and real experiments we match potential donors with opportunities, guided by a machine learning model trained on prior observations of donor behavior.
Social comparison is a common focus in discussions of online social media use, and its frequency, causes, and outcomes may differ across countries and cultures. To understand how these differences shape experiences of social comparison on Facebook, we paired a survey of 37,729 people across 18 countries with respondents’ activity on Facebook.
The Facebook company is partnering with academic institutions to support COVID-19 research and to help inform public health decisions. Currently, we are inviting Facebook app users in the United States to take a survey collected by faculty at Carnegie Mellon University (CMU) Delphi Research Center, and we are inviting Facebook app users in more than 200 countries or territories globally to take a survey collected by faculty at the University of Maryland (UMD) Joint Program in Survey Methodology.
In this paper, we present CLARA (Confidence of Labels and Raters), a system developed and deployed at Facebook for aggregating reviewer decisions and estimating their uncertainty. We perform extensive validations and describe the deployment of CLARA for measuring the base rate of policy violations, quantifying reviewers’ performance, and improving their efficiency.
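The abstract does not specify CLARA's aggregation model, but the core idea of combining reviewer decisions while accounting for reviewer reliability can be illustrated with a simple log-odds vote. This is a hedged sketch only (the item names, reviewer IDs, and accuracy values are hypothetical, and the real system estimates reliability rather than assuming it):

```python
import math
from collections import defaultdict

def aggregate_labels(reviews, accuracy):
    """Combine binary reviewer decisions into a posterior per item.

    reviews:  list of (item_id, reviewer_id, label) with label in {0, 1}
    accuracy: dict reviewer_id -> assumed probability the reviewer is correct
    Returns:  dict item_id -> P(label == 1 | reviews), assuming a uniform
              prior and conditionally independent reviewers.
    """
    log_odds = defaultdict(float)  # log P(1)/P(0); 0 means a uniform prior
    for item, reviewer, label in reviews:
        p = accuracy[reviewer]
        weight = math.log(p / (1.0 - p))  # more reliable reviewers count more
        log_odds[item] += weight if label == 1 else -weight
    return {item: 1.0 / (1.0 + math.exp(-lo)) for item, lo in log_odds.items()}

# Hypothetical data: three reviewers with different assumed accuracies.
reviews = [
    ("post_1", "r1", 1), ("post_1", "r2", 1), ("post_1", "r3", 0),
    ("post_2", "r1", 0), ("post_2", "r2", 0),
]
accuracy = {"r1": 0.9, "r2": 0.8, "r3": 0.6}
posteriors = aggregate_labels(reviews, accuracy)
```

A disagreement from a weak reviewer barely moves the posterior, while agreement between strong reviewers yields a confident estimate; systems like CLARA additionally learn reviewer reliability jointly with the labels.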
We study the minimum bottleneck generalized matching problem, a problem arising in statistical analysis that involves partitioning a population into blocks in order to carry out generalizable statistical analyses of randomized experiments.
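To make the blocking objective concrete, here is a hedged one-dimensional special case (not the paper's algorithm, which handles general block structures and multivariate covariates): pairing units on a single covariate so that the largest within-pair difference, the bottleneck, is minimized, which sorting and pairing adjacent units achieves.

```python
def bottleneck_pairs(values):
    """Pair 2n units into n blocks of two, minimizing the largest
    within-pair covariate difference. For one scalar covariate, sorting
    and pairing adjacent units attains the optimal bottleneck; the
    general problem in the paper requires graph matching techniques.

    values: list of scalar covariates (even length).
    Returns: (pairs, bottleneck), where pairs holds index pairs into
             the original list.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    pairs = [(order[i], order[i + 1]) for i in range(0, len(order), 2)]
    bottleneck = max(abs(values[a] - values[b]) for a, b in pairs)
    return pairs, bottleneck

# Hypothetical covariate: participant ages.
ages = [34, 61, 29, 58, 33, 65]
pairs, worst_gap = bottleneck_pairs(ages)
```

Within each resulting block, treatment can then be randomized, so comparisons are made between similar units.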
As companies increasingly rely on experiments to make product decisions, precisely measuring changes in key metrics is important. Various methods to increase sensitivity in experiments have been proposed, including methods that use pre-experiment data, machine learning, and more advanced experimental designs. However, prior work has not explored modeling heterogeneity in the variance of individual experimental users. We propose a more sensitive treatment effect estimator that relies on estimating the individual variances of experimental users using pre-experiment data.
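The idea of exploiting heterogeneous individual variances can be sketched as follows. This is an illustration of the general principle (inverse-variance weighting using pre-experiment data), not the paper's exact estimator; the data-generating setup is entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical pre-experiment data: each user has 10 prior observations,
# whose spread estimates that user's individual outcome variance.
user_sd = rng.uniform(0.5, 5.0, size=n)            # true heterogeneity
pre = rng.normal(0.0, user_sd[:, None], size=(n, 10))
var_hat = pre.var(axis=1, ddof=1)                  # per-user variance estimate

# Randomized experiment with a true effect of 0.3.
treat = rng.random(n) < 0.5
effect = 0.3
y = rng.normal(effect * treat, user_sd)

# Naive estimator: plain difference in means.
naive = y[treat].mean() - y[~treat].mean()

# Variance-aware estimator: down-weight users whose outcomes are noisy,
# so high-variance users contribute less to the estimate.
w = 1.0 / var_hat
weighted = (np.average(y[treat], weights=w[treat])
            - np.average(y[~treat], weights=w[~treat]))
```

Because the weights depend only on pre-experiment data, they are independent of treatment assignment and do not bias the estimate, while the reduced influence of noisy users tightens its variance.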
Because many problems are solved efficiently with problem-specific operators, Nevergrad now enables using such operators within generic algorithms: the underlying structure of the problem is user-defined information that several families of optimization methods can exploit. We explain how this API can help analyze optimization methods and how to use it to optimize a structured photonics physical testbed, and show that this can produce significant improvements.
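The separation between a generic search loop and a user-supplied, structure-aware operator can be sketched in plain Python. This is an illustration of the design idea only, with hypothetical names, not Nevergrad's actual API:

```python
import random

def one_plus_one(loss, init, mutate, budget=200, seed=0):
    """Generic (1+1) evolution strategy: the loop is problem-agnostic,
    while `mutate` is a user-supplied operator encoding problem structure."""
    rng = random.Random(seed)
    best, best_loss = init, loss(init)
    for _ in range(budget):
        cand = mutate(best, rng)
        cand_loss = loss(cand)
        if cand_loss <= best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

# A structured search space: permutations. A generic real-vector mutation
# would break the permutation constraint; a swap operator respects it.
def swap_mutation(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

target = [3, 0, 2, 1, 4]
loss = lambda p: sum(a != b for a, b in zip(p, target))  # count mismatches
best, best_loss = one_plus_one(loss, list(range(5)), swap_mutation)
```

Because the operator, not the optimizer, knows the search space is a set of permutations, the same generic loop can be reused across very different structured problems.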