2021 Privacy Enhancing Technologies request for proposals

About

In 2020, we launched a series of research award opportunities to support privacy-related projects in academia. Our first Privacy-Preserving Technologies (PPTs) request for proposals was met with great interest and we were pleased to award six excellent projects. We will continue this momentum and broaden our topics of interest under the Privacy-Enhancing Technologies (PETs) area in 2021.

By integrating privacy-enhancing technologies into our products, we are building trustworthy experiences that billions of people use worldwide. Our primary goal is to help design and deploy new privacy-enhancing solutions that minimize the data we collect, process, and externally share across the Facebook family of products, and to provide better tools to control, measure, and mitigate privacy risks. As we continue our work to improve privacy at Facebook, one of the key elements is learning from outside experts. We value a responsible innovation approach that anticipates how people will use technology in the future; that approach is the driving force behind every new app and service we build.

We are interested in PETs/PPTs that minimize data exposure and limit its purpose, while enabling a range of products and use cases (e.g., Ads, Messaging). These technologies enable us to offer leading services while minimizing the data we collect, process, or retain. By integrating novel privacy-preserving technologies in our products, we aim to build trustworthy experiences that people love to use.

To foster further innovation in this area, and to deepen our collaboration with academia, Facebook is pleased to invite faculty to respond to this call for research proposals pertaining to the aforementioned topics. We anticipate making 5–8 awards, each of approximately $100,000. Payment will be made to the proposer’s host university as an unrestricted gift.


Award Recipients

  • Musard Balliu, KTH Royal Institute of Technology
  • Golnoosh Farnadi, HEC Montreal and MILA
  • Anwar Hithnawi, ETH Zurich
  • Jonathan Katz, University of Maryland College Park
  • Aleksandra Korolova, University of Southern California
  • Ian Miers, University of Maryland College Park
  • Andrei Sabelfeld, Chalmers University of Technology
  • Salil Vadhan, Harvard University
  • Luyi Xing, Indiana University Bloomington
  • Yupeng Zhang, Texas A&M University

Applications are currently closed.

Application Timeline

  • Launch date: March 18, 2021
  • Deadline: April 21, 2021
  • Winners announced: June 2021

Areas of Interest

Areas of interest include, but are not limited to, the following.

1. Applied Cryptography

Cryptographic techniques enable us to power existing and new use cases while providing strong levels of privacy protection for user data. We are interested in research that develops novel techniques, improves scalability of existing ones, or makes them easier to adopt. Areas of interest include, but are not limited to, the following:

  • Private authentication: How can we leverage techniques such as anonymous credentials to enable authenticated but private communication between clients and servers efficiently and at scale? Are there new techniques for credential-based authentication that provide stronger privacy protections for the credential?
  • Secure computation: Secure computation is a powerful primitive, but how can we make it scale? For example, we’re interested in private identity/contact matching protocols that scale to hundreds of millions of users, or private location services that can support features at Facebook scale.
  • E2E encryption: How can we use new cryptographic techniques to improve the transparency, security, integrity, and reliability of end-to-end encryption technology? Where do current techniques fall short? How should we handle transient errors, and how can we build safe retry logic? How can users know that their peers’ encryption keys are correct?
  • Practical implementation: We’re interested in tools and protocols that use simple and widely supported primitives. Fewer “new block ciphers,” more “formally verified drop-in library for X.”
  • Record linkage/matching: Private set intersection and its extensions are powerful tools for independent parties to join data sets. How can we evaluate the quality of the match without revealing the underlying data sets, especially as matching conditions expand to multiple features and fuzzy logic? What quality metrics are useful, and what information leaks from those metrics? (A toy private set intersection sketch follows this list.)
  • Economics of trust models: Can we build trust among a large group of participants in a secure computation while requiring only a subset of non-colluding parties to perform the computation? How do the incentives of different trust models trade off against the computational and operational costs of the secure computation schemes they require?
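
To make the record-linkage item above concrete, here is a minimal, honest-but-curious sketch of Diffie-Hellman-based private set intersection (PSI) in Python. It is illustrative only, and based on our assumptions rather than any specific production design: the hash-to-group mapping is naive, there is no network layer, and one function plays both parties.

```python
"""Toy Diffie-Hellman-based private set intersection (PSI).

Honest-but-curious sketch only: the hash-to-group mapping is naive,
there is no network layer, and one function plays both parties.
"""
import hashlib
import secrets

# 2048-bit safe prime from RFC 3526 (MODP group 14).
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)
Q = (P - 1) // 2  # order of the quadratic-residue subgroup


def hash_to_group(item: str) -> int:
    """Map an item into the quadratic-residue subgroup of Z_p (toy mapping)."""
    digest = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(digest, 2, P)  # squaring lands in the QR subgroup


def psi(set_a, set_b):
    """Both parties mask hashed items with secret exponents; because
    exponentiation commutes, doubly masked values match iff the items do."""
    a = secrets.randbelow(Q - 1) + 1  # party A's secret exponent
    b = secrets.randbelow(Q - 1) + 1  # party B's secret exponent
    items_b = list(set_b)
    a_once = [pow(hash_to_group(x), a, P) for x in set_a]    # A sends to B
    b_once = [pow(hash_to_group(y), b, P) for y in items_b]  # B sends to A
    a_twice = {pow(m, b, P) for m in a_once}                 # h(x)^(ab)
    b_twice = [pow(m, a, P) for m in b_once]                 # h(y)^(ab)
    return {y for y, m in zip(items_b, b_twice) if m in a_twice}


print(psi({"alice", "bob", "carol"}, {"bob", "carol", "dave"}))
# -> {'bob', 'carol'}
```

Because neither party ever sees the other’s raw or singly masked items in cleartext, only the doubly masked values are compared; scaling this idea to hundreds of millions of users is exactly the kind of open problem the secure-computation item above asks about.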

2. Data policies and compliance

Honoring people’s privacy requires that all communication to consumers about their data enables them to make informed decisions, and that all data storage and usage by developers is restricted to the intended purpose. Areas of interest include, but are not limited to, the following:

  • Deletion: How can we ensure that data is deleted correctly? Can we make infrastructure that automatically handles deletion? How about in data warehouses, which often don’t support point deletion queries?
  • Automated understanding of privacy policies: What policy languages can express a broad range of regulatory concepts? What happens if the policy changes? Can they be human-readable as well as structured? How can they connect to data at runtime without sacrificing efficiency or developer experience?
  • Data flow and lineage: In a general-purpose programming language, how can we build accurate maps of data flow? How can we best apply static analysis, dynamic analysis, symbolic execution, or other tools? How can we link up data flows across different components, languages, or platforms?
  • Information flow control (IFC): How can policy and user consents propagate with data at scale in very complex data processing systems? How can we prevent label creep, i.e., data becoming overly restricted as labels accumulate through data flows? (A toy label-propagation sketch follows this list.)
  • Programming languages: Can modern, usable programming languages support static information flow control or lineage extraction? Can they be augmented to carry policy information along these flows?
  • Privacy economics: How do we evaluate the operational cost of privacy controls?
  • Measurement: How do we evaluate the cost of privacy failures? How do we demonstrate technical compliance with data policies?
  • Scraping risk: How do we measure the risk of data leakage posed by our products?
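
As a concrete illustration of the IFC item above, the sketch below shows dynamic label propagation in Python. The `Labeled` wrapper and `write_to_sink` check are hypothetical names introduced here for illustration; real systems typically enforce labels inside the data-processing infrastructure rather than in application code.

```python
"""Toy dynamic label propagation for information flow control (IFC).

`Labeled` and `write_to_sink` are hypothetical names for illustration;
real systems enforce labels inside the data-processing infrastructure.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class Labeled:
    """A value tagged with the policy labels that govern its use."""
    value: object
    labels: frozenset

    def combine(self, other: "Labeled", op) -> "Labeled":
        # Labels accumulate through computation: a result is at least as
        # restricted as each of its inputs. Unchecked growth of this label
        # set is the "label creep" problem mentioned above.
        return Labeled(op(self.value, other.value), self.labels | other.labels)


def write_to_sink(item: Labeled, sink_allows: frozenset):
    """Release a value only if every label on it is permitted at the sink."""
    if not item.labels <= sink_allows:
        raise PermissionError(f"blocked by labels: {item.labels - sink_allows}")
    return item.value


age = Labeled(29, frozenset({"user_profile"}))
location = Labeled("Menlo Park", frozenset({"precise_location"}))
joined = age.combine(location, lambda a, b: (a, b))

print(write_to_sink(age, frozenset({"user_profile"})))      # -> 29
# write_to_sink(joined, frozenset({"user_profile"}))        # PermissionError
```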

3. Differential Privacy

Differential privacy (DP) has emerged as an industry standard in protecting the privacy of user data while enabling useful aggregate information to be derived for usability, reliability, and machine learning needs. We are interested in research to enable new algorithms, new architectures for deployment, and new models for privacy accounting. Areas of interest include, but are not limited to, the following:

  • Making differential privacy practical: Can we extend accounting techniques to realistic query workloads on large analytics systems? Can we apply them to time-series data, or to longitudinal analyses of privacy over time? (A toy mechanism-and-accountant sketch follows this list.)
  • Measuring risk: How can we measure the risks of privacy loss or identification? What is the real-world impact of correlation or other attacks?
  • Extension to database management systems: How can we efficiently incorporate DP into database management systems?
  • Efficient combination of DP with other PETs: How can we best combine techniques for protecting data during computation (e.g., MPC) and techniques for minimizing re-identification risk of the computation outcome (e.g., DP)?
  • Understanding differentially private releases: Can we generate confidence intervals for DP releases to maximize utility or minimize compute? Can we build tools that clearly demonstrate the trade-offs between privacy and accuracy?
  • Differential privacy in deep learning: Can we improve the utility/privacy trade-off in the application of DP to machine learning, and in particular deep learning? Are there new theoretical frameworks that can help with particular threat models?
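
To ground the accounting questions above, here is a toy Python sketch of the Laplace mechanism for a counting query, paired with naive sequential composition for budget accounting. The names `laplace_count` and `BasicAccountant` are illustrative assumptions; production systems need careful sensitivity analysis, floating-point-safe samplers, and tighter accountants than simple epsilon addition.

```python
"""Toy Laplace mechanism plus naive sequential-composition accounting.

Illustrative only: real deployments need floating-point-safe noise sampling
and tighter composition theorems than simple epsilon addition.
"""
import random


def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0):
    """Release a count with Laplace(sensitivity / epsilon) noise added.

    A Laplace variate is the difference of two i.i.d. exponentials.
    """
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise


class BasicAccountant:
    """Tracks cumulative epsilon under naive sequential composition."""

    def __init__(self, budget: float):
        self.budget = budget
        self.spent = 0.0

    def spend(self, epsilon: float) -> None:
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon


accountant = BasicAccountant(budget=1.0)
for _ in range(5):
    accountant.spend(0.1)  # five queries cost a total of epsilon = 0.5
    print(round(laplace_count(1_000, epsilon=0.1)))
```

Scaling this simple additive accounting to realistic, concurrent query workloads is one of the open problems the first item in this list raises.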

4. Privacy in AI

As applications and research of AI continue to accelerate, it is important that AI researchers and ML practitioners have access to easy-to-use tools that provide mathematically rigorous privacy guarantees while retaining the strong performance and speed of these AI systems. Areas of interest include, but are not limited to, the following:

  • Practical advancements for MPC-based model training: How can we extend modern training approaches such as neural architecture search to secure MPC? Is it possible to design cryptographic algorithms for superior performance on 32-bit machines? How should large (TB-sized) data sets be sharded for MPC-based training?
  • Extensions to on-device model training: How can we train more performant distributed or federated models without compromising the privacy or security that motivated on-device training in the first place? (A toy secure aggregation sketch follows this list.)
  • Privacy leakage and attacks in deep learning: For both model training and model scoring in a secure environment (e.g., MPC, on-device FL), what information is leaked from model training and prediction? How can we minimize privacy leakage when integrating multiple (and distinct) cryptographic algorithms? How should we think about private release mechanisms in (honest-but-curious) secure MPC?
  • Post-training data deletion: What approaches to removal of data from trained machine learning models are most efficient?
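
As a pointer to the on-device item above, the following toy Python sketch shows pairwise additive masking in the spirit of secure aggregation (Bonawitz et al., CCS 2017): the server recovers only the sum of client model updates, never an individual update. It is a sketch under simplifying assumptions; a real protocol also needs key agreement, dropout recovery, and defenses against malicious participants.

```python
"""Toy secure aggregation via pairwise additive masking.

Illustrative only: real protocols derive masks from key agreement and
survive client dropouts; here masks are just shared random vectors that
cancel in the sum.
"""
import random

MODULUS = 2 ** 32  # client updates are assumed quantized into Z_M


def mask_updates(updates):
    """Each client pair (i, j) shares a random vector; i adds it, j subtracts
    it. Individual masked updates look random, but masks cancel in the sum."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            pair_mask = [random.randrange(MODULUS) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] = (masked[i][k] + pair_mask[k]) % MODULUS
                masked[j][k] = (masked[j][k] - pair_mask[k]) % MODULUS
    return masked


def server_sum(masked):
    """The server sums masked updates; the pairwise masks cancel mod M."""
    dim = len(masked[0])
    return [sum(m[k] for m in masked) % MODULUS for k in range(dim)]


updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # toy quantized client updates
print(server_sum(mask_updates(updates)))     # -> [12, 15, 18]
```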

Requirements

Proposals should include:

  • A summary of the project (1–2 pages), in English, explaining the area of focus, a description of techniques, any relevant prior work, and a timeline with milestones and expected outcomes
  • A draft budget description (one page) including an approximate cost of the award and explanation of how funds would be spent
  • A curriculum vitae for each project participant
  • Organization details, including tax information and administrative contact details

Eligibility

  • Proposals must comply with applicable U.S. and international laws, regulations, and policies.
  • Applicants must be current full-time faculty at an accredited academic institution that awards research degrees to PhD students.
  • Applicants must be the Principal Investigator on any resulting award.
  • Facebook cannot consider proposals submitted, prepared, or to be carried out by individuals residing in or affiliated with an academic institution located in a country or territory subject to comprehensive U.S. trade sanctions.
  • Government officials (excluding faculty and staff of public universities, to the extent they may be considered government officials), political figures, and politically affiliated businesses (all as determined by Facebook in its sole discretion) are not eligible.

Terms & Conditions

Facebook’s decisions will be final in all matters relating to Facebook RFP solicitations, including whether or not to grant an award and the interpretation of Facebook RFP Terms and Conditions. By submitting a proposal, applicants affirm that they have read and agree to these Terms and Conditions.

  • Facebook is authorized to evaluate proposals submitted under its RFPs, to consult with outside experts, as needed, in evaluating proposals, and to grant or deny awards using criteria determined by Facebook to be appropriate and at Facebook’s sole discretion. Facebook’s decisions will be final in all matters relating to its RFPs, and applicants agree not to challenge any such decisions.
  • Facebook will not be required to treat any part of a proposal as confidential or protected by copyright, and may use, edit, modify, copy, reproduce and distribute all or a portion of the proposal in any manner for the sole purposes of administering the Facebook RFP website and evaluating the contents of the proposal.
  • Personal data submitted with a proposal, including name, mailing address, phone number, and email address of the applicant and other named researchers in the proposal may be collected, processed, stored and otherwise used by Facebook for the purposes of administering Facebook’s RFP website, evaluating the contents of the proposal, and as otherwise provided under Facebook’s Privacy Policy.
  • Neither Facebook nor the applicant is obligated to enter into a business transaction as a result of the proposal submission. Facebook is under no obligation to review or consider the proposal.
  • Feedback provided in a proposal regarding Facebook products or services will not be treated as confidential or protected by copyright, and Facebook is free to use such feedback on an unrestricted basis with no compensation to the applicant. The submission of a proposal will not result in the transfer of ownership of any IP rights.
  • Applicants represent and warrant that they have authority to submit a proposal in connection with a Facebook RFP and to grant the rights set forth herein on behalf of their organization. All awards provided by Facebook in connection with this RFP shall be used only in accordance with applicable laws and shall not be used in any way, directly or indirectly, to facilitate any act that would constitute bribery or an illegal kickback, an illegal campaign contribution, or would otherwise violate any applicable anti-corruption or political activities law.
  • Awards granted in connection with RFP proposals will be subject to terms and conditions contained in the unrestricted gift agreement (or, in some cases, other mechanisms) pursuant to which the award funding will be provided. Applicants understand and acknowledge that they will need to agree to these terms and conditions to receive an award.