Applications closed

Building Tools to Enhance Transparency in Fairness and Privacy: Request for Proposals

About

Ensuring that data-driven systems reliably align with privacy, security, safety, fairness, and robustness expectations is of foundational importance. The potential for AI to benefit society — through personalized experiences, better science, assistive technology, and generating opportunity — is a captivating promise. But without a clear, reliable ability to understand and detect issues such as privacy risks and fairness harms, trust in data-driven systems and AI-powered tools is likely to remain elusive. The goal of this RFP is to help academics build trusted tools that monitor systems more effectively and spot concerns in areas like fairness, privacy, and safety.

If privacy researchers, advocates, and other stakeholders can more confidently monitor AI systems along important dimensions, researchers can in turn more freely pursue AI advances that benefit society. Facebook aims to invest in such tooling for fairness and privacy to support positive societal change.

We further believe private sector organizations should actively collaborate on solutions with academics, advocates, and regulators to improve consumer privacy and uphold societal values. To do this, we aim to invest in efforts that increase transparency by empowering privacy stakeholders to evaluate systems for their performance in important domains like privacy, safety, interpretability, fairness, and robustness. Specifically, we believe the following are key areas where enhanced transparency and accountability would be valuable:

  • Tooling should facilitate insights that address the public interest and mandatory obligations. The monitoring tools developed should facilitate insights into how systems are meeting societal expectations and aligning with ethical values, as well as demonstrate that systems are meeting any commitments made about them or any requirements they may be subject to. Basic metrics of fairness and privacy preservation, for example, are foundational both for maximizing social benefit and for the continued academic research that pushes AI systems forward.
  • Tooling should not necessarily require consent or participation by the investigated party. Many ML algorithms can be interpreted or evaluated as black-box functions, without knowledge of the underlying model. Similarly, ML systems should be able to be evaluated transparently by parties other than those who built them, while respecting data privacy and proprietary information, and without necessarily relying on the cooperation of model owners.
  • Monitoring costs should be sufficiently low to facilitate broad & frequent assessments. Rare inspections and elaborate, time-consuming procedures lower oversight and responsiveness, which in turn erodes public trust. Much as monitoring and verification have successfully formed the basis of trust in other important domains, constant, inexpensive monitoring and quick detection of potential threats from ML systems will facilitate the trust required for an AI ecosystem to address increasingly complex societal problems.

To foster further innovation in this area, and to deepen our collaboration with academia, Facebook is pleased to invite faculty to respond to this call for research proposals pertaining to the aforementioned topics. We anticipate awarding up to six awards, for up to $100,000 each. Payment will be made to the proposer’s host university as an unrestricted gift.


Award Recipients

  • Hoda Heidari, Carnegie Mellon University
  • Sharon Li, University of Wisconsin–Madison
  • Nicolas Papernot, University of Toronto
  • Ruben Cuevas Rumin, University Carlos III de Madrid
  • Reza Shokri, National University of Singapore
  • Philip Thomas, University of Massachusetts Amherst

Application Timeline

  • Launch date: June 29, 2021
  • Deadline: August 18, 2021
  • Winners announced: December 2021

Areas of Interest

Areas of interest include, but are not limited to, the following:

1. Privacy Leakage Detection

Violations of user privacy should be detectable through monitoring for information leakage. Poisoning analyses, in which crafted user data is fed to a system and later detected emerging in reconstructable form from its predictions, are one such approach. We are interested in supporting tools that automate the detection of privacy risks, particularly in black-box systems.
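
As a concrete illustration, here is a minimal sketch of a black-box canary test in the spirit of the poisoning analyses above. It assumes a hypothetical per-example `predict_proba` interface on the model under test; `canary_x`, `canary_y`, and `reference_xs` are placeholders for a planted record, its label, and comparison points, none of which come from this RFP.

```python
# A minimal sketch of a black-box canary test for privacy leakage.
# All names here are hypothetical placeholders, not a real API.
import numpy as np

def exposure_of_canary(predict_proba, canary_x, canary_y, reference_xs):
    """Compare the model's confidence on a planted canary against
    reference points from the same distribution; a canary that ranks
    far above its references suggests the model memorized it."""
    canary_conf = predict_proba(canary_x)[canary_y]
    ref_confs = np.array([predict_proba(x)[canary_y] for x in reference_xs])
    # Fraction of references the canary out-scores (1.0 = all of them).
    rank = float((canary_conf > ref_confs).mean())
    return canary_conf, rank

# Usage: plant `canary_x` (a unique synthetic record) in the training
# set before training; after training, a rank near 1.0 across repeated
# trials with fresh canaries indicates potential leakage.
```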

2. Safety

AI systems should have safeguards and processes in place to actively prevent harms, and those safeguards themselves need to be trusted and transparent. We are interested in novel approaches to linking monitoring to automated safety actions, as well as in tools for monitoring the safety measures themselves.
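
By way of illustration, the sketch below couples a monitor to an automated safety action through a simple circuit breaker; the `model`, `monitor`, and `fallback` callables are hypothetical placeholders, and this is one possible pattern rather than a prescribed approach.

```python
# A minimal sketch: trip to a safe fallback when a monitored
# violation rate crosses a threshold. Names are hypothetical.
from collections import deque

class SafetyBreaker:
    """Wraps a model; trips to a safe fallback if too many recent
    predictions are flagged as harmful by an external monitor."""
    def __init__(self, model, monitor, fallback, window=1000, max_rate=0.01):
        self.model, self.monitor, self.fallback = model, monitor, fallback
        self.flags = deque(maxlen=window)  # rolling record of violations
        self.max_rate = max_rate
        self.tripped = False

    def predict(self, x):
        if self.tripped:
            return self.fallback(x)            # safe default once tripped
        y = self.model(x)
        self.flags.append(self.monitor(x, y))  # monitor returns True on harm
        if (len(self.flags) == self.flags.maxlen
                and sum(self.flags) / len(self.flags) > self.max_rate):
            self.tripped = True                # require human review to reset
        return y
```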

3. Fairness Issue Detection

Competing fairness objectives and frameworks can involve difficult tradeoffs or outright incompatibilities, and there is no consensus measure of fairness. At the same time, surfacing fairness issues in an actionable, clear, and timely fashion would help facilitate more collaborative discussions around appropriate remedies, motivate speedy corrections, reduce potential aggregate harm, and provide greater accountability. We invite proposals that actively monitor for potential fairness issues, but also welcome work that specifies proposed fairness goals, measures, and tradeoffs.
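
For illustration, here is a minimal sketch of one such monitor. It assumes binary predictions and a single protected attribute, and uses the demographic parity gap as its measure — one common choice, not a consensus metric, and not one specified by this RFP.

```python
# A minimal sketch of a fairness monitor; thresholds and the metric
# are illustrative assumptions, not prescribed values.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def alert_if_unfair(y_pred, group, threshold=0.1):
    """Emit an alert when the gap exceeds a chosen tolerance."""
    gap = demographic_parity_gap(y_pred, group)
    if gap > threshold:
        print(f"fairness alert: demographic parity gap {gap:.3f} > {threshold}")
    return gap
```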

4. Interpretability and Explainability

The opacity of ML systems often deepens distrust in how those systems operate and can obscure otherwise unrecognized harm. Automated monitoring that can uncover and describe the relative interpretability and defensibility of the learned patterns of ML systems would allow explainable systems to be recognized and promoted, and encourage opaque systems to be simplified or rebuilt to enhance trust. We welcome proposals for tools that provide interpretability and understandable explanations of ML decisions.
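
As one example of an explanation tool that needs only black-box access, the sketch below computes permutation importance, a standard model-agnostic technique; `predict`, `X`, `y`, and `metric` are hypothetical placeholders for the model under test and its evaluation data.

```python
# A minimal sketch of permutation importance: the score drop when a
# feature is shuffled indicates how heavily the model relies on it.
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Mean drop in `metric` when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's signal
            drops.append(base - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```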

5. Stability

Systems that fail for users whose data is uncommon may create fairness risks for those individuals, and privacy leakage when such failures are observable. Fuzzing — whereby a continuously generated random stream of new files and edge cases is fed into a system to provoke errors — has proven to be one of the most powerful tools for building resilient operating systems, browsers, and cloud environments, yet the same approach has made fewer inroads into large-scale deployed ML systems. We are interested in proposals that can increase the stability of ML systems through continuous testing, such as fuzzing or other forms of automated analysis.
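
To make the idea concrete, here is a minimal fuzzing loop for an ML inference function. It assumes a model that accepts fixed-size float vectors; `predict` is a hypothetical placeholder, and the edge-case mix is illustrative.

```python
# A minimal fuzzing sketch for an ML inference function: feed random
# and extreme inputs, record crashes and non-finite outputs.
import numpy as np

def fuzz_model(predict, input_dim, n_trials=10_000, seed=0):
    """The ML analogue of fuzzing a parser: hunt for inputs that
    crash the model or produce NaN/inf outputs."""
    rng = np.random.default_rng(seed)
    failures = []
    for i in range(n_trials):
        # Mix ordinary noise with edge cases: huge magnitudes, NaN, inf.
        x = rng.normal(size=input_dim)
        if i % 4 == 1:
            x *= 1e30
        elif i % 4 == 2:
            x[rng.integers(input_dim)] = np.nan
        elif i % 4 == 3:
            x[rng.integers(input_dim)] = np.inf
        try:
            y = predict(x)
            if not np.all(np.isfinite(y)):
                failures.append((i, "non-finite output"))
        except Exception as e:
            failures.append((i, repr(e)))
    return failures
```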

6. Robustness

Reliable, accurate, replicable, and auditable performance, constrained to a delimited purpose, is the hallmark of a robust system. We are interested in tools that can monitor these dimensions, especially in cross-cutting ways that assess the overarching integrity of a system's performance.


Requirements

Proposals should include:

  • A summary of the project (1–2 pages), in English, explaining the area of focus, a description of techniques, any relevant prior work, and a timeline with milestones and expected outcomes.
  • A draft budget description (1 page), including the approximate cost of the award and an explanation of how funds would be spent.
  • Curriculum Vitae for all project participants.
  • Organization details, including tax information and administrative contact details.

Eligibility

  • Proposals must comply with applicable U.S. and international laws, regulations, and policies.
  • Applicants must be current full-time faculty at an accredited academic institution that awards research degrees to PhD students.
  • Applicants must be the Principal Investigator on any resulting award.
  • Facebook cannot consider proposals submitted, prepared, or to be carried out by individuals residing in, or affiliated with, an academic institution located in a country or territory subject to comprehensive U.S. trade sanctions.
  • Government officials (excluding faculty and staff of public universities, to the extent they may be considered government officials), political figures, and politically affiliated businesses (all as determined by Facebook in its sole discretion) are not eligible.

Terms & Conditions

Facebook’s decisions will be final in all matters relating to Facebook RFP solicitations, including whether or not to grant an award and the interpretation of Facebook RFP Terms and Conditions. By submitting a proposal, applicants affirm that they have read and agree to these Terms and Conditions.

  • Facebook is authorized to evaluate proposals submitted under its RFPs, to consult with outside experts, as needed, in evaluating proposals, and to grant or deny awards using criteria determined by Facebook to be appropriate and at Facebook’s sole discretion. Facebook’s decisions will be final in all matters relating to its RFPs, and applicants agree not to challenge any such decisions.
  • Facebook will not be required to treat any part of a proposal as confidential or protected by copyright, and may use, edit, modify, copy, reproduce and distribute all or a portion of the proposal in any manner for the sole purposes of administering the Facebook RFP website and evaluating the contents of the proposal.
  • Personal data submitted with a proposal, including name, mailing address, phone number, and email address of the applicant and other named researchers in the proposal may be collected, processed, stored and otherwise used by Facebook for the purposes of administering Facebook’s RFP website, evaluating the contents of the proposal, and as otherwise provided under Facebook’s Privacy Policy.
  • Neither Facebook nor the applicant is obligated to enter into a business transaction as a result of the proposal submission. Facebook is under no obligation to review or consider the proposal.
  • Feedback provided in a proposal regarding Facebook products or services will not be treated as confidential or protected by copyright, and Facebook is free to use such feedback on an unrestricted basis with no compensation to the applicant. The submission of a proposal will not result in the transfer of ownership of any IP rights.
  • Applicants represent and warrant that they have authority to submit a proposal in connection with a Facebook RFP and to grant the rights set forth herein on behalf of their organization. All awards provided by Facebook in connection with this RFP shall be used only in accordance with applicable laws and shall not be used in any way, directly or indirectly, to facilitate any act that would constitute bribery or an illegal kickback, an illegal campaign contribution, or would otherwise violate any applicable anti-corruption or political activities law.
  • Awards granted in connection with RFP proposals will be subject to terms and conditions contained in the unrestricted gift agreement (or, in some cases, other mechanisms) pursuant to which the award funding will be provided. Applicants understand and acknowledge that they will need to agree to these terms and conditions to receive an award.