September 23, 2019

Summaries of the first Content Policy Research Initiative workshops

By: Meta Research

There is an ongoing global conversation about how social media and technology companies decide what is and isn’t allowed on their platforms and how they enforce those rules. To this end, Facebook has a set of Community Standards designed to balance the need for a safe place to share experiences with the responsibility to provide a space where everyone can give voice to different points of view.

To help support our goal of fostering a global community of researchers interested in understanding the implementation of Community Standards, Facebook hosted the first two Content Policy Research Initiative (CPRI) workshops in Washington, DC (April 3–4) and Paris (April 11–12).

This series of workshops is designed to enable research on how to design more effective content policies and how to mitigate harmful online content — both in partnership with Facebook and independently. In order to ground our conversations in an understanding of the current state of play, Facebook shared information on a number of our policies, processes, and programs, as well as learnings from some of our internal research.

Presentations from Facebook at both workshops included talks about content policy and operations, and about how the Product, Operations, and Content Policy teams at Facebook work together to improve and enforce policies governing what is allowed on our platforms. We also presented our data transparency efforts, including the Community Standards Enforcement Report, which tracks our progress enforcing the Community Standards across violation types: the prevalence of violations on the platform, the volume of violating content we remove, and how much of that content we proactively detect before anyone reports it to us.
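
As a rough illustration of how those headline metrics relate, the sketch below computes a proactive-detection rate and a prevalence figure from made-up counts. This is only a schematic: the function names and numbers are hypothetical, and the actual report estimates prevalence from samples of content views rather than from raw totals.

```python
# Illustrative sketch only: hypothetical counts, not Facebook's actual pipeline.

def proactive_rate(actioned_total: int, found_proactively: int) -> float:
    """Share of actioned violating content detected before any user report."""
    return found_proactively / actioned_total

def prevalence(violating_views: int, sampled_views: int) -> float:
    """Estimated share of sampled content views that were of violating content."""
    return violating_views / sampled_views

# Made-up numbers for one violation type in one reporting period:
actioned = 4_000_000      # pieces of violating content actioned (e.g., removed)
proactive = 3_800_000     # of those, detected before anyone reported them
bad_views = 2_500         # sampled views that landed on violating content
total_views = 1_000_000   # total sampled views

print(f"proactive rate: {proactive_rate(actioned, proactive):.1%}")  # 95.0%
print(f"prevalence:     {prevalence(bad_views, total_views):.2%}")   # 0.25%
```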

In addition, we presented some of our internal research, including discussions of both hate speech and dangerous organizations. The hate speech presentation explained the overall framework we use to think about hate speech on our platforms, as well as cross-country comparative research on hateful speech in civic and social contexts. The dangerous organizations presentation described efforts to understand, both qualitatively and quantitatively, how we review and assess whether terrorism-related content is violating. We also discussed future work in countering violent extremism, specifically our role in the Global Internet Forum to Counter Terrorism.

For a more detailed account of the two workshops, we compiled a list of key questions and concerns from participants, as well as the research collaboration ideas that were generated.

CPRI workshop in Washington, DC

At the first CPRI workshop in the DC office, Facebook hosted 23 external researchers — academic and think tank experts. The goals for this session were (a) to share more about our processes and research with this expert community to inform their work, and (b) to identify opportunities for future research collaborations.

Key questions and concerns from participants

An important focus of these discussions was to better inform the broader research community about Facebook’s approach to content policies, and to hear from external researchers what additional information about our policies and processes would be helpful as they seek to understand how these issues play out across social media.

Specifically, our participants in DC asked the following questions:

  • What are the research foundations that inform Facebook’s policies and priorities? Participants find it difficult to understand where and from what body of work Facebook derives the normative judgments that underlie its policies and decision-making. A better articulation of our values and how we prioritize them would help researchers understand the tradeoffs in our policies.
  • What collaborative model could mitigate the logistical challenges of industry-academia collaboration? There is an inherent disparity between the tempo of academic work and the tempo at which Facebook needs to adapt and make decisions. Finding a way to balance the need to move fast to address a broad spectrum of threats and the need to ensure that these actions are informed by rigorous analysis will be critical for enabling successful partnerships.
  • Would it be possible for Facebook to be more transparent about existing research collaborations? Telling the story of how these efforts have evolved over time would be helpful context when proposing new research partnerships.
  • Is there anywhere to go for information about how these issues are treated across platforms? For the research community, access to a compilation of industry-wide insights (such as what companies agree on and where they differ in their assessments and approaches) would substantively inform their research.
  • Can Facebook be clearer when it decides not to share details or additional information because bad actors are gaming the system? Being more transparent about the topical areas or specific instances that create safety or security concerns would help researchers know that there is a reason for Facebook’s decisions, and that simply releasing data, information, or details of policy actions can have real consequences.
  • How does Facebook work with government and law enforcement around the world? How might that differ between countries/regions? For researchers working to understand hate organizations and terrorist groups in particular, it would be helpful to better understand how Facebook policies and local laws interact (or don’t) and why.

Research collaboration ideas

One of our goals for the CPRI is to identify opportunities for research collaborations in key areas. During this workshop, we discussed how Facebook can best support the work of external researchers in this field, projects that would be of mutual interest, and information sharing opportunities.

Specifically, participants suggested the following ideas:

  • More opportunities for formalized consultations regarding how to apply social science insights systematically across policies and analysis. Some ideas included:
    • Working groups on key policy development and implementation issues, potentially led by academics embedded with Facebook, to apply social science thinking and domain expertise to real-time decisions
    • A council or body that would provide an opportunity for external researchers to advise Facebook on what is most meaningful within the data that we can collect to help inform logging and other measurement prioritization decisions
  • Clear guidance to inform and support research collaboration efforts and also to help researchers understand priority questions and topical areas. Some approaches to doing this might include:
    • Standing RFPs in the content policy space, with top-priority issues regularly updated (the review cycles could be published, allowing rolling submissions)
    • A more systematic way to solicit research questions that are best answered with Facebook data, examine those questions internally, and share trends or aggregated results with the relevant external researchers and stakeholders
    • A document, page, or other mechanism that adds transparency around what different Facebook teams do and lists the contacts that external stakeholders can reach out to with specific questions or on specific topics
  • Development of additional information, metrics, or aggregate data. These could include:
    • Information on fake account behavior
    • Information on repeat offender behavior
    • More descriptive language or examples of violating content, so that external researchers can better grapple with what constitutes a violation of the Community Standards
    • Group-level information on violating content
  • Better structures to facilitate internal-external collaborations, such as:
    • Examining the incentive structure for academics to work with Facebook (such as the ability to publish) as well as for internal researchers to invest time in external collaborations
    • “Matchmaking” between external researchers and internal teams
    • Supporting undergrad research labs (such as the one recently established at American University) with funding and, more important, with research questions and the opportunity to present their findings

We hope this was just the start of a dialogue. Our goal is to build a small research community interested in understanding and studying Facebook policies, both to produce research that can better inform our policies and their implementation and to serve as external voices in discussions about how and when we make different policy and integrity decisions.

CPRI workshop in Paris

At the CPRI workshop in the Paris office, Facebook hosted 14 external researchers from around Western Europe (France, the U.K., Germany, Ireland, Spain, Italy, and Switzerland) and Israel. This workshop focused specifically on issues related to hate speech and preventing offline harm from dangerous organizations and individuals. Like the DC workshop, it aimed (a) to share more about Facebook’s processes and research with this expert community to inform their work, and (b) to identify opportunities for future research collaborations.

Key questions, concerns, and suggestions from participants

Participants in Paris asked the following questions:

  • How does Facebook interact in the broader ecosystem, including sharing information with governments and other tech companies? Participants asked about this both because they want to see more collaboration with governments on questions of terrorism and terrorist identification, and because they have concerns about user privacy and safety. Facebook shared more about its broader transparency hub and the general information it releases on these issues. One concrete request from the group was that we do more to articulate when we see connections between social media activity and offline behaviors (e.g., when groups move between platforms and when behaviors look like they are moving toward offline coordination).
  • How do we reconcile various policy principles, like equity and safety? This echoed comments from the CPRI workshop in Washington, DC, in which researchers wanted more explicit definitions of the principles upon which we base policies and a clearer explanation of the process to balance these principles when they come into conflict.
  • How do we define behavioral violations or more nuanced content issues in our rules and processes? The group was interested in understanding the broader set of rules and norms that govern Facebook beyond the Community Standards. This included ranking decisions and product resource prioritization, as well as definitions of key concepts like propaganda and misinformation.

Research collaboration ideas

Participants in the Paris discussion suggested the following ideas:

  • A more transparent and multifaceted process to undertake partnered research and assessment. Suggestions in this area included:
    • Partnered research on how to measure harm related to content exposure
    • Adversarial simulations (such as red team exercises), along with scenario-based practical exercises, to test Facebook’s response, build credible assessment criteria, and help incorporate external expertise into Facebook’s policy and product design processes
  • Link external data, information, and frameworks explicitly to Facebook policies and processes:
    • Map internationally established norms onto Facebook’s hate speech policies to find alignment with the existing body of law
    • Explore whether certified, trusted, and vetted partners that already monitor potential harmful actors (e.g., terrorist group supporters or potentially radicalizing individuals) can report concerning profiles and posts directly to Facebook
    • Use the Community Standards log to look at how adversarial behavior adapts to changes in the rules and enforcement capability
  • Explicit requests for information or aggregate data, including requests to:
    • Publish or share more case studies with external researchers (driven by more specific requests from the research community)
    • Provide additional breakdowns of the transparency report (CSER) by region, country, and other relevant factors
    • Share aggregate data on user reporting to develop reporter typologies and personas and to improve tooling and education around reporting tools
  • Specific research topics of interest to the broader research community that Facebook could pursue internally and publish, or conduct in partnership with external researchers, such as:
    • What type of news content is most likely to trigger hate speech, to help provide resources to journalists
    • The overlap between nonviolent extremism and terrorism, to map the radicalization pathway and design interventions
    • An analysis of the most recent posts and public info of ISIS supporters, to find patterns and signals

We saw many similar concerns, and some overlap in the ideas generated, at this event and at the previous workshop in DC, but this group focused more specifically on the interplay between Facebook’s governance role in public discourse and national and international governing bodies, on human rights concerns, and on the need to create a taxonomy around these issues that ties back to established dialogues.

These valuable insights and suggestions will help us as we build a strategy for increased engagement with the research community. Hearing from this group about which information gaps have the greatest impact allows us to design research partnerships and future events to fill them. To continue the momentum generated by these events, we will host additional workshops and announce a new funding opportunity in the coming months.