The Content Policy Research Initiative (CPRI) was launched in early 2019 to engage with the broader research community about the development and enforcement of our Community Standards and how we deal with potentially harmful content on our platforms. We are committed to funding independent research on these issues (read about our first-phase and second-phase awardees) and to sharing information about these policies and processes through a global series of workshops. The first two workshops were held in April 2019 in Washington, DC, and Paris. Summaries of these workshops can be found here. In June, we held workshops in Mexico City and São Paulo, summarized here.
Most recently, we convened meetings in Sydney, Australia, and Auckland, New Zealand. We shared details on how we create policies and processes that help protect diverse voices on the platform while making the platform as safe as possible. Specifically, we discussed content policy and operations; our data transparency efforts (for example, the Community Standards Enforcement Report); our approaches to harmful content, such as hate speech and disinformation; as well as work we have undertaken to effectively prioritize among types of high-severity content.
For a more detailed account of the two workshops, we compiled a list of key questions and concerns from participants, as well as ideas for collaboration that were surfaced during our discussions.
CPRI workshop in Sydney
At the CPRI workshop in the Sydney office, Facebook hosted 19 external researchers. The goals for this session were (a) to share more about our processes and research with this expert community to inform their work, and (b) to identify opportunities for future research collaborations.
Key themes of discussion
An important focus of these discussions was to better inform the research community about Facebook’s approach to content policies, and for external researchers to tell us what additional information about our policies and processes would be helpful as they seek to understand how these issues play out across social media.
Discussions at the Sydney workshop raised a few key themes:
- Data sharing. There is, of course, a great deal of interest among researchers in increased API access, as well as in alternatives to data sharing, such as providing content examples and other more detailed information on the definitions and precedents we reference. There was also agreement on the need to approach any data sharing carefully, but limited access is clearly a barrier for independent research in this space.
- The External Oversight Board. There was keen interest in learning about the structure, authority, and priorities of the Oversight Board and how it will affect Facebook content policies in the future. Specifically, workshop participants would like to better understand the mechanisms by which its authority will be applied and how to prioritize among the many potential cases.
- Political speech, misinformation and disinformation, and transparency. There was concern around and interest in better understanding our policies regarding politicians’ speech and ads, political speech versus politicians’ speech, and how the intersection of political and commercial speech has been shaped by social media. Relatedly, workshop participants felt that the research community, policymakers, and the public would benefit from having more engagements with Facebook on the topics of misinformation and disinformation.
Research collaboration ideas
One of our goals for the CPRI is to identify opportunities for research collaborations in key areas. During this workshop, we discussed how Facebook can best support the work of external researchers in this field, projects that would be of mutual interest, and information-sharing opportunities.
Specifically, participants suggested the following approaches:
- Capacity building with emerging scholars, especially those from disadvantaged communities. Suggestions included finding other ways to fund scholarships, work experiences, and help with career paths both for technical experts and for social scientists.
- Leveraging Facebook’s convening power for the academic community. Participants suggested we look for ways to host content and connect interdisciplinary communities. They believe such hosted resources and networks would benefit researchers by making connections and allowing for more information sharing across geographies and disciplines.
- Tech literacy and community standards education. Participants recommended we work with researchers to translate tech industry language and create engaging materials (infographics, etc.) that would be useful for everyone from policymakers to parents trying to make sure their children engage in safe social media practices.
- The ads library. Participants felt that the ads library has the potential to be a helpful resource for the research community, and they provided recommendations on how to expand it in order to be more impactful. Specifically, they suggested that access to all ads from a group over time would allow researchers to perform deeper analysis of the content and trends.
This was an extremely useful discussion that raised interesting ideas about our role in the research ecosystem. In general, it is clear that we can do more to share the efforts we have underway to mitigate the effects and prevalence of harmful content and be proactive in creating and sharing collaboration opportunities.
CPRI workshop in Auckland
At the CPRI workshop in Auckland, Facebook hosted seven researchers from institutions around New Zealand. This was a half-day workshop focused on information sharing around our policies, processes, and enforcement mechanisms, with a particular look at content from dangerous organizations and individuals.
Key questions, concerns, and research collaboration ideas
We had a wide-ranging conversation around these themes and identified the following areas of particular interest for further discussion and partnership:
- Christchurch and catching adversarial behavior. The top concern raised by participants was how we can more quickly, accurately, and effectively detect content such as the videos of the Christchurch terrorist attacks. They would like us to help the research community better understand the individuals who abuse or attempt to misuse social media, understand how sophisticated they are, and work with us to identify other signals that would flag harmful content.
- High-impact, low-prevalence content. Relatedly, participants recognized that some of the content with the most potential for harm is relatively rare on social media, and therefore it is hard to utilize machine learning or pull meaningful statistics on user interactions. They are interested in our work to prioritize within this high-severity content and in collaborations on counterspeech and other interventions.
- Definitional and jurisdictional issues. Participants were interested in the challenges we face with enforcement at scale. They’d like to have more information about how we differentiate by market context and see clearer and more detailed definitions incorporated into the Community Standards.
- Education and digital literacy. In addition to clearer definitions, participants recommended we undertake additional educational efforts — both for policymakers and for the general public — on our policies and processes, including opening up better avenues for user feedback. They would be interested in understanding how to give users more decisional power and in how the Oversight Board might help improve users’ representation in policy development.
- Power dynamics in our content policies. Participants appreciated that we are considering how power dynamics and political context interact with our policies (e.g., with graphic content when it is important for understanding current events). However, since it is often unclear how these quickly evolving circumstances will affect our moderation decisions, they suggested clarifying the overarching values that guide our policies, which echoes comments from previous workshops.
This conversation was a great opportunity for us to hear from researchers on specific ways we can do more to engage with them and to identify issues where there is the most potential for working together. The feedback from these two workshops as well as the previous events in 2019 will inform our approach for the Content Policy Research Initiative in 2020.