Scaling Static Analyses at Facebook

Communications of the ACM (CACM)


Abstract

Static analysis tools are programs that examine, and attempt to draw conclusions about, the source code of other programs without running them. At Facebook we have been investing in advanced static analysis tools that employ reasoning techniques similar to those from program verification. The tools we describe (Infer and Zoncolan) target issues related to crashes and to the security of our services; they perform sometimes complex reasoning spanning many procedures or files, and they are integrated into engineering workflows in a way that attempts to bring value while minimizing friction. They run on all code modifications, participating as bots during the code review process. Infer targets our mobile apps as well as our backend C++ code, codebases with tens of millions of lines; it has seen over 100,000 reported issues fixed by developers before code reaches production. Zoncolan targets the 100 million lines of Hack (typed PHP) code, and is additionally integrated into the workflow used by security engineers; it has led to thousands of fixes of security and privacy bugs, outperforming any other detection method used at Facebook for such vulnerabilities. We describe the human and technical challenges encountered and the lessons we have learned in developing and deploying these analyses.
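Infer and Zoncolan are, of course, far more sophisticated than anything that fits in a few lines: they reason across procedures and files at the scale of millions of lines. Purely as a toy illustration of the core idea — drawing conclusions about source code without executing it, here by tracking a value from a "source" to a "sink" the way a security analysis like Zoncolan's does — a minimal intraprocedural sketch over Python's standard `ast` module might look like this. The names `get_user_input` and `run_query` are hypothetical stand-ins for a taint source and sink, not anything from the actual tools.

```python
import ast

# Hypothetical source/sink names for illustration only.
SOURCE = "get_user_input"
SINK = "run_query"

def tainted_flows(code: str) -> list[int]:
    """Return line numbers where a value from SOURCE reaches SINK.

    A deliberately tiny checker: it only handles straight-line
    assignments and calls inside each function body, analyzed
    statically -- the code under inspection is never run.
    """
    tree = ast.parse(code)
    findings = []
    for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        tainted: set[str] = set()
        for stmt in func.body:  # statements in program order
            # x = get_user_input(...)  taints x
            if isinstance(stmt, ast.Assign) and isinstance(stmt.value, ast.Call):
                callee = stmt.value.func
                if isinstance(callee, ast.Name) and callee.id == SOURCE:
                    for target in stmt.targets:
                        if isinstance(target, ast.Name):
                            tainted.add(target.id)
            # run_query(x) with tainted x is reported
            elif isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call):
                call = stmt.value
                if isinstance(call.func, ast.Name) and call.func.id == SINK:
                    for arg in call.args:
                        if isinstance(arg, ast.Name) and arg.id in tainted:
                            findings.append(call.lineno)
    return findings

example = """
def handler():
    q = get_user_input()
    run_query(q)
"""
print(tainted_flows(example))  # reports the line where tainted q reaches the sink
```

The gap between this sketch and a production tool is exactly what the article is about: real analyses must follow flows through calls, branches, and heap data structures across files, and must report precisely enough that developers act on the results.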

There has been a tremendous amount of work on static analysis, both in industry and academia, and we will not attempt to survey that material here. Rather, we present our rationale for, and results from, using techniques similar to ones that might be encountered at the edge of the research literature, rather than only the simpler techniques that are much easier to scale. We intend this to complement other reports on industrial static analysis and formal methods (e.g., [17, 6, 1, 13]), and hope that such perspectives can provide input both to future research and to further industrial use of static analysis.

We continue in the next section by discussing the three dimensions (bugs that matter, people, and actioned/missed bugs) that drive our work. The rest of the paper describes our experience developing and deploying the analyses, their impact, and the techniques that underpin our tools.

