
Scaling Static Analyses at Facebook

Communications of the ACM (CACM)


Abstract

Static analysis tools are programs that examine, and attempt to draw conclusions about, the source code of other programs without running them. At Facebook we have been investing in advanced static analysis tools that employ reasoning techniques similar to those from program verification. The tools we describe, Infer and Zoncolan, target issues related to crashes and to the security of our services; they sometimes perform complex reasoning spanning many procedures or files, and they are integrated into engineering workflows in a way that attempts to bring value while minimizing friction. They run on all code modifications, participating as bots during the code review process. Infer targets our mobile apps as well as our backend C++ code, codebases with tens of millions of lines; it has seen over 100,000 reported issues fixed by developers before code reaches production. Zoncolan targets the 100 million lines of Hack (typed PHP) code and is additionally integrated into the workflow used by security engineers; it has led to thousands of fixes of security and privacy bugs, outperforming any other detection method used at Facebook for such vulnerabilities. We describe the human and technical challenges encountered and the lessons we have learned in developing and deploying these analyses.
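
To make the cross-procedure reasoning mentioned above concrete, here is a minimal, hypothetical Java sketch (not taken from the paper; class and method names are illustrative) of the kind of interprocedural null-dereference bug that an Infer-style analysis can report during code review: a possibly-null value returned by one method is dereferenced in another.

    import java.util.Map;

    class Lookup {
        // May return null when the key is absent from the map.
        static String find(Map<String, String> settings, String key) {
            return settings.get(key);
        }

        static int settingLength(Map<String, String> settings) {
            String value = find(settings, "timeout");
            // Potential crash: `value` may be null here, so calling
            // length() can throw a NullPointerException at runtime.
            // Detecting this requires tracking the null returned by
            // find() across the call boundary.
            return value.length();
        }
    }

Reporting such an issue requires summarizing the behavior of find() and applying that summary at each call site, which is the kind of reasoning spanning many procedures or files that the abstract refers to.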

There has been a tremendous amount of work on static analysis, both in industry and academia, and we will not attempt to survey that material here. Rather, we present our rationale for, and results from, using techniques similar to ones that might be encountered at the edge of the research literature, not only the simpler techniques that are much easier to make scale. We intend this to complement other reports on industrial static analysis and formal methods (e.g., [17, 6, 1, 13]), and we hope that such perspectives can provide input both to future research and to further industrial use of static analysis.

We continue in the next section by discussing the three dimensions (bugs that matter, people, and actioned/missed bugs) that drive our work. The rest of the paper describes our experience developing and deploying the analyses, their impact, and the techniques that underpin our tools.

