Publication

IVD: Automatic Learning and Enforcement of Authorization Rules in Online Social Networks

IEEE Symposium on Security and Privacy (IEEE S&P)


Abstract

Authorization bugs in online social networks are typically caused by missing or incorrect authorization checks, and they can allow attackers to bypass the social network's protections. Unfortunately, even with good engineering practices, there is no practical way to guarantee that authorization bugs will never be introduced as a web application and its data model grow more complex. Unlike other web application vulnerabilities such as XSS and CSRF, there is no practical general solution that prevents missing or incorrect authorization checks.
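To make the failure mode concrete, here is a small, hypothetical Python sketch of a missing authorization check; the in-memory data store, handler names, and visibility model are illustrative assumptions, not code from the paper.

    # Hypothetical example of a missing authorization check.
    # PHOTOS stands in for the real data store; all names are assumptions.
    PHOTOS = {
        1: {"owner": 42, "visible_to": {42, 7}, "url": "https://example.com/p/1"},
    }

    def get_photo_buggy(viewer_id: int, photo_id: int) -> dict:
        # BUG: returns the photo without checking that viewer_id may see it.
        return PHOTOS[photo_id]

    def get_photo_fixed(viewer_id: int, photo_id: int) -> dict:
        photo = PHOTOS[photo_id]
        if viewer_id not in photo["visible_to"]:
            raise PermissionError("viewer may not access this photo")
        return photo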

In this paper we propose Invariant Detector (IVD), a defense-in-depth system that automatically learns authorization rules from normal data manipulation patterns and distills them into likely invariants. These invariants, usually learned during the testing or pre-release stages of new features, are then used to block requests that attempt to exploit bugs in the social network's authorization logic. IVD acts as an additional layer of defense, working behind the scenes and complementing privacy frameworks and testing.
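As an illustration of this learn-then-enforce idea, the following Python sketch infers likely invariants from benign write requests and uses them to block violating writes. The request shape, candidate predicates, and function names are assumptions made for illustration, not IVD's actual design.

    # Minimal sketch of likely-invariant learning and enforcement.
    from typing import Callable, Dict, List

    Request = Dict[str, int]  # e.g. {"actor": 42, "owner": 42, "is_admin": 0}

    # Candidate predicates over a write request; each is a named boolean check.
    CANDIDATES: Dict[str, Callable[[Request], bool]] = {
        "actor == owner": lambda r: r["actor"] == r["owner"],
        "actor is admin": lambda r: r.get("is_admin", 0) == 1,
    }

    def learn_invariants(observed: List[Request]) -> List[str]:
        """Keep the candidate predicates that held on every observed request."""
        return [name for name, pred in CANDIDATES.items()
                if all(pred(r) for r in observed)]

    def allow_write(request: Request, invariants: List[str]) -> bool:
        """Allow a write only if it satisfies every learned invariant."""
        return all(CANDIDATES[name](request) for name in invariants)

    # Learn from benign traffic seen during testing / pre-release of a feature...
    benign = [{"actor": 42, "owner": 42}, {"actor": 7, "owner": 7}]
    invariants = learn_invariants(benign)  # -> ["actor == owner"]

    # ...then enforce: a write against someone else's entity is blocked.
    assert allow_write({"actor": 42, "owner": 42}, invariants)
    assert not allow_write({"actor": 13, "owner": 42}, invariants)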

We have designed and implemented IVD to handle the unique challenges posed by modern online social networks. IVD is currently running at Facebook, where it infers and evaluates more than 200,000 invariants daily from a sample of roughly 500 million client requests, and checks the resulting invariants every second against millions of writes made to a graph database containing trillions of entities. Thus far IVD has detected several high-impact authorization bugs and has successfully blocked attempts to exploit them before code fixes were deployed.

Related Publications

Privacy in Machine Learning (PriML) Workshop at NeurIPS - November 30, 2021

Characterizing and Improving MPC-based Private Inference for Transformer-based Models

Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin S. Lee

UAI - July 27, 2021

Measuring Data Leakage in Machine-Learning Models with Fisher Information

Awni Hannun, Chuan Guo, Laurens van der Maaten

BMVC - November 22, 2021

Mitigating Reverse Engineering Attacks on Local Feature Descriptors

Deeksha Dangwal, Vincent T. Lee, Hyo Jin Kim, Tianwei Shen, Meghan Cowan, Rajvi Shah, Caroline Trippel, Brandon Reagen, Timothy Sherwood, Vasileios Balntas, Armin Alaghi, Eddy Ilg

NeurIPS - December 6, 2021

Antipodes of Label Differential Privacy: PATE and ALIBI

Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr
