Evaluating Changes to Fake Account Verification Systems

International Symposium on Research in Attacks, Intrusions and Defenses (RAID)


Abstract

Online social networks (OSNs) such as Facebook, Twitter, and LinkedIn give hundreds of millions of individuals around the world the ability to communicate and build communities. However, the extensive user base of OSNs provides considerable opportunity for malicious actors to abuse the system, with fake accounts generating the vast majority of harmful actions and content. Social networks employ sophisticated detection mechanisms based on machine-learning classifiers and graph analysis to identify and remediate the actions of fake accounts. Disabling or deleting detected accounts outright is untenable when the number of false positives (i.e., real users disabled) is significant in absolute terms. Using challenge-based verification systems such as CAPTCHAs or phone confirmation as a response to detected fake accounts enables erroneously detected real users to recover their access, while still making it difficult for attackers to abuse the platform.
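The remediation flow described above pairs a classifier score with a graduated response rather than an outright disable. The sketch below illustrates this idea in Python; it is a minimal illustration under assumed thresholds, not Facebook's implementation, and every identifier (Account, Challenge, route_flagged_account) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Challenge(Enum):
    """Verification challenges, ordered roughly by attacker cost."""
    CAPTCHA = auto()              # cheap; stops basic automation
    PHONE_CONFIRMATION = auto()   # costlier for attackers to pass at scale


@dataclass
class Account:
    account_id: int
    fake_score: float  # classifier output in [0, 1]


def route_flagged_account(account: Account,
                          challenge_threshold: float = 0.8,
                          disable_threshold: float = 0.99) -> str:
    """Decide the platform's response to a detected account.

    Accounts the classifier is nearly certain about may be disabled;
    everything else above the challenge threshold is sent to a
    verification challenge, so an erroneously detected real user can
    recover access by solving it.
    """
    if account.fake_score >= disable_threshold:
        return "disable"
    if account.fake_score >= challenge_threshold:
        # Escalate challenge difficulty with classifier confidence.
        challenge = (Challenge.PHONE_CONFIRMATION
                     if account.fake_score >= 0.9
                     else Challenge.CAPTCHA)
        return f"challenge:{challenge.name}"
    return "no_action"


print(route_flagged_account(Account(account_id=1, fake_score=0.85)))
# -> challenge:CAPTCHA
```

In a flow like this, a real user caught by the classifier lands in a challenge they can solve, while scripted fake accounts face friction that is expensive to automate at scale.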

To maintain a verification system's effectiveness over time, it is important to iterate on the system to improve the experience for real users and to adapt the platform's response to adversarial behavior. However, at present there is no established method for evaluating how effective each iteration is at stopping fake accounts while letting real users through. This paper proposes a method for assessing the effectiveness of experimental iterations of OSN verification systems, and presents an evaluation of this method against human-labelled ground truth on production Facebook data. Our method reduces the volume of human-labelled data required by 70%, decreases the time needed for classification by 81%, achieves precision and recall suitable for making decisions in response to experiments, and enables continuous monitoring of the effectiveness of the applied experimental changes.
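One way to read the precision/recall claim is as a comparison between verification outcomes and human labels on a sampled subset of accounts: failing (or abandoning) the challenge is treated as a prediction of "fake", and passing it as a prediction of "real". Below is a minimal sketch of that scoring step with an assumed data layout; the paper's actual pipeline is not shown here.

```python
from typing import Iterable, Tuple


def precision_recall(outcomes: Iterable[Tuple[bool, bool]]) -> Tuple[float, float]:
    """Each item is (predicted_fake, labelled_fake) for one sampled account.

    predicted_fake: True if the account failed or abandoned the challenge.
    labelled_fake:  True if human review judged the account fake.
    """
    tp = fp = fn = 0
    for predicted_fake, labelled_fake in outcomes:
        if predicted_fake and labelled_fake:
            tp += 1
        elif predicted_fake and not labelled_fake:
            fp += 1  # real user blocked by the challenge
        elif not predicted_fake and labelled_fake:
            fn += 1  # fake account that slipped through
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# Example: four accounts sampled from one experimental (treatment) group.
sample = [
    (True, True),    # fake account stopped by the challenge
    (True, False),   # real user who failed the challenge (false positive)
    (False, True),   # fake account that passed (false negative)
    (False, False),  # real user who passed
]
print(precision_recall(sample))  # -> (0.5, 0.5)
```

Scoring only a sampled, human-reviewed subset in this way is consistent with the abstract's claim of shrinking the labelling volume while still producing per-experiment precision and recall.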

