
Towards A User-Level Understanding of IPv6 Behavior

ACM Internet Measurement Conference (IMC)


Abstract

IP address classification and clustering are important tools for security practitioners in understanding attacks and employing proactive defenses. Over the past decade, network providers have begun transitioning from IPv4 to the more flexible IPv6, and a third of users now access online services over IPv6. However, there is no reason to believe that the properties of IPv4 addresses used for security applications should carry over to IPv6, and to date there has been no large-scale study comparing the two protocols at a user (as opposed to a client or address) level.

In this paper we establish empirical grounding on how both ordinary users and attackers use IPv6 in practice, compared with IPv4. Using data on benign and abusive accounts at a large online platform, we conduct user-centric analyses that assess the spatial and temporal properties of users’ IP addresses, and IP-centric evaluations that characterize the user populations on IP addresses. We find that compared with IPv4, IPv6 addresses are less populated with users and shorter lived for each user. While both protocols exhibit outlying behavior, we determine that IPv6 outliers are significantly less prevalent and diverse, and more readily predicted. We also study the effects of subnetting IPv6 addresses at different prefix lengths, and find that while /56 subnets are closest in behavior to IPv4 addresses for malicious users, either the full IPv6 address or /64 subnets are most suitable for IP-based security applications, with both providing better performance tradeoffs than IPv4 addresses. Ultimately, our findings provide guidance on how security practitioners can handle IPv6 for applications such as blocklisting, rate limiting, and training machine learning models.
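The subnetting the abstract describes amounts to truncating each IPv6 address to a fixed prefix length and using the resulting subnet as the key for blocklists or rate limiters. A minimal sketch of this aggregation using Python's standard `ipaddress` module (the `subnet_key` helper and its default of /64 are illustrative choices, reflecting the prefix lengths the paper evaluates, not code from the paper):

```python
import ipaddress

def subnet_key(addr: str, prefix_len: int = 64) -> str:
    """Truncate an IP address to its covering subnet, for use as a
    blocklist or rate-limit key. IPv4 addresses are kept as-is."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return str(ip)
    # strict=False lets us pass a host address and get its enclosing network
    net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return str(net)

# Two hosts in the same /64 collapse to one key, so abuse seen on one
# address can throttle its neighbors in the same subnet:
print(subnet_key("2001:db8:85a3:8d3:1319:8a2e:370:7348"))  # 2001:db8:85a3:8d3::/64
print(subnet_key("2001:db8:85a3:8d3::1"))                  # 2001:db8:85a3:8d3::/64
print(subnet_key("2001:db8:85a3:8d3::1", prefix_len=56))   # 2001:db8:85a3:800::/56
```

Coarser prefixes (e.g. /56) cover more hosts per key and thus risk more collateral blocking, which is the performance tradeoff the abstract alludes to when recommending full addresses or /64 subnets.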

