Crowd Intelligence Enhances Automated Mobile Testing

Automated Software Engineering Conference (ASE)


We show that information extracted from crowd-based testing can enhance automated mobile testing. We introduce POLARIZ, which generates replicable test scripts from crowd-based testing, extracting cross-app ‘motif’ events: automatically-inferred reusable higher-level event sequences composed of lower-level observed event actions. Our empirical study used 434 crowd workers from Mechanical Turk to perform 1,350 testing tasks on 9 popular Google Play apps, each with at least 1 million user installs. The findings reveal that the crowd was able to achieve 60.5% unique activity coverage and proved to be complementary to automated search-based testing in 5 out of the 9 subjects studied. Our leave-one-out evaluation demonstrates that coverage attainment can be improved (in 6 out of 9 cases, with no degradation on the remaining 3) by combining crowd-based and search-based testing.
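To illustrate the motif-event idea, the sketch below mines event subsequences (n-grams) that recur across interaction traces from different apps; such recurring subsequences are candidates for reusable higher-level events. This is a simplified illustration only, not POLARIZ's actual inference algorithm, and the trace data, event names, and the `extract_motifs` helper are all hypothetical.

```python
from collections import Counter

def extract_motifs(traces, n=2, min_apps=2):
    """Find event n-grams that recur across traces from distinct apps.

    traces: dict mapping an app name to its list of low-level event actions.
    Returns the set of n-grams observed in at least `min_apps` different apps.
    """
    seen_in = Counter()
    for events in traces.values():
        # Count each n-gram at most once per app, so the tally is
        # "number of apps containing this subsequence".
        grams = {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}
        for g in grams:
            seen_in[g] += 1
    return {g for g, count in seen_in.items() if count >= min_apps}

# Hypothetical low-level event traces recorded from three apps.
traces = {
    "mail":  ["tap_menu", "tap_search", "type_text", "tap_submit"],
    "shop":  ["tap_search", "type_text", "tap_submit", "swipe_down"],
    "notes": ["tap_menu", "tap_new", "type_text"],
}

motifs = extract_motifs(traces, n=2, min_apps=2)
# The "search, then type, then submit" bigrams recur in two apps,
# so they surface as cross-app motif candidates.
```

A real system would also need to abstract over concrete UI widgets and screen coordinates so that events from different apps become comparable; here the event names are assumed to be pre-normalized.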

