Using a web survey experiment, this study examines measurement comparability between two radio button question formats (fully labelled and endpoint labelled) and slider questions. The slider question is unique to web surveys: it displays a horizontal or vertical line with a movable bar, and respondents must click and drag the bar to the desired position on the line to register their answers. The study found that mean scores, break-off rates, completion times, reliability and respondents’ evaluations are similar across question types, but that the item non-response rate for slider questions is significantly higher than for radio buttons. A second experiment compares the direction of the slider scale (positive–negative vs negative–positive). With few exceptions, all measures, including mean scores, break-off rates, item non-response rates, completion times, reliability and respondents’ evaluations, are similar between the two directions. The implications and limitations of this study are also discussed.