February 2017 | NTIA Technical Memo TM-17-523
A Crowdsourced Speech Intelligibility Test that Agrees with, and Has Higher Repeatability than, Lab Tests
Stephen D. Voran; Andrew A. Catellier
Abstract: Crowdsourcing of subjective speech, audio, and video quality of experience (QoE) tests has received much interest and study, but crowdsourcing of speech intelligibility testing has not. We hypothesize that speech intelligibility tests offer a unique crowdsourcing opportunity because, unlike QoE testing, each trial has a correct answer. This allows us to both motivate and evaluate listeners. We describe the design, implementation, and analysis of a Crowdsourced Modified Rhyme Test (CMRT) that replicates our recent Laboratory MRT (LMRT) work. Our results show that CMRT results are more repeatable than LMRT results, that the CMRT reproduces LMRT results better than the LMRT reproduces itself, and that applying a simple listener selection rule produces per-condition CMRT results that agree almost exactly with the reference LMRT results.
Keywords: modified rhyme test (MRT); speech intelligibility; subjective test; crowdsource
For technical information concerning this report, contact:
Stephen D. Voran
Institute for Telecommunication Sciences
(303) 497-3839
svoran@ntia.gov
To request a reprint of this report, contact:
Lilli Segre, Publications Officer
Institute for Telecommunication Sciences
(303) 497-3572
LSegre@ntia.gov
Disclaimer: Certain commercial equipment, components, and software may be identified in this report to specify adequately the technical aspects of the reported results. In no case does such identification imply recommendation or endorsement by the National Telecommunications and Information Administration, nor does it imply that the equipment or software identified is necessarily the best available for the particular application or uses.