Dear colleagues,

Many of us are working with learner texts for which we would like to have valid and reliable proficiency measures. However, getting a learner corpus assessed by several trained raters is extremely costly.

The aim of the Crowdsourcing Language Assessment Project (CLAP) is to investigate whether crowdsourcing can offer practical solutions to the time and cost difficulties associated with proficiency assessment of learner texts for research purposes.

We would like to solicit the help of colleagues and students to complete a comparative judgment task. If you agree to help us, we will ask you to choose the ‘better’ text in each pair of texts, ideally repeating the task until you reach a minimum of 15 decisions. Adaptive comparative judgment is based on the assumption that comparing two performances is easier and more reliable than assigning a score to an individual performance (see more info about the method and the project here).

The current task focuses on the assessment of L2 English, but we hope that, if the results are satisfactory, the approach could also be used to add proficiency measures to a wide variety of learner corpora for different languages. Don’t hesitate to get in touch with us if you are interested in collaborating!

Please try CLAPing at

Thank you very much for your help!

With very best wishes,

Magali Paquot, Alexander König, Rachel Rubin, & Nathan Vandeweerd