Review Article

Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review

Table 5

Worker types based on their average precision.

Workers              Description

Proper               Completed tasks precisely
Random spammer       Gave worthless answers
Semirandom spammer   Answered most questions incorrectly while answering a few correctly, hoping to avoid detection as a spammer
Uniform spammer      Repeated the same answers
Sloppy               Not precise enough in their judgments
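
To make the taxonomy in Table 5 concrete, the following minimal sketch labels a worker from the agreement between that worker's judgments and a gold-standard set. It is illustrative only and not taken from the reviewed studies: the function name classify_worker and the precision cut-offs (0.8 and 0.5) are assumptions chosen for the example, not values reported in the article.

    def classify_worker(answers, gold, high=0.8, low=0.5):
        """Assign a Table 5 worker type from a worker's answers.

        answers, gold -- equal-length lists of categorical judgments
        high, low     -- assumed precision cut-offs (illustrative, not from the article)
        """
        correct = sum(a == g for a, g in zip(answers, gold))
        precision = correct / len(answers)

        if precision >= high:
            return "proper"              # completed tasks precisely
        if len(set(answers)) == 1:
            return "uniform spammer"     # repeated the same answer throughout
        if precision >= low:
            return "sloppy"              # judgments not precise enough
        if correct > 0:
            return "semirandom spammer"  # mostly wrong, a few correct to avoid detection
        return "random spammer"          # answers carry no useful information


    if __name__ == "__main__":
        gold = ["rel", "nonrel", "rel", "rel", "nonrel"]
        print(classify_worker(["rel", "nonrel", "rel", "rel", "nonrel"], gold))  # proper
        print(classify_worker(["rel", "rel", "rel", "rel", "rel"], gold))        # uniform spammer

In practice such labels are derived from average precision over many tasks and often combined with agreement-based filters; the single-pass thresholds above merely show how the five categories partition worker behaviour.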