Review Article
Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review
Table 5
Worker types based on their average precision.
| Workers | Description |
| --- | --- |
| Proper | Completed tasks precisely |
| Random spammer | Gave worthless answers |
| Semirandom spammer | Answered incorrectly on most questions while answering correctly on a few, hoping to avoid detection as a spammer |
| Uniform spammer | Repeated the same answer |
| Sloppy | Not precise enough in their judgments |
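The taxonomy above can be sketched as a simple classifier over a worker's answers to gold-standard questions. This is a minimal illustrative sketch, not a method from the reviewed literature: the function name, the precision thresholds, and the uniform-answer check are all assumptions chosen for clarity.

```python
def classify_worker(answers, gold):
    """Label a crowd worker by their accuracy pattern on gold questions.

    answers: dict mapping question id -> the worker's label
    gold:    dict mapping question id -> the correct label
    Thresholds below are illustrative assumptions, not published values.
    """
    graded = [answers[q] == gold[q] for q in gold if q in answers]
    if not graded:
        return "unknown"
    precision = sum(graded) / len(graded)

    # Uniform spammer: repeats one answer regardless of the question.
    distinct = {answers[q] for q in gold if q in answers}
    if len(distinct) == 1 and precision < 0.5:
        return "uniform spammer"

    if precision >= 0.9:
        return "proper"              # completes tasks precisely
    if precision >= 0.6:
        return "sloppy"              # imprecise but not adversarial
    if precision >= 0.3:
        return "semirandom spammer"  # mostly wrong, a few correct answers
    return "random spammer"          # essentially worthless answers
```

In practice, a reliability pipeline would also weight questions by difficulty and aggregate over many gold checks before labeling a worker.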