Bot detection in online studies and experiments
Most experimental and online studies in the empirical social sciences rely on online panels from crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), Prolific, Qualtrics Online Panel, and their lesser-known competitors. The key benefit of all these services is easy and affordable access to a large pool of diverse participants, a privilege previously reserved for globally leading and financially independent universities. However, this newly leveled playing field comes at a cost: semi- or fully automated response tools, also called bots, decrease data quality and reliability. This case describes how two online studies were conducted on a crowdsourcing platform in anticipation of bot responses. Specifically, the case offers insights into the study design process, the selection of appropriate survey questions and bot traps, and the ex-post analysis and filtering of bot responses. Best practices are identified, and potential pitfalls are explained. The description should help readers design anticipatory online studies and experiments that increase data quality, validity, and reliability.
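The ex-post filtering the abstract mentions can be sketched in a few lines. The snippet below is an illustrative example, not the authors' actual procedure: it combines two heuristics commonly paired with bot traps, a honeypot item that human participants are instructed to leave blank, and an implausibly fast completion time. All field names and the time threshold are hypothetical assumptions.

```python
MIN_SECONDS = 60  # assumed minimum plausible completion time (hypothetical)

def is_suspected_bot(response: dict) -> bool:
    """Return True if a response trips either bot-detection heuristic."""
    # Honeypot trap: participants were told to skip this item, so any
    # non-empty answer suggests an automated responder.
    if response.get("honeypot", "").strip():
        return True
    # Speeder check: finishing far faster than pilot testers is a red flag.
    if response.get("duration_seconds", 0) < MIN_SECONDS:
        return True
    return False

responses = [
    {"id": 1, "honeypot": "",              "duration_seconds": 240},  # plausible human
    {"id": 2, "honeypot": "great survey",  "duration_seconds": 180},  # fell into trap
    {"id": 3, "honeypot": "",              "duration_seconds": 12},   # speeder
]

clean = [r for r in responses if not is_suspected_bot(r)]
```

In practice such filters are usually combined with further checks (attention checks, duplicate IP or ID screening) rather than applied in isolation, since any single heuristic also catches some inattentive humans.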
History
Publication status
- Published
File Version
- Accepted version
Journal
- SAGE Research Methods
Publisher
- SAGE
Department affiliated with
- Strategy and Marketing Publications
Notes
- Online ISBN: 9781529601312
Full text available
- Yes
Peer reviewed?
- Yes
Legacy Posted Date
- 2021-10-01
First Open Access (FOA) Date
- 2021-10-07
First Compliant Deposit (FCD) Date
- 2021-10-01