Monetary incentives in Crowdsourcing platforms
Designing the right incentive structure and pricing policies for workers is a central component of online crowdsourcing platforms such as Amazon Mechanical Turk.
- The job requester’s goal is to maximize the utility derived from the task under a limited budget.
- Workers’ goal is to maximize their individual profit by deciding which tasks to perform and at what price.
Yet current crowdsourcing platforms offer requesters only limited capabilities for designing pricing policies, and tasks are often priced by rules of thumb. This can result in inefficient use of the requester's budget, or in workers losing interest in the task.
Price negotiation with workers
Previous work in this direction [Singer et al., HCOMP’11] has focused on designing online truthful mechanisms in the bidding model. This requires eliciting the workers’ true costs, which can be challenging on such platforms. In this paper, we instead focus on the posted-price model, in which workers are offered a take-it-or-leave-it price; this model is more easily implemented in online crowdsourcing platforms. Figure 1 illustrates how price negotiation proceeds in our posted-price model.
The main challenge in determining the payments is that the distribution of the workers’ costs (the “cost curve” F(p), illustrated in Figure 2) is unknown. This leads to a trade-off between exploration and exploitation: the mechanism must “explore” by experimenting with potentially suboptimal prices, yet “exploit” what it has learned by offering the price that appears best so far. We cast this problem as a multi-armed bandit (MAB) problem under a strict budget constraint B, and use the regret-minimization approach from online learning to design our mechanism.
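The bandit view above can be sketched in code. This is a minimal illustrative sketch, not the paper's exact BP-UCB mechanism: it discretizes prices into arms, uses the acceptance rate at each price as an estimate of F(p), and adds a standard UCB confidence bonus, stopping when the budget runs out. The function name, price grid, and per-price index are our assumptions for illustration.

```python
import math
import random

def posted_price_ucb(prices, budget, sample_cost, max_offers=10000):
    """Illustrative UCB-style posted-price sketch over a discretized price grid.

    Each candidate price is a bandit arm; a worker accepts a posted price p
    iff p >= their private cost, so the acceptance rate at p estimates F(p).
    """
    k = len(prices)
    offers = [0] * k       # offers made at each price
    accepts = [0] * k      # offers accepted at each price
    spent, tasks = 0.0, 0

    for t in range(1, max_offers + 1):
        def ucb_index(i):
            # Try each price once; afterwards, rank prices by estimated
            # acceptance rate plus a confidence bonus, per unit of budget.
            if offers[i] == 0:
                return float("inf")
            mean = accepts[i] / offers[i]
            bonus = math.sqrt(2.0 * math.log(t) / offers[i])
            return min(mean + bonus, 1.0) / prices[i]

        i = max(range(k), key=ucb_index)
        if spent + prices[i] > budget:   # cannot afford another accepted offer
            break
        offers[i] += 1
        if prices[i] >= sample_cost():   # worker's take-it-or-leave-it decision
            accepts[i] += 1
            spent += prices[i]
            tasks += 1
    return tasks, spent

# Example: worker costs drawn uniformly from [0, 1]
# tasks, spent = posted_price_ucb([0.2, 0.4, 0.6, 0.8, 1.0], 20.0,
#                                 random.Random(0).random)
```

By construction the sketch never exceeds the budget, mirroring the budget-feasibility guarantee discussed below; the real mechanism's price-selection rule and confidence bounds differ in the details.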
Our mechanism BP-UCB
- We present a novel posted-price mechanism, BP-UCB, for online budgeted procurement, which is guaranteed to be budget feasible, to achieve near-optimal utility for the requester, and to be incentive compatible (truthful) for workers.
- We prove no-regret bounds for BP-UCB; our analysis explicitly separates the regret into budget wasted through overpayment and offers rejected through underpayment.
We carry out extensive experiments to evaluate the practical performance of our mechanism on simulated cost distributions (Figure 3 below). Additionally, to demonstrate the effectiveness of our approach on real-world inputs, we carried out a Mechanical Turk study to collect actual cost distributions from workers (Figure 4 below).
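To make the benchmark behind such simulations concrete, here is a toy sketch (our formulation, not the paper's exact benchmark): with a known cost curve F and n available workers, a fixed posted price p completes roughly min(n * F(p), B / p) tasks, since completions are capped both by how many workers accept and by how many accepted tasks the budget can pay for. The function name and the price grid are illustrative assumptions.

```python
def best_fixed_price(prices, F, n_workers, budget):
    """Omniscient fixed-price benchmark: expected tasks at price p are
    capped both by worker acceptance (n_workers * F(p)) and by the
    budget (budget / p, since each accepted task costs p)."""
    return max(prices, key=lambda p: min(n_workers * F(p), budget / p))

# Example: costs uniform on [0, 1], so F(p) = min(p, 1); 1000 workers,
# budget 100. Low prices are rejected too often, high prices drain the
# budget too fast; an intermediate price wins.
# best_fixed_price([0.2, 0.4, 0.6, 0.8, 1.0],
#                  lambda p: min(p, 1.0), 1000, 100.0)  # -> 0.4
```

An online mechanism's regret is measured against this kind of best-fixed-price-in-hindsight baseline, which is computable only when F is known.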
For more, see our full paper, Truthful Incentives in Crowdsourcing Tasks using Regret Minimization Mechanisms.
Adish Singla, ETH Zurich
Andreas Krause, ETH Zurich