by Yiling Chen, Jason Hartline, Yang Liu, Bo Waggoner & Dan Weld
The role of reputation systems in online markets, such as those for crowdwork, is to transform a single-shot game between an individual requester and worker into a repeated game between the population of requesters and the population of workers. In the single-shot game cooperation can break down, e.g., workers may provide only low-quality service, but in the repeated game cooperation can be sustained, and workers are incentivized to do high-quality work (cf. Kandori, 1992). Reputation systems will fail, however, if the marginal cost to a requester of providing an informative review of a worker is greater than the marginal benefit of more accurate future ratings.1
The main driver of this inequality is the free-rider problem: requesters benefit from the public good (i.e. worker reputation) created by the reputation system even if they do not contribute to it.
In this blog post we describe two main ideas for making reputation systems more effective. First, to increase the marginal benefit of providing an accurate rating, we suggest changing reputation from a public good to a private good by casting it as a recommendation individualized to each requester. For example, a requester who gives all workers a top rating is reporting no preference and can be recommended any worker, while a requester with more informative reports will be assigned workers that the system predicts this requester will prefer. Second, to decrease the marginal cost of providing informative reviews, we suggest linking a requester's reviews of several workers (i.e., having the requester rank the workers relative to each other instead of scoring them absolutely).
The rest of this blog post is organized as follows. In the first section we discuss some of the reasons the marginal cost of informative reviews may outweigh their marginal benefit. In the second section we describe how reenvisioning the reputation system as a recommender system increases the marginal benefit of accurate reviews. In the third section we describe how linking decisions can make it easier for requesters to give informative reports.
1. Costs and Benefits of Reputation Systems
We believe that ratings are inaccurate because the marginal cost to a requester for providing an informative review is greater than the marginal benefit of more accurate future ratings. There are several costs associated with submitting a review, especially an accurate assessment of a poor worker.
- It takes time to enter a review.
- A review incurs the risk of embroiling the requester in a dispute.
- A negative review may give the requester the reputation of being a ‘harsh grader’ and cause other skilled workers to avoid the requester for fear of damaging their reputation.
- There may be off-platform consequences, such as unfavorable posts on Turkopticon.
By lowering these costs, as we discuss below, an improved reputation system could increase the accuracy of reviews. There are also several reasons why the marginal benefit of a review is low:
- Requester feedback is often aggregated into a reputation score, and a single assessment will likely not visibly affect the worker's rating.
- Even if a single assessment did change the worker's score, the change provides no new information to the requester, who already knows how this worker performed. (If the rating affected the scores this requester sees for other workers, however, by personalizing these ratings, then there would be marginal benefit; such personalized ratings may be thought of as a private good.)
There are several ways to decrease the cost of providing truthful reviews:
- One can reduce the chance of disputes by making reviews anonymous and only showing a worker their average score. This approach is taken by Uber (and others) who only show drivers their average rating for the most recent 500 trips.
- One way to shrink or eliminate the marginal time cost of submitting a review is to provide a simple UI that makes entering a review fast while making it cumbersome to skip the review task. This approach may alienate users, however, as may Amazon's repeated nagging requests for reviews.
- Alternatively, in lieu of an explicit review, it may be possible to observe other actions performed by the requester and use them to infer his or her preferences. For example, one could detect whether the requester chooses to subsequently rehire the worker, which is presumably a very strong signal that previous interactions were positive.
One can also try to eliminate the free-rider problem by increasing the marginal benefit of reviews. Again, several options are possible.
- One way of increasing the marginal value is to personalize recommendations. This method is used in Boomerang by Gaikwad et al. (2016) and we discuss it further in the next section.
- Another method for increasing marginal value of correct reviews is changing overall system behavior in a way that impacts the requester. For example, Boomerang gives temporal priority to well-rated workers on subsequent jobs posted by a requester. This creates a clear disincentive to inflate the grade of a poor worker, but may not provide enough of an incentive for requesters to post ratings at all.
One novel approach that deserves more attention is blurring. In this model, a requester's ability to observe the reputation of a worker is proportional to the number and quality of the ratings that the requester has provided. If the system supports strong identities, then a new requester could be allowed unfettered access to the reputation ratings; but if the requester engaged in transactions and then failed to report accurate ratings, the system would hide information in subsequent rounds. Of course, such a mechanism requires a way to estimate the quality of a review. One method, which needs further thought and experimental evaluation, would be to measure the entropy of the requester's reviews (this discourages giving uniformly positive five-star reviews) and their agreement with good reviewers (calculated using EM).
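As a rough sketch of how the entropy half of this quality measure might drive blurring, consider the following Python snippet. The function names and the visibility rule are our own illustrative choices, not a tested design:

```python
from collections import Counter
from math import log2

def review_entropy(ratings):
    """Shannon entropy (in bits) of a requester's rating distribution.
    A requester who gives every worker five stars has entropy 0; one
    whose ratings discriminate among workers scores higher."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def visibility(ratings, max_entropy=log2(5)):
    """Fraction of reputation information revealed to this requester,
    proportional to the informativeness of their past reviews."""
    if not ratings:
        return 1.0  # new requesters with strong identities see everything
    return min(1.0, review_entropy(ratings) / max_entropy)

print(visibility([5, 5, 5, 5]))        # 0.0: uniform five-star reviews
print(visibility([5, 3, 4, 2, 5, 1]))  # ~0.97: discriminating reviews
```

Under this rule, a requester who rates every worker five stars would see a fully blurred reputation signal, while a requester whose ratings discriminate among workers would see nearly everything.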
2. Reputation Systems as Recommender Systems
One approach to solving the free-rider problem is to turn the public good into a private good. If worker reputation is no longer something every requester benefits from equally, and instead each requester gets personalized recommendations or matches of workers according to the preferences revealed by his feedback, then the marginal value of providing an accurate assessment increases. This alleviates the incentive to free ride. For example, if a requester always rates five stars, his revealed preference is that he is happy with any worker, and he will hence be matched with workers that others think are less skilled. This approach not only provides incentives for requesters to report accurate assessments but also accommodates heterogeneity in requester preferences, which can increase the expressiveness of reputation systems. In such a system a requester's feedback is taken not only as an assessment but also as an indication of his preference. The following gives a sketch of how such a system would work.
1. Requesters provide reviews of workers in their own terms.
2. The requesters' reviews are normalized and aggregated.
3. The aggregated reviews are reinterpreted into individualized recommendations for each requester (according to the requester's preferences inferred in step 1).
There are two key properties of such a system. First, if a requester gives uninformative reviews, e.g., all five-star ratings, then the requester gets back equally uninformative recommendations, e.g., every worker appears top-rated. Second, uninformative reviews such as all five-star ratings, once normalized and aggregated, will not dilute the informative reviews (which would make the aggregate reviews less informative).
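To make the second property concrete, here is a minimal Python sketch (our own, with hypothetical names) of a per-requester z-score normalization. A requester who gives everyone the same score contributes a vector of zeros, so uninformative reviews neither earn useful recommendations nor dilute the aggregate:

```python
import numpy as np

def normalize_reviews(scores):
    """Z-score one requester's raw ratings. A requester who gives
    identical scores to everyone (zero variance) contributes a vector
    of zeros: no signal in, no dilution of the aggregate."""
    scores = np.asarray(scores, dtype=float)
    std = scores.std()
    if std == 0:
        return np.zeros_like(scores)
    return (scores - scores.mean()) / std

print(normalize_reviews([5, 5, 5]))  # [0. 0. 0.]
print(normalize_reviews([5, 3, 1]))  # [ 1.22  0.   -1.22]
```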
The Netflix recommendation system is a canonical example. Users enjoy recommendations based on data contributed by others, and their own reviews are used in collaborative filtering to generate personalized recommendations (cf. Ekstrand et al., 2011). Such an approach is based on the idea of decomposing a large but sparse rating matrix into three components, R ≈ U Σ Mᵀ, where R is the rating matrix, the rows of U represent users, Σ holds the latent variables, and the rows of M represent movies. This decomposition identifies hidden similarity features that any two users (rating providers) may share. It is therefore in a user's best interest (more accurate recommendations) to truthfully reveal his preferences.
*Figure: Collaborative filtering based movie recommendation.*
Collaborative filtering is a powerful tool for identifying the most representative latent dimensions, the ones that best describe the rating matrix. This decomposition goes a long way toward characterizing the attributes that best "predict" a future rating (or reputation). Collaborative filtering methods like the one discussed above do not provide a natural labeling of the latent factors, however; it would perhaps be interesting to combine collaborative filtering with more sophisticated data mining approaches to obtain natural labels for the discovered latent dimensions.
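As a toy illustration of the decomposition, the sketch below factors a small, fully observed rating matrix with numpy's SVD and truncates it to two latent dimensions. A production system would instead use factorization methods that handle missing entries (cf. Koren et al., 2009); the matrix here is made up purely for illustration:

```python
import numpy as np

# Toy requester-by-worker rating matrix (rows: requesters, cols: workers).
R = np.array([[5, 4, 1, 1],
              [4, 5, 1, 2],
              [1, 1, 5, 4],
              [2, 1, 4, 5]], dtype=float)

# R = U Sigma M^T; truncating to k latent dimensions keeps only the
# strongest shared "taste" factors.
U, sigma, Mt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(sigma[:k]) @ Mt[:k, :]

# R_hat predicts how each requester would rate each worker, including
# requester-worker pairs that never interacted.
print(np.round(R_hat, 1))
```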
Another benefit of using a collaborative filtering approach is that it can improve the expressiveness of a reputation system. So far, most reputation systems rely on a single (and often naive) dimension for scoring agents, e.g., the accomplishment rate on AMT. But there are potentially many other dimensions that help determine such a "reputation score".
Naturally, the above procedure faces the same challenges as classical recommender systems. First, a recommendation-based reputation system needs to deal with the cold-start problem (cf. Sedhain et al., 2014) when a new worker arrives, which may to some degree discourage new users. Second, the accuracy of the above approach depends on various modeling assumptions; when a model works and when it does not are interesting questions for future study.
2.1 Theoretical Approaches
We provide some initial thoughts on how one can formally approach reputation systems as a kind of recommender system.
A first question to solve is an offline learning problem: given the reviews acquired so far (say, requesters reviewing workers), how can we predict which matches of requesters to workers would be valuable in the future? One could cast this as a matrix-completion problem and apply collaborative filtering techniques to predict the "missing entries", i.e., the reviews that would be given for unrealized matches (cf. Koren et al., 2009). Another approach would be to apply techniques from crowdsourcing for inferring underlying parameters given responses on tasks (cf. Moon, 1996; Karger et al., 2014). If we can infer parameters for reviewer and worker preferences and skills, perhaps we can use these for prediction.
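As one simple instantiation of the second approach, the sketch below alternates, in the spirit of EM (cf. Moon, 1996), between estimating each worker's quality as a reliability-weighted average of reviews and estimating each reviewer's reliability from agreement with the consensus. The model is our own simplification, not the algorithm of Karger et al.:

```python
import numpy as np

def infer_quality(R, mask, iters=20):
    """Jointly estimate worker quality and reviewer reliability.
    R: requesters-by-workers matrix of scores; mask: 1 where a review
    exists, 0 otherwise."""
    reliability = np.ones(R.shape[0])
    for _ in range(iters):
        # Consensus quality per worker, weighting reliable reviewers more.
        w = reliability[:, None] * mask
        quality = (w * R).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
        # Reliability: inverse of a reviewer's mean squared disagreement
        # with the current consensus.
        err = (((R - quality[None, :]) ** 2) * mask).sum(axis=1)
        counts = np.maximum(mask.sum(axis=1), 1)
        reliability = 1.0 / (err / counts + 1e-3)
    return quality, reliability
```

The inferred parameters could then feed the prediction step above: a missing review is predicted from the worker's consensus quality, adjusted for the requester's inferred tastes.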
A next step is to make this problem dynamic. A system obtains new reviews over time and can make or influence future assignments of workers to requesters. There is an explore-exploit tradeoff because the system may benefit in the long run from initially making suboptimal assignments in order to learn about skills and preferences. (In some systems an initial questionnaire could directly elicit preferences.) Here, an interesting challenge is to combine algorithmic approaches such as bandit learning or active learning with the above inference algorithms (cf. Bresler et al., 2015).
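As a deliberately simple illustration of the explore-exploit tradeoff, here is an epsilon-greedy sketch of worker assignment. This is a generic bandit heuristic with hypothetical names, not the algorithm of Bresler et al. (2015):

```python
import random

def assign_worker(workers, est_score, n_reviews, epsilon=0.1):
    """Epsilon-greedy assignment: usually pick the best-rated worker
    for this requester, occasionally explore a little-reviewed one.
    est_score: worker -> current estimated quality for this requester.
    n_reviews: worker -> number of reviews observed so far."""
    if random.random() < epsilon:
        # Explore: learn about the least-reviewed worker.
        return min(workers, key=lambda w: n_reviews.get(w, 0))
    # Exploit: assign the worker with the highest estimated quality.
    return max(workers, key=lambda w: est_score.get(w, 0.0))
```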
Finally, further study of the incentives of recommender systems is warranted. Such a study would need to explicitly model the utility of a requester in terms of the accuracy of the reviews provided and the recommendations received. See Jurca and Faltings (2003) and Dasgupta and Ghosh (2013) for initial work in this area.
3. Feedback Elicitation
Another approach to reversing the inequality between marginal cost and marginal benefit is to increase the informativeness of the elicited feedback while maintaining or lowering the marginal cost of providing it.
In many reputation systems, users are asked to provide a numerical rating or score for the sellers or service providers they have interacted with. If there is no option to opt out of providing a rating, the strategy of always giving a five-star rating regardless of the actual experience is arguably the least costly one for users. This strategy, however, leads to completely uninformative ratings and defeats the purpose of eliciting feedback.
An alternative approach is to ask requesters to rank the past three workers they have interacted with. A requester no longer has the option of saying that all workers are excellent and hence is more likely to provide a ranking that is closer to his true experience.
There are two reasons why soliciting rankings may lead to better outcomes than soliciting scores. The first is that humans find ranking easier than scoring (Miller, 1956, 1994). The second is that requesters may differ in their perception of the magnitudes of workers' qualities, or may prefer to exaggerate the quality of workers to avoid retribution or costly disputes. For example, Frankel (2014) studied a related delegation problem and showed that under natural assumptions soliciting ranking information is optimal when requesters would otherwise have incentives to misreport scores. More generally, the ranking approach is related to the "linking decisions" idea in economics (cf. Jackson and Sonnenschein, 2007). When requesters are asked to score workers, they make individual decisions on each worker; when requesters rank workers, their decisions are linked.
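To see how linking works mechanically, the sketch below (with illustrative names of our own) converts each ranking of three workers into pairwise comparisons and aggregates them by counting pairwise wins, Borda-style. One linked report yields several comparisons at once:

```python
from collections import defaultdict
from itertools import combinations

def pairwise_from_ranking(ranking):
    """A ranking [best, ..., worst] implies a pairwise win for every
    earlier worker over every later one."""
    return list(combinations(ranking, 2))

def borda_scores(rankings):
    """Aggregate rankings into per-worker scores via pairwise wins."""
    wins = defaultdict(int)
    for ranking in rankings:
        for winner, loser in pairwise_from_ranking(ranking):
            wins[winner] += 1
    return dict(wins)

# Two requesters each rank the last three workers they hired:
rankings = [["alice", "bob", "carol"], ["alice", "carol", "bob"]]
print(borda_scores(rankings))  # {'alice': 4, 'bob': 1, 'carol': 1}
```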
Changing the way feedback is solicited also raises interesting interface questions. While it may conceivably be easier for a requester to compare two workers than to score them, what about three workers, or five? Time adds further complications, as people may not remember their past experiences well. We think there is an interesting research agenda here in understanding the impact of interface design on the informativeness-versus-cost tradeoff in eliciting feedback. The literature on ranking in peer grading may be a useful starting point (cf. Raman and Joachims, 2014).
1 For simplicity of exposition, we speak exclusively of a worker's reputation, but our ideas also apply to the equally important problem of requester reputation.
Designing More Informative Reputation Systems was one of the group projects pursued at the CMO-BIRS 2016 WORKSHOP ON MODELS AND ALGORITHMS FOR CROWDS AND NETWORKS.
- Kandori, Michihiro. “Social norms and community enforcement.” The Review of Economic Studies 59.1 (1992): 63-80.
- Horton, John J., and Joseph M. Golden. "Reputation Inflation: Evidence from an Online Labor Market." 2015. http://econweb.tamu.edu/common/files/workshops/Theory%20and%20Experimental%20Economics/2015_3_5_John_Horton.pdf
- S.S. Gaikwad, D. Morina, A. Ginzberg, C. Mullings, S. Goyal, D. Gamage, C. Diemert, M. Burton, S. Zhou, M. Whiting, K. Ziulkoski, A. Ballav, A. Gilbee, S.S. Niranga, V. Sehgal, J. Lin, L. Kristianto, A. Richmond-Fuller, J. Regino, N. Chhibber, D. Majeti, S. Sharma, K. Mananova, D. Dhakal, W. Dai, V. Purynova, S. Sandeep, V. Chandrakanthan, T. Sarma, S. Matin, A. Nassar, R. Nistala, A. Stolzoff, K. Milland, V. Mathur, R. Vaish, and M.S. Bernstein (2016) Boomerang: Rebounding the Consequences of Reputation Feedback on Crowdsourcing Platforms. To appear in UIST-16.
- Ekstrand, Michael D., John T. Riedl, and Joseph A. Konstan. “Collaborative filtering recommender systems.” Foundations and Trends in Human-Computer Interaction 4.2 (2011): 81-173.
- Suvash Sedhain, Scott Sanner, Darius Braziunas, Lexing Xie, and Jordan Christensen. 2014. Social collaborative filtering for cold-start recommendations. In Proceedings of the 8th ACM Conference on Recommender systems (RecSys ’14). ACM, New York, NY, USA, 345-348. DOI=http://dx.doi.org/10.1145/2645710.2645772
- Koren, Yehuda, Robert Bell, and Chris Volinsky. “Matrix factorization techniques for recommender systems.” Computer 42.8 (2009): 30-37.
- Moon, Todd K. "The expectation-maximization algorithm." IEEE Signal Processing Magazine 13.6 (1996): 47-60.
- Karger, David R., Sewoong Oh, and Devavrat Shah. “Budget-optimal task allocation for reliable crowdsourcing systems.” Operations Research 62.1 (2014): 1-24.
- Bresler et al. "Regret Guarantees for Item-Item Collaborative Filtering." http://arxiv.org/abs/1507.05371. (Applies a bandit algorithm to choose which ratings to ask for.)
- Jurca, Radu, and Boi Faltings. “An incentive compatible reputation mechanism.” E-Commerce, 2003. CEC 2003. IEEE International Conference on. IEEE, 2003.
- Dasgupta, Anirban, and Arpita Ghosh. “Crowdsourced judgement elicitation with endogenous proficiency.” Proceedings of the 22nd international conference on World Wide Web. ACM, 2013.
- Jackson and Sonnenschein (2007), Overcoming Incentive Constraints by Linking Decisions. Econometrica.
- Frankel (2014), Aligned Delegation. American Economic Review.
- Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2), 81.
- Miller, G. A. (1994). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 101(2), 343.
- Raman, K., & Joachims, T. (2014, August). Methods for ordinal peer grading. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1037-1046). ACM.