Prediction markets can be a remarkably effective way to divine the wisdom of crowds.
Prediction markets, of course, only work for predictions – or more generally for what I call “verifiable” questions. A verifiable question is one for which it is possible, at some point, to determine the answer definitively – for example, predicting the winner of the Oscar for Best Picture. Verifiability is what allows the prediction market to determine how much each player wins or loses.
The problem is that many issues we want addressed are not verifiable in this sense.
For example, decisions. Would it be better to continue negotiating with Iran over its nuclear ambitions, or should a military strike be launched? We can speculate and debate about this, but we’ll never know the answer for sure, because only one path will be taken, and we’ll never know what would have happened had we taken the other.
Wouldn’t it be good if we had something like a prediction market, but which works for non-verifiable issues?
Amazon.com book ratings are an interesting case. Whether a book is or is not a good one is certainly a non-verifiable issue. Yet Amazon has created a mechanism for combining the views of many people into a single collective verdict, e.g. 4.5 stars. At one level the system is just counting votes; Amazon users vote by choosing a numerical star level, and Amazon averages these. But note that Amazon’s product pages also allow users to make comments, and reply to comments; and these comment streams can involve quite a lot of debate. It is plausible that, at least sometimes, a user’s vote is influenced by these comments. So the overall rating is at least somewhat influenced by collective deliberation over the merits of the book.
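The vote-counting part of this mechanism is simple to sketch. Here is a minimal illustration in Python, assuming a plain average of star votes rounded to the nearest half star for display (the half-star rounding is my assumption, based on the “4.5 stars” example above; Amazon’s actual display logic may differ):

```python
def average_stars(ratings):
    """Average a list of 1-5 star votes, rounded to the nearest half star.

    The half-star rounding is an assumption for illustration; the core
    mechanism is simply the arithmetic mean of all users' votes.
    """
    mean = sum(ratings) / len(ratings)
    return round(mean * 2) / 2

# Five hypothetical votes on a book:
print(average_stars([5, 5, 4, 4, 3]))  # 4.0
```

Note that the deliberation in the comment threads enters this computation only indirectly, by shifting the individual votes before they are averaged.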
Amazon’s mechanism is an instance of a more general class, for which I’ve coined the term “deliberative aggregator”. A deliberative aggregator has three key features:
- It is some kind of virtual forum, thereby allowing large-scale, remote and asynchronous participation.
- It supports deliberation, and its outputs in some way depend on or at least are influenced by that deliberation. (That’s what makes it “deliberative.”)
- It aggregates data of some kind (e.g. ratings) to produce a collective viewpoint or judgement.
YourView is another example of a deliberative aggregator. YourView’s aggregation mechanism (currently) is to compute the “weighted vote”: the votes of users weighted by their credibility, where a user’s credibility is a score, built up over time, indicating the extent to which, in their participation on YourView, they have exhibited “epistemic virtues” – the general traits of good thinkers.
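The weighted-vote idea can be sketched as a credibility-weighted average. The sketch below is illustrative only: the vote encoding (+1 agree, −1 disagree) and the credibility values are my assumptions, and the way YourView actually derives credibility from “epistemic virtues” is not specified here:

```python
def weighted_vote(votes, credibility):
    """Credibility-weighted average of user votes.

    votes: dict mapping user -> vote value (here +1 agree, -1 disagree)
    credibility: dict mapping user -> non-negative credibility score

    A user with high credibility pulls the collective verdict further
    than a user with low credibility.
    """
    total_weight = sum(credibility[u] for u in votes)
    if total_weight == 0:
        return 0.0
    return sum(votes[u] * credibility[u] for u in votes) / total_weight

# Example: one highly credible user agrees; two low-credibility users disagree.
votes = {"alice": 1, "bob": -1, "carol": -1}
cred = {"alice": 0.9, "bob": 0.2, "carol": 0.1}
print(weighted_vote(votes, cred))  # 0.5 -- alice's credibility outweighs the raw majority
```

The design choice here is that the collective verdict can diverge from the simple majority when the minority has demonstrated better thinking over time.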
Many other kinds of deliberative aggregator are possible. An interesting theoretical question is: what is the best design for a deliberative aggregator? And more generally: what is the best way to discern collective wisdom on non-verifiable questions?