
Archive for the ‘Deliberative Aggregator’ Category

[This post is in response to a request from a colleague for “the best thing to introduce someone to YourView”.]

The goal of the YourView project was to develop a platform for identifying collective wisdom, focusing on major public issues, aiming to remedy some of the defects of democracy.  The platform was a deliberative aggregator, i.e. it supported large-scale deliberation and its aggregated outputs reflected that deliberation.  The project was active from around 2011 to 2014, with its moment of glory being when the platform was used as part of Fairfax Media’s coverage of the 2013 Federal Election.

The YourView platform is still alive and can be explored, though there has been no real activity on the site since 2014.  There is a collection of pages and links about the YourView project.

The best theoretical overview is Cultivating Deliberation for Democracy, particularly the second half.  The interview with Fairfax's Michael Short for "The Zone" is also a good read.



While in Northern Virginia last week for a workshop on critical thinking for intelligence analysts, I was also able to give a couple of presentations at The Mitre Corporation, a large US federally-funded R&D organisation.  The audience included Mitre researchers and other folks from the US intelligence community.  Thanks to Steve Rieber and Mark Brown for organising.

Here are the abstracts:

Deliberative Aggregators for Intelligence

It is widely appreciated that under appropriate circumstances “crowds” can be remarkably wise.  A practical challenge is always how to extract that wisdom.  One approach is to set up some kind of market.  Prediction markets are an increasingly familiar example.  Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.

An alternative is a class of systems which can be called “deliberative aggregators.” These are virtual discussion forums, offering standard benefits such as remote and asynchronous participation, and the ability to involve large numbers of participants.  Distinctively, however, they have some kind of mechanism for automatically aggregating discrete individual viewpoints into a collective view.

YourView (www.yourview.org.au) is one example of a deliberative aggregator.  On YourView, issues are raised; participants can vote on an issue, make comments and replies, and rate comments and replies.  Through this activity, participants earn credibility scores, which measure the extent to which they exhibit epistemic virtues such as open-mindedness and cogency.  YourView then uses credibility scores to determine the collective wisdom of the participants on the issue.

Although YourView is relatively new and has not yet been used in an intelligence context, I will discuss a number of issues pertaining to such potential usage, including: how YourView can support deliberation over complex issue clusters (issues, sub-issues, etc.); and how it can handle open-ended questions, such as “Who is the likely successor to X as leader of country Y?”.

Hypothesis Mapping as an Alternative to Analysis of Competing Hypotheses

The Analysis of Competing Hypotheses is the leading structured analytic method for hypothesis testing and evaluation.  Although ACH has some obvious virtues, these are outweighed by a range of theoretical and practical drawbacks, which may partly explain why it is taught far more often than it is actually used.  In the first part of this presentation I will briefly review the most important of these problems.

One reason for ACH’s “popularity” has been the lack of any serious alternative.  That situation is changing with the emergence of hypothesis mapping (HM).  HM is an extension of argument mapping to handle abductive reasoning, i.e. reasoning involving selection of the best explanatory hypothesis with regard to a range of available or potential evidence.  Hypothesis mapping is a software-supported method for laying out complex hypothesis sets and marshalling the evidence and arguments in relation to these hypotheses.

The second part of the presentation will provide a quick introduction to hypothesis mapping using an intelligence-type example, and review some of its relative strengths and weaknesses.


Prediction markets can be a remarkably effective way to divine the wisdom of crowds.

Prediction markets of course only work for predictions – or more generally for what I call “verifiable” questions.   A verifiable question is one for which it is possible, at some point, to determine the answer definitively.  For example, predicting the winner of the Oscar for best picture.   This is what allows the prediction market to determine how much each player wins or loses.
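Verifiability is what makes the market's bookkeeping possible: once the question resolves, every contract can be settled objectively.  Here is a minimal sketch of that settlement step for a binary market; the function name, data layout, and prices are illustrative assumptions, not the workings of any real market.

```python
# Hypothetical sketch: settling a binary prediction-market contract.
# A winning share pays $1; a losing share pays nothing.

def settle(positions, outcome):
    """Compute each trader's net profit once the question resolves.

    positions: list of (trader, side, shares, price_paid_per_share)
    outcome:   the side that turned out to be true, e.g. "YES"
    """
    results = {}
    for trader, side, shares, price in positions:
        payout = shares * 1.0 if side == outcome else 0.0
        results[trader] = results.get(trader, 0.0) + payout - shares * price
    return results

# Example: "Will film X win Best Picture?" resolves YES.
trades = [
    ("alice", "YES", 10, 0.60),  # paid $6.00, collects $10.00
    ("bob",   "NO",  10, 0.40),  # paid $4.00, collects nothing
]
print(settle(trades, "YES"))  # {'alice': 4.0, 'bob': -4.0}
```

The point is simply that the final line is only computable because the outcome became objectively known; for a non-verifiable question there is nothing to pass as `outcome`.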

The problem is that many issues we want addressed are not verifiable in this sense.

For example, decisions.  Would it be better to continue negotiating with Iran over its nuclear ambitions, or should a military strike be launched?  We can speculate and debate about this, but we'll never know the answer for sure: only one path will be taken, so we can never know what would have happened had we taken the other.

Wouldn’t it be good if we had something like a prediction market, but which works for non-verifiable issues?

Amazon.com book ratings are an interesting case.   Whether a book is or is not a good one is certainly a non-verifiable issue.   Yet Amazon has created a mechanism for combining the views of many people into a single collective verdict, e.g. 4.5 stars.   At one level the system is just counting votes; Amazon users vote by choosing a numerical star level, and Amazon averages these.   But note that Amazon’s product pages also allow users to make comments, and reply to comments; and these comment streams can involve quite a lot of debate.   It is plausible that, at least sometimes, a user’s vote is influenced by these comments.   So the overall rating is at least somewhat influenced by collective deliberation over the merits of the book.

Amazon’s mechanism is an instance of a more general class, for which I’ve coined the term “deliberative aggregator”.   A deliberative aggregator has three key features:

  1. It is some kind of virtual forum, thereby allowing large-scale, remote and asynchronous participation.
  2. It supports deliberation, and its outputs in some way depend on or at least are influenced by that deliberation.  (That’s what makes it “deliberative.”)
  3. It aggregates data of some kind (e.g. ratings) to produce a collective viewpoint or judgement.
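The three features above can be summarised as a minimal interface.  This is a hypothetical sketch of the concept, not an implementation of Amazon's or any other real system; all names are invented, and the aggregation shown is the simplest possible one (a plain average of ratings, Amazon-style).

```python
# Hypothetical sketch of the three features of a deliberative aggregator.
class DeliberativeAggregator:
    """A virtual forum (feature 1) whose collective output can be
    influenced by deliberation (feature 2) and is produced by
    aggregating individual data (feature 3)."""

    def __init__(self):
        self.comments = []   # the deliberation
        self.ratings = []    # the raw data to aggregate

    def comment(self, user, text):
        # Feature 2: remote, asynchronous deliberation among participants.
        self.comments.append((user, text))

    def rate(self, user, score):
        # A rating may have been influenced by reading the comments,
        # which is how deliberation feeds into the collective output.
        self.ratings.append((user, score))

    def collective_view(self):
        # Feature 3: simplest possible aggregation, the mean rating.
        return sum(s for _, s in self.ratings) / len(self.ratings)

forum = DeliberativeAggregator()
forum.comment("alice", "The argument in chapter 3 is weak.")
forum.rate("alice", 4)
forum.rate("bob", 5)
print(forum.collective_view())  # 4.5
```

Richer designs differ mainly in the last method: anything from a plain average to the credibility-weighted schemes discussed below counts, so long as the output can reflect the deliberation.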

YourView is another example of a deliberative aggregator.   YourView's aggregation mechanism (currently) is to compute the “weighted vote,” i.e. the votes of users weighted by their credibility, where a user's credibility is a score, built up over time, indicating the extent to which their participation on YourView has exhibited “epistemic virtues,” i.e. the general traits of good thinkers.
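The difference weighting makes can be shown in a few lines.  This is an illustrative sketch of credibility-weighted voting in general, not YourView's actual code; the credibility values and vote encoding (+1 for yes, -1 for no) are assumptions for the example.

```python
# Illustrative sketch: aggregating yes/no votes with and without
# credibility weighting. Each vote is (credibility, vote), vote in {+1, -1}.

def simple_majority(votes):
    """Unweighted tally: positive means the raw majority says yes."""
    return sum(v for _, v in votes)

def weighted_vote(votes):
    """Weight each vote by the voter's credibility, normalised to [-1, 1]."""
    total_weight = sum(c for c, _ in votes)
    return sum(c * v for c, v in votes) / total_weight

# Three low-credibility "yes" voters vs two high-credibility "no" voters.
votes = [(1.0, +1), (1.0, +1), (1.0, +1), (4.0, -1), (4.0, -1)]
print(simple_majority(votes))  # 1, raw majority says yes
print(weighted_vote(votes))    # -5/11, about -0.45, weighted view says no
```

The example shows the design intent: a headcount majority can be overridden when the more credible participants lean the other way.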

Many other kinds of deliberative aggregators would be possible.   An interesting theoretical question is: what is the best design for a deliberative aggregator?  And more generally: what is the best way to discern collective wisdom for non-verifiable questions?
