Think of a collection of people as having a kind of collective mind.  How can you find out what that collective mind believes?

That may sound like a fanciful philosophical question, but it has very real, even urgent, applications.  For example, the IPCC is a collection of hundreds of scientists, and it puts out reports supposedly embodying their consensus position, i.e., what they believe not as individuals but as a body of scientists.

There are of course already various methods for determining what people believe collectively; the IPCC has its own approach.  Such methods have various strengths and weaknesses.  The IPCC approach, for example, has been criticised as riddled with political conflict.

A little while back, at Austhink, we came up with an alternative approach, which worked successfully in its first application.  We have since used it a number of times with various organisations, calling it the “Wisdom of Groups” method.

Here is a write-up of the first deployment.

___________________________

A few years back, the National Centre for Education and Training on Addiction and the South Australian Department of Health and Human Services put together a three-day “Summer School” on the topic of addictions, inequalities and their interrelationships, with a view to providing guidance for policy makers.  According to the organisers, around 20% of the South Australian budget goes to dealing with problems of addiction, so this is a major issue.  They hoped to come up with a kind of Position Statement which would summarise the consensus, if any, that the group of 50 or so participants reached during the Summer School.

They contacted Austhink hoping that we’d be able to help them with one aspect of it, namely making the debate/discussion/rational deliberation more productive. Initially, the idea was that live argument mapping facilitation would be used with the whole group to help them work through some issues. But it became clear that the organisers were open to ideas about how the Position Statement would be developed, and our involvement grew into (a) designing a process for developing a Position Statement representing the group consensus, and (b) helping facilitate the overall Summer School to produce that Statement.

So we suddenly found ourselves faced with a very interesting challenge, which boiled down to:

  1. How do you figure out what, if anything, 50 participants with diverse backgrounds, interests, professional specialisations, ideologies and so on agree on? That is, how do you actually come up with a draft Position Statement?
  2. How do you rationally deliberate over that draft?
  3. How do you measure the degree to which the participants do in fact agree with any given aspect of that Statement, i.e., the extent to which the resulting draft Position Statement does in fact represent the consensus of the group?

This challenge has a lot in common with problems of democratic participation, of the sort that Deliberative Polling is intended to deal with.

Our approach, in a nutshell, was this:

Phase 1: Developing a Draft Statement

The first two days were occupied mostly with presentations by a range of experts (this had already been set up by the organisers; we had to work around it). We divided the Position Statement into three categories:

  • Definitions and Values;
  • Empirical Realities; and
  • Directions.

At the end of the first day, participants filled out a worksheet asking them to nominate five distinct propositions in the Definitions and Values category: propositions which they regarded as true and worth including in any Position Statement. On the second day, they filled out similar worksheets for Empirical Realities and Directions. Then, for each category, Paul Monk and I spent a few hours synthesizing the proposed propositions into a set of around ten candidate statements for inclusion in the Position Statement. This involved sorting the propositions into groups and then extracting the core proposition from each group (a toy sketch of this grouping step follows the notes below). The result was a set of 32 statements in all. Note, however:

  • This process was democratic, in that it treated everyone’s contribution pretty much equally.
  • Nevertheless, none of these statements had been put forward by a majority of participants. It simply wasn’t clear to what extent the statements represented the consensus view of the whole group.
  • Somewhat parenthetically, it is worth noting that, in most cases, it was apparent to Paul and me that most participants had only a very partial and idiosyncratic understanding of the material they had been presented with.  The synthesized statement sets, however, were (in our opinion) very good “takes” on the material.  In other words, unsurprisingly, 50 brains really are a lot better than one (in most cases).  The trouble is synthesizing the thinking of 50 brains.
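
The sorting and extraction were done entirely by hand, but for a flavour of the mechanics, here is a toy sketch of one way the grouping step could be mechanised. The sample propositions, the use of simple string similarity, and the 0.6 threshold are all my own illustrative assumptions; real synthesis of this kind depends on human judgement about meaning, not surface wording.

```python
from difflib import SequenceMatcher

# Hypothetical nominated propositions, for illustration only.
propositions = [
    "Addiction is strongly associated with social inequality.",
    "Social inequality and addiction are closely linked.",
    "Treatment services should be accessible to disadvantaged groups.",
]

def similar(a, b, threshold=0.6):
    """Rough string similarity; a crude stand-in for human judgement."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Greedily sort propositions into groups of near-duplicates.
groups = []
for p in propositions:
    for g in groups:
        if similar(p, g[0]):
            g.append(p)
            break
    else:
        groups.append([p])

# Take the proposition most similar to the rest of its group as its "core".
for g in groups:
    core = max(g, key=lambda p: sum(
        SequenceMatcher(None, p, q).ratio() for q in g))
    print(f"{len(g)} proposition(s) -> {core}")
```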

Phase 2: Deliberation

Half of day three was devoted to deliberating over selected statements, using real-time argument mapping with the Reason!Able software. A whole-group session introduced the participants to the approach and made some progress on a particular issue. In a second session, the participants split into two groups with separate facilitators; each group chose its own issues to debate.

Phase 3: Consensus

In the final phase, all participants used an online questionnaire to register their attitudes towards each of the 32 propositions. For each statement, each participant was asked to choose Agree/Disagree/Abstain (is the statement true?) and Include/Exclude/Abstain (does it belong in the Position Statement?), and could offer comments. The online system automatically and immediately collated the results, producing graphical (bar chart) displays of the level of consensus.
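
For concreteness, here is a minimal sketch of the kind of collation the system performed. The data layout, the vote labels, and the consensus measure (the share of non-abstaining votes that were positive) are illustrative assumptions on my part, not a description of the actual system.

```python
from collections import defaultdict

# Each response: (statement_id, truth_vote, inclusion_vote), where votes
# are "agree"/"disagree"/"abstain" and "include"/"exclude"/"abstain".
responses = [
    (1, "agree", "include"),
    (1, "agree", "include"),
    (1, "disagree", "abstain"),
    (2, "agree", "exclude"),
    (2, "abstain", "include"),
]

def consensus(votes, positive):
    """Share of non-abstaining votes that were positive."""
    cast = [v for v in votes if v != "abstain"]
    return len([v for v in cast if v == positive]) / len(cast) if cast else 0.0

by_statement = defaultdict(lambda: ([], []))
for sid, truth, incl in responses:
    by_statement[sid][0].append(truth)
    by_statement[sid][1].append(incl)

for sid, (truth_votes, incl_votes) in sorted(by_statement.items()):
    t = consensus(truth_votes, "agree")
    i = consensus(incl_votes, "include")
    # Crude text "bar chart" of the level of consensus per statement.
    print(f"Statement {sid}: agree {t:4.0%} {'#' * int(t * 20):<20} "
          f"include {i:4.0%} {'#' * int(i * 20)}")
```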

In the final session, the results were reviewed. We found that there was a surprisingly high level of consensus on almost all propositions; in other words, the draft Position Statement did in fact represent a consensus opinion of the group. Note also that the Position Statement is accompanied by a precise specification of the extent to which the group thought that each component statement was (a) true, and (b) worth including.

The level of consensus regarding the Position Statement developed through the process is particularly noteworthy in light of the fear, expressed to us prior to the Summer School, that there would be such disagreement between the major groupings of participants (roughly, the “addictions” people and the “inequalities” people) that there would be literally nothing, or nothing worth saying, that they could agree on.

We think that this process, technologically augmented in two ways (argument mapping software plus projection, and an online questionnaire), could well be deployed again in a wide range of contexts in which groups get together and need to figure out what they think on a complex issue, and in particular need to figure out what, if anything, they can agree on to form a basis for policy.