
Archive for the ‘Wisdom of Crowds’ Category

Australia is patently unsustainable in many ways, and so will have to change.  Will this change be wisely and pro-actively managed?  Or will it be forced on us in unwelcome, disruptive and possibly catastrophic ways?

Wise management will require governments at all levels to make lots of difficult decisions, and to make them expeditiously.

In this decision making, public opinion is a critical constraint.

For example, there is a good case for road use pricing to manage our unsustainable dependence on use of private vehicles.  Yet this option is instantly dismissed by both major political parties, fearing a public backlash – no matter how ill-informed, short-sighted or self-serving that public reaction may be.  Meanwhile our cities become increasingly gridlocked, with escalating economic, health and environmental costs.

Simply put, unless we can improve the relationship between government decision making and public opinion, we’re going to “hit the wall” in numerous respects.

Of course, the importance of public opinion has hardly been lost on sustainability advocates.  There has already been, and continues to be, lots of good work in this area – particularly as regards climate change.  Considerable insight has been gained on topics such as how opinions are formed, how they are related to behavior, and how they can be influenced.

As part of this effort, we must also develop better ways to find out what the public opinion is, i.e. what the public actually thinks.

But what’s the problem?  Don’t we already know pretty much what the public thinks, from the endless stream of opinion polls? And isn’t the problem in fact that there is too much monitoring of public opinion, and that governments are too sensitive to it?

It’s true that public opinion, in the standard sense – what might be called the public attitude – is in oversupply.

What we almost never know is the considered opinion of the public – the public wisdom.

Public attitude versus public wisdom

Public opinion, as we usually understand it, is the kind of information generated by the familiar polls run by organisations such as Morgan and Gallup and delivered as fodder to the mainstream media.

The public wisdom, by contrast, is the collective, considered opinion of the public.  It is what the public as a whole would think if it were able to think seriously about the matter, i.e. become well-informed, reflect carefully, and somehow pool their thoughts into a coherent position.  Thinking seriously in this way requires collective deliberation, i.e. constructive discussion and debate.

Public opinion falls a long way short of public wisdom.  In his book When The People Speak, notable theorist of democracy James Fishkin has pointed to a number of problems with public opinion:

  • Respondents are generally ill-informed; indeed they will usually be rationally ignorant on the topic (the cost to an individual of becoming well-informed outweighs any benefit their informed opinion would bring them).
  • Individuals’ attitudes are subject to manipulation by powerful forces pursuing their own agendas, e.g. major corporates resisting progressive tax reforms.
  • The opinions elicited in standard polls may be artificially manufactured by the polling process itself, i.e. they may not reflect any real attitude held by the respondents, but rather be generated on the spot in response to, and shaped by, the polling process.

To which I would add: the respondents will generally not have engaged in any serious deliberation (on their own, or with others) on the issue, and the polling process provides no opportunity for such deliberation.

In short, standard opinion polls give us a distorted snapshot of the attitudes the respondents happen to have at that moment – not a fair reflection of what they (would) think about the issue.

To compound matters, standard polling processes do nothing more than tabulate individual opinions.  They don’t synthesize or aggregate the viewpoints of the respondents into a common or collective position, as would be required for genuine “wisdom of the crowd.”

For an example of genuine collective wisdom, consider the reports of the Intergovernmental Panel on Climate Change (IPCC).  These are generated by means of an elaborate process, involving much high-quality deliberation, in which exceptionally well-informed scientists pool and refine their knowledge, coming up with an agreed expression of what their community as a whole believes.


This post is the first part of a draft chapter What Do We Think? Identifying the Public Wisdom to Guide Sustainability Decisions, in preparation for the volume 20/20 Vision for a Sustainable Society, being put together by the Melbourne Sustainable Society Institute.

Coming up:

Read Full Post »

Much of what Austhink does these days is concerned with “collective wisdom” – the knowledge that a group as a whole has.   As Surowiecki famously pointed out, when the conditions are right, the wisdom of the group can be superior to that of the individuals making it up.

However finding out what that collective wisdom is – identifying, assembling or articulating it – is often an interesting challenge.  Various methods or approaches have been developed, suited to various sorts of groups and types of knowledge.

Not surprisingly, the quality or “grade” of the collective wisdom generated by these various methods can differ markedly.

Here is one way to distinguish some main grades:

  1. Grade 1 (the lowest grade) results simply from statistical enumeration of individual opinions, as happens in, for example, a standard opinion poll, plebiscite or election.   The “wisdom of the crowd” identified by an opinion poll is just the majority opinion.
  2. Grade 2 still results from statistical enumeration of individual opinions, but those opinions have been improved by some appropriate collective process, i.e. they have benefited from some relevant kind of interaction in the group.  In deliberative polling, for example, the collective wisdom is the result of a poll taken after a collective deliberative process in which individuals are presumed to benefit from deliberating with each other.   The group opinion, as reflected in the result of the poll, is better than Grade 1 just insofar as the individual opinions are better as a result of the process.  (Note that there is a whole cluster of interesting issues to do with whether, and under what conditions, group deliberative processes do lead to improved opinions.)
  3. In Grade 3, the collective wisdom is not just enumerated individual opinion, but results from some kind of synthesis of individual opinions.  One of the simplest forms this can take is averaging, as in the well-known Galton “guess the weight of the ox” scenario described by Surowiecki (see the sketch after this list, which contrasts Grades 1 and 3).  A more interesting type of collective wisdom at this level is the price in some kind of market, including a prediction market.
  4. In Grade 4, the collective wisdom is a synthesis of individual opinions, plus the collective opinion is endorsed by all or at least most of the individuals.  The outcome of a prediction market, for example, doesn’t make Grade 4, because the current price is deemed by most participants to be “wrong”: some regard it as too high (hence they’re not buying) and others see it as too low (hence they’re not selling).  However, IPCC reports do appear to be Grade 4 on this scheme.  They result from an elaborate collaborative process of drafting (synthesizing the information and views of the participating scientists – views which might themselves be improved by the process) and the result is generally endorsed by those scientists.
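
To make the contrast concrete, here is a minimal sketch in Python of Grade 1 enumeration versus Grade 3 averaging in the Galton scenario.  The poll responses and weight guesses are invented for illustration:

    from collections import Counter
    from statistics import mean, median

    # Grade 1: statistical enumeration -- the "collective wisdom" is just
    # the most common individual opinion.
    poll = ["yes", "no", "yes", "abstain", "yes", "no"]
    majority, _ = Counter(poll).most_common(1)[0]
    print(f"Grade 1 (majority opinion): {majority}")

    # Grade 3: synthesis by averaging, as in Galton's ox-weighing crowd.
    # These guesses are invented; Galton himself took the middlemost
    # estimate (the median) and found it strikingly close to the truth.
    guesses = [1080, 1120, 1250, 1175, 1230, 1145, 1205, 1190]
    print(f"Grade 3 (mean of guesses):   {mean(guesses):.0f} lbs")
    print(f"Grade 3 (median of guesses): {median(guesses):.0f} lbs")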

I doubt that the classification scheme described above is the best/ideal scheme.  How could it be improved?

Indeed, maybe through a collaborative process we could come up with a scheme which itself constitutes “AAA-grade” collective wisdom (in its own terms).  Wouldn’t that be neat?

Read Full Post »

Two recent publications have important implications for how Boards make decisions.  One is an academic treatise on how information is shared in teams.  A Board is a kind of team, working together to (among other things) make major decisions.  The practice of having a team make the big decisions is based on the idea that teams will, generally, make better decisions than individuals.  This is founded in turn on various assumptions:

  • Good decision making depends in part on taking into proper account relevant information;
  • Teams collectively possess more relevant information than individuals; and
  • Teams share and make use of that information in their deliberations.

The authors of Information Sharing and Team Performance: A Meta-Analysis focused on the third of these assumptions.  They did a comprehensive review of existing studies on how teams share information, making a number of interesting findings.  If we extrapolate those findings to Boards, we can infer:

  • Sharing of information in Board meetings will indeed improve Board decisions.
  • However, Boards will generally not share information as effectively as they could.
  • In particular, Boards will tend to spend their time talking about what everybody already knows, rather than sharing important information that only a few people know.
  • In fact, the more there is a need for information sharing, the less information sharing will actually happen.
  • The more time the Board spends talking, the more they just rehearse what they already know.
  • Boards will share better if they think they are solving some kind of factual issue, as opposed to making a judgement requiring consensus.
  • Boards share information better if they use a structured discussion process, rather than just indulging in the usual kind of spontaneous conversation.

In short, there should be scope for Boards to improve their decisions by changing the way they conduct their discussions so as to promote better sharing of critical information.

As it happens, a recent piece from McKinsey makes much the same point.  In “Using the crisis to create better boards” in the October 2009 issue of McKinsey Quarterly, the authors zero in on information sharing using structured techniques:

“Chairmen can expose their boards to new sources of information – such as new performance benchmarks, new customer demands, or new financial perspectives – in many ways.  One involves tapping into the rich experience of nonexecutive and executive directors who also hold external appointments.  Each board member can be asked to share one fresh idea as part of a discussion about the company’s future…”

The idea of going around the table asking everyone to contribute an idea is hardly profound or original, and it is curious that leading management consultants, in the journal of one of the top-shelf consulting firms, would be encouraging the Boards of top organizations to use such a simple technique.  That it is being recommended in earnest indicates that the poor information sharing discussed in abstract terms in the academic meta-analysis is in fact a very real problem at the highest levels.

Later in their piece, the authors get a little more specific about some of the information that needs to be shared and how to do it:

“Chairmen ought to help their boards…by requesting that all significant proposals come with a ‘red team’ report presenting contrary arguments…the chairman would merely request that the board hear arguments for and against any important proposal.  The CEO would therefore have to think deeply before submitting the proposal, undecided board members could insist on a fuller discussion, and a rival paradigm might see the light of day.”

This suggestion is very much in line with our proposal that organisations improve Board deliberations, and hence decision making, by adopting decision mapping.  Decision maps, by their nature, include “the arguments for and against any important proposal,” though they include such arguments in a wider framework encompassing the overall structure of the decision.

The McKinsey authors seem to be suggesting – and we would agree – that Boards don’t need more information thrown at them, in the form of door-stopping Board reports or dense PowerPoints.  Rather, they should look to benefit by more effectively sharing with each other the critical information and insights which they may already have, and understanding what difference that information makes to the issue.

Read Full Post »

Think of a collection of people as having a kind of collective mind.  How can you find out what that collective mind believes?

That may sound like a fanciful philosophical question, but it has very real, even urgent applications.  For example, the IPCC is a collection of hundreds of scientists, and it puts out reports supposedly embodying their consensus position – i.e. what they believe not as individuals but as a body of scientists.

There are of course already various methods for determining what people believe collectively; the IPCC has its own approach.  Such methods have various strengths and weaknesses.  The IPCC approach, for example, is claimed to be riddled with political conflict.

A little while back, at Austhink, we came up with an alternative approach, which worked successfully in its first application.  We have used it a number of times since with various organisations, calling it the “Wisdom of Groups” method.

Here is a write-up of the first deployment.

___________________________

A few years back, the National Centre for Education and Training on Addiction and the South Australian Department of Health and Human Services put together a three-day “Summer School” on the topic of addictions, inequalities and their interrelationships, with a view to providing guidance for policy makers.  They said that 20% of the South Australian budget is used to deal with problems of addiction, so this is a major issue.  They hoped to come up with a kind of Position Statement, which would summarise the consensus, if any, that the group of 50 or so participants reached during the Summer School.

They contacted Austhink hoping that we’d be able to help them with one aspect of it, namely making any debate/discussion/rational deliberation more productive.  So initially the idea was that live argument mapping facilitation would be used with the whole group to help them work through some issues.  But it became clear that they were open to ideas about how the Position Statement would be developed, and our involvement expanded to (a) developing a process for producing a Position Statement representing the group consensus, and (b) helping facilitate the overall Summer School to produce that Statement.

So we suddenly found ourselves faced with a very interesting challenge, which boiled down to:

  1. How do you figure out what, if anything, 50 participants with diverse backgrounds, interests, professional specializations, ideologies etc. agree on – i.e. how do you actually come up with a draft Position Statement?
  2. How do you rationally deliberate over that draft?
  3. How do you measure the degree to which the participants do in fact agree with any given aspect of that Statement – i.e. the extent to which the resulting draft Position Statement does in fact represent the consensus of the group?

This challenge has a lot in common with problems of democratic participation, of the sort that Deliberative Polling is intended to deal with.

Our approach, in a nutshell, was this:

Phase 1: Developing a Draft Statement

The first two days were occupied mostly with presentations by a range of experts (this had already been set up by the organizers; we had to work around that).  We divided the Position Statement into three categories:

  • Definitions and Values;
  • Empirical Realities; and
  • Directions.

At the end of the first day, participants filled out a worksheet asking them to nominate 5 distinct propositions in the Definitions and Values category – propositions which they regarded as true and worth including in any Position Statement.  On the second day, they filled out similar worksheets for Empirical Realities and Directions.  Then, for each category, Paul Monk and I spent a few hours synthesizing the proposed propositions into a set of around 10 candidate statements for inclusion in the Position Statement.  This involved sorting them into groups and then extracting the core proposition from each group.  The result was a set of about 32 statements.  Note, however:

  • This process was democratic, in that it treated everyone’s contribution pretty much equally.
  • Nevertheless, none of these statements was put forward by a majority of people.  It simply wasn’t clear to what extent they represented the consensus view of the whole group.
  • Third, and somewhat parenthetically, it is worth noting that in most cases it was apparent to Paul and me that most participants had only a very partial and idiosyncratic understanding of the material they had been presented with.  The synthesized statement sets, however, were (in our opinion) very good “takes” on the material.  In other words, unsurprisingly, 50 brains really are a lot better than one (in most cases).  The trouble is synthesizing the thinking of 50 brains.

Phase 2: Deliberation

Half of day three was devoted to deliberating over selected statements, using real-time argument mapping with the Reason!Able software.  A whole-group session introduced the participants to the approach, and made some progress on a particular issue.  In another session there were two groups with separate facilitators; each chose its own issues to debate.
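
For readers unfamiliar with argument mapping, the underlying structure is simple: a tree whose root is the contention at issue, with supporting reasons and objections as children, each of which can in turn have its own reasons and objections.  Here is a minimal sketch in Python – the data structure and the sample claims are invented for illustration, and are not how the software itself represents maps:

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        reasons: list = field(default_factory=list)     # children supporting the claim
        objections: list = field(default_factory=list)  # children opposing the claim

    def show(claim, indent=0):
        """Print the map as an indented outline."""
        print(" " * indent + claim.text)
        for r in claim.reasons:
            print(" " * (indent + 2) + "[support]")
            show(r, indent + 4)
        for o in claim.objections:
            print(" " * (indent + 2) + "[objection]")
            show(o, indent + 4)

    # Invented contention and arguments, purely for illustration.
    contention = Claim(
        "Addiction and inequality programs should be planned jointly",
        reasons=[Claim("Addiction and inequality are causally intertwined")],
        objections=[Claim("Joint planning blurs accountability",
                          objections=[Claim("Shared outcome measures preserve accountability")])],
    )
    show(contention)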

Phase 3: Consensus

In the final phase all participants used an online questionnaire to register their attitudes towards each of the 32 propositions.  Each participant was asked, for each statement, to choose Agree/Disagree/Abstain and Include/Exclude/Abstain, and was able to offer comments.  The online system automatically and immediately collated the results, producing graphical (bar chart) displays of the level of consensus.
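
The collation step is easy to automate.  Here is a minimal sketch in Python of the tally-and-display logic – the response data and the convention of ignoring abstentions are invented for illustration, not taken from the actual system:

    from collections import Counter

    # Each response maps a proposition ID to an (agreement, inclusion) pair.
    # The data below is invented, purely to illustrate the collation step.
    responses = [
        {"P1": ("Agree", "Include"), "P2": ("Disagree", "Exclude")},
        {"P1": ("Agree", "Include"), "P2": ("Abstain", "Include")},
        {"P1": ("Agree", "Include"), "P2": ("Agree", "Include")},
    ]

    def collate(responses):
        """Tally agreement and inclusion votes per proposition."""
        agreement, inclusion = {}, {}
        for response in responses:
            for prop, (agree_vote, include_vote) in response.items():
                agreement.setdefault(prop, Counter())[agree_vote] += 1
                inclusion.setdefault(prop, Counter())[include_vote] += 1
        return agreement, inclusion

    agreement, _ = collate(responses)
    for prop in sorted(agreement):
        votes = agreement[prop]
        cast = sum(votes.values()) - votes["Abstain"]  # abstentions don't count
        pct = 100 * votes["Agree"] / cast if cast else 0.0
        print(f"{prop}: {pct:3.0f}% agree  " + "#" * round(pct / 10))  # crude bar chart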

In the final session, the results were reviewed.  We found that there was a surprisingly high level of consensus on almost all propositions; that, in other words, the draft Position Statement did in fact represent a consensus opinion of the group.  Note also that the Position Statement is accompanied by a precise specification of the extent to which the group in fact thought that each component statement was (a) true, and (b) worth including.

The level of consensus on the Position Statement developed through this process is particularly noteworthy in light of the fear, expressed to us prior to the Summer School, that there would be such disagreement between the major groupings of participants (roughly, the “addictions” people and the “inequalities” people) that there would be literally nothing – or nothing worth saying – that they could agree on.

We think that this technologically augmented process – the augmentation being the argument mapping software with projection, and the online questionnaire – could well be deployed again in a wide range of contexts in which groups get together and need to figure out what they think on a complex issue, and in particular need to figure out what, if anything, they can agree on to form a basis for policy.

Read Full Post »
