
Archive for the ‘Belief’ Category

About a month ago The Age published an opinion piece I wrote under the title “Do you hold a Bayesian or Boolean worldview?”.  I had submitted it under the title “Madmen in Authority,” and it opened by discussing two men in authority who are/were each mad in their own way – Maurice Newman, influential Australian businessman and climate denier, and Cuban dictator Fidel Castro.  Both men had professed to be totally certain about issues on which any reasonable person ought to have had serious doubts given the very substantial counter-evidence.

Their dogmatic attitudes seemed to exemplify a kind of crude epistemological viewpoint I call “Booleanism,” in contrast with a more sophisticated “Bayesianism”. Here is the philosophical core of the short opinion piece:

On economic matters, Keynes said: “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Similarly, on matters of truth and evidence, we are usually unwittingly beholden to our background epistemology (theory of knowledge), partially shaped by unknown theorists from centuries past.

One such  theory of knowledge we can call Boolean, after the 19th century English logician George Boole.  He was responsible for what is now known as Boolean algebra, the binary logic which underpins the computing revolution.

In the Boolean worldview, the world is organised into basic situations such as Sydney being north of Melbourne. Such situations are facts. Truth is correspondence to facts. That is, if a belief matches a fact, it is objectively true; if not, it is objectively false. If you and I disagree, one of us must be right, the other wrong; and if I know I’m right, then I know you’re wrong. Totally wrong.

This worldview underpins Castro’s extreme confidence.  Either JFK was killed by an anti-Castro/CIA conspiracy or he wasn’t; and if he was, then Castro is 100 per cent right. Who needs doubt?

An alternative  theory of knowledge has roots in the work of another important English figure, the Reverend Thomas Bayes. He is famous for Bayes’ Theorem, a basic law of probability governing how to modify one’s beliefs when new evidence arrives.

In the Bayesian worldview, beliefs are not simply true or false, but more or less probable. That is, we can be more or less confident that they are true, given how they relate to our other beliefs and how confident we are in them. If you and I disagree about the cause of climate change, it is not a matter of me being wholly right and you being wholly wrong, but about the differing levels of confidence we have in a range of hypotheses.

Scientists are generally Bayesians, if not self-consciously, at least in their pronouncements. For example, the IPCC refrains from claiming certainty that climate change is human-caused; it says instead that it has 95 per cent confidence that human activities are a major cause.
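Bayes’ Theorem itself is easy to make concrete. Here is a minimal numerical sketch of a single Bayesian update, showing how confidence in a hypothesis shifts when evidence arrives (the function name and all the numbers are invented for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' Theorem."""
    # Total probability of seeing the evidence, whether or not H is true.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start undecided (prior 0.5); suppose the evidence is nine times
# more likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(round(posterior, 2))  # 0.9
```

The result is high confidence (0.9) that still falls short of certainty, which is exactly the Bayesian contrast with the Boolean all-or-nothing picture.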


Tom Toles, US cartoonist, writes in a post called Own Facts:

The main thing is they [Republicans] are in absolute, abject and catastrophic denial about a straightforward set of facts that is probably the most important set of facts we face as a nation, and as human beings on planet earth. They have turned their faces away from climate change in a way that is simply and utterly unforgivable. They now apparently DO feel entitled to their own facts, and they live, campaign and purportedly do their jobs in a zone of outright lies. Lies they have every reason to understand are lies, and lies that will almost certainly result in massive destruction and death. Exactly how would you be “fair” to these people?

In Australia, unfortunately, we have our share of Republicans. And we’re much too fair to them.


A new national poll finds:

  • “A clear majority of Australian electors oppose the Gillard Government’s plan to introduce a carbon tax, 37% support the proposed carbon tax and 10% can’t say.”
  • “A majority (64%) believes that Australia’s proposed carbon tax will make no difference to the world’s climate.”
Political scientist James Fishkin, in his landmark book When The People Speak, writes: “Consider some of the limitations of mass opinion as we routinely find it in modern developed societies,” and then lists four problems with polls of this sort:
  1. Citizens are ill-informed; indeed they are “rationally ignorant” because, being just one of millions, any individual’s opinion is likely to have so little effect that it makes no sense to put in the effort of becoming well-informed.
  2. “Opinions” reported in polls are frequently not genuine opinions at all; when people are forced to answer a question on a topic they know little or nothing about, they “choose an option, virtually at random.”
  3. When people do try to form an opinion on a topic, they tend to talk mostly with people just like themselves, thereby, frequently, just reinforcing their ill-informed and prejudiced views.
  4. Mass public opinion is vulnerable to manipulation.

So when you get stupid answers like the ones delivered in the poll, it’s because you’ve asked stupid questions. Or rather, you’ve asked questions stupidly. There’s nothing intrinsically stupid about a question like “Do you support the Gillard government’s plan to introduce a carbon tax?”  Rather, what’s stupid is the asking: the whole practice of opinion polling as a mechanism for identifying the public’s viewpoint on important matters.

There must be better ways.

Fishkin has devoted much of his career to developing and promoting an alternative: deliberative polling.

We’re working on another.

Update, 5 July:

For an illuminating discussion see Australians and climate change – beliefs about public belief may be quite wrong and Polls, framings and public understandings: climate change and opinion polls by Joseph Reser.


Think of a collection of people as having a kind of collective mind.  How can you find out what that collective mind believes?

That may sound like a fanciful philosophical question, but it has very real, even urgent applications.  For example the IPCC is a collection of hundreds of scientists, and they put out reports supposedly embodying their consensus position – i.e. what they believe not as individuals but as a body of scientists.

There are of course already various methods for determining what people believe collectively; the IPCC have their own approach.  Such methods have various strengths and weaknesses.  For example, the IPCC approach has been criticized as being riddled with political conflict.

A little while back, at Austhink, we came up with an alternative approach, which worked successfully in its first application.  We have used it a number of times since with various organisations, calling it the “Wisdom of Groups” method.

Here is a write-up of the first deployment.

___________________________

A few years back, the National Centre for Education and Training on Addiction and the South Australian Department of Health and Human Services put together a three-day “Summer School” on the topic of addictions, inequalities and their interrelationships, with a view to providing guidance for policy makers.  They said that 20% of the South Australian budget is used to deal with problems of addiction, so this is a major issue.  They hoped to come up with a kind of Position Statement, which would summarise the consensus, if any, that the group of 50 or so participants reached during the Summer School.

They contacted Austhink hoping that we’d be able to help them with one aspect of it, namely making any debate/discussion/rational deliberation more productive. So initially the idea was that live argument mapping facilitation would be used with the whole group to help them work through some issues. But it became clear that they were open to ideas about how the Position Statement would be developed, and our involvement was increased to one of (a) developing a process for developing a Position Statement representing the group consensus, and (b) helping facilitate the overall Summer School to produce that Statement.

So we suddenly found ourselves faced with a very interesting challenge, which boiled down to:

  1. how do you figure out what, if anything, 50 participants with diverse backgrounds, interests, professional specializations, ideologies etc. agree on? That is, how do you actually come up with a draft Position Statement?
  2. how do you rationally deliberate over that draft?
  3. how do you measure the degree to which the participants do in fact agree with any given aspect of that Statement – i.e., the extent to which the resulting draft Position Statement does in fact represent the consensus of the group?

This challenge has a lot in common with problems of democratic participation, of the sort that Deliberative Polling is intended to deal with.

Our approach, in a nutshell, was this:

Phase 1: Developing a Draft Statement

The first two days were occupied mostly with presentations by a range of experts (this had already been set up by the organizers; we had to work around that). We divided the Position Statement into three categories:

  • Definitions and Values
  • Empirical Realities; and
  • Directions.

At the end of the first day, participants filled out a worksheet asking them to nominate 5 distinct propositions in the Definitions and Values category, propositions which they regarded as true and worth including in any Position Statement. On the second day, they filled out similar worksheets for Empirical Realities and Directions. Then, for each category, Paul Monk and I spent a few hours synthesizing the proposed propositions into a set of around 10 candidate statements for inclusion in the Position Statement. This involved sorting them into groups and then extracting the core proposition from each group. Across the three categories, this resulted in a set of about 32 statements. Note, however:

  • This process was democratic, in that it treated everyone’s contribution pretty much equally.
  • Nevertheless, none of these statements was put forward by a majority of people. It simply wasn’t clear to what extent these statements represented the consensus view of the whole group.
  • Somewhat parenthetically, it is worth noting that, in most cases, it was apparent to Paul and me that most participants had only a very partial and idiosyncratic understanding of the material they had been presented with.   The synthesized statement sets, however, were (in our opinion) very good “takes” on the material.  In other words, unsurprisingly, 50 brains really are a lot better than one (in most cases).  The trouble is synthesizing the thinking of 50 brains.
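The grouping step in this synthesis was done entirely by hand, but something like it can be sketched programmatically. A hypothetical illustration, clustering nominated propositions by crude word overlap and taking one representative per group (the propositions, function names and similarity threshold are all invented):

```python
def words(text):
    """Bag of lower-cased words, with trailing punctuation stripped."""
    return {w.lower().strip(".,;") for w in text.split()}

def jaccard(a, b):
    """Similarity of two word sets: size of overlap divided by size of union."""
    return len(a & b) / len(a | b)

def group_propositions(propositions, threshold=0.4):
    """Greedily assign each proposition to the first similar-enough group."""
    groups = []
    for p in propositions:
        for g in groups:
            if jaccard(words(p), words(g[0])) >= threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

nominated = [
    "Addiction is strongly linked to social inequality",
    "Social inequality is strongly linked to addiction",
    "Treatment services should be expanded",
]
groups = group_propositions(nominated)
# One representative ("core") statement per group.
candidates = [g[0] for g in groups]
```

Here the first two nominations collapse into one group, leaving two candidate statements. In practice, human judgment did this far better than any such heuristic could, which is rather the point of the last bullet above.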

Phase 2: Deliberation

Half of day three was devoted to deliberating over selected statements, using real-time argument mapping with the Reasonable software. A whole-group session introduced the participants to the approach, and made some progress on a particular issue. In another session there were two groups with separate facilitators; each chose their own issues to debate.

Phase 3: Consensus

In the final phase all participants used an online questionnaire to register their attitudes towards each of the 32 propositions. Each participant was asked, for each statement, to choose Agree/Disagree/Abstain, Include/Exclude/Abstain, and was able to offer comments. The online system automatically and immediately collated the results, producing graphical (bar chart) displays of the level of consensus.
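The collation step ran automatically on the online system; its core, tallying votes and reporting a level of consensus per statement, might look something like this rough sketch (the statement text, vote counts and all names are invented):

```python
from collections import Counter

def collate(responses):
    """responses maps each statement to a list of Agree/Disagree/Abstain votes.

    Returns, per statement, the raw counts and a level of consensus: the
    share of Agree among those who voted either way (abstentions excluded).
    """
    summary = {}
    for statement, votes in responses.items():
        counts = Counter(votes)
        voted = counts["Agree"] + counts["Disagree"]
        consensus = counts["Agree"] / voted if voted else 0.0
        summary[statement] = (counts, consensus)
    return summary

responses = {
    "Addiction and inequality are mutually reinforcing":
        ["Agree"] * 44 + ["Disagree"] * 3 + ["Abstain"] * 3,
}

for statement, (counts, consensus) in collate(responses).items():
    bar = "#" * round(consensus * 20)  # crude text stand-in for a bar chart
    print(f"{statement[:40]:40} {bar} {consensus:.0%}")
```

The two separate questions per statement (Agree/Disagree and Include/Exclude) would simply be two such tallies side by side.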

In the final session, the results were reviewed. We found that there was a surprisingly high level of consensus on almost all propositions; that, in other words, the draft Position Statement did in fact represent a consensus opinion of the group. Note also that the Position Statement is accompanied by a precise specification of the extent to which the group in fact thought that each component statement was (a) true, and (b) worth including.

The level of consensus regarding the Position Statement developed through the process is particularly noteworthy in light of the fear, expressed to us prior to the Summer School, that there would be such disagreement between the major groupings of participants (roughly, the “addictions” people and the “inequalities” people) that there would be literally nothing (or nothing worth saying) that they could agree on.

We think that this technologically augmented process (augmented in two ways: by argument mapping software with projection, and by the online questionnaire) could well be deployed again in a wide range of contexts in which groups get together and need to figure out what they think on a complex issue; and, in particular, need to figure out what, if anything, they can agree on to form a basis for policy.
