On Thursday 9th October I’m doing a presentation at a conference of The Tax Institute, the Australian professional association for tax specialists, introducing decision analysis techniques.  The presentation will illustrate (with live demonstration) the following applications:

  • Using quantitative risk analysis (Monte Carlo simulation) to help a client gain better insight into the probable or possible outcomes of a certain tax strategy;
  • Using decision trees to help a client decide whether to pursue a dispute with the Tax Office through the courts.
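For readers who want a concrete sense of what these demonstrations involve, here is a minimal Python sketch of both techniques. Every probability and dollar figure below is an invented placeholder for illustration, not a number from the conference material:

```python
import random
import statistics

def simulate_tax_outcome(n_trials=100_000, seed=42):
    """Monte Carlo sketch: distribution of net benefit of a tax strategy.

    Invented assumptions: the strategy saves $100k if it survives review,
    but there is a 30% chance it is disallowed, triggering an uncertain
    penalty of 25-75% of the amount at stake.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        if rng.random() < 0.30:                          # strategy disallowed
            penalty = rng.uniform(0.25, 0.75) * 100_000  # uncertain penalty
            outcomes.append(-penalty)
        else:                                            # strategy stands
            outcomes.append(100_000)
    return outcomes

def litigate_expected_value(p_win=0.6, win_amount=500_000, legal_costs=120_000):
    """Decision-tree sketch: expected value of pursuing a dispute in court.

    With invented placeholder inputs, compare this against the value of
    settling to decide whether litigating is worthwhile on average.
    """
    return p_win * win_amount - legal_costs

outcomes = simulate_tax_outcome()
print(f"mean net benefit: ${statistics.mean(outcomes):,.0f}")
print(f"chance of a loss: {sum(o < 0 for o in outcomes) / len(outcomes):.1%}")
print(f"EV of litigating: ${litigate_expected_value():,.0f}")
```

In a real engagement the inputs would come from elicited expert judgements rather than being hard-coded, and the simulation would typically be run in dedicated risk-analysis software rather than raw Python.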

The conference paper is available here.

An excerpt:

Decision analysis techniques are well-developed and used, more or less widely, in various other professions such as engineering and finance. However, they are rarely used by tax specialists, or by lawyers and accountants more broadly.

Why? One perspective is that decision analysis is fundamentally ill-suited to the kinds of reasoning and decision making involved in tax matters, which are thought to involve unquantifiable issues and nuances requiring intuitive nous of the kind only highly trained and experienced lawyers or accountants can provide.

An alternative perspective is that tax matters would almost always benefit from decision analysis, and that tax specialists fail to use it only because they are trapped behind boundaries imposed by their professional traditions, their training, and their intellectual inertia. A strong version of this view is that tax specialists are derelict in failing to provide their clients with an easily-obtainable level of clarity and rigour.

In the spirit of John Stuart Mill, this paper takes a middle position. It suggests that decision analysis is potentially useful for certain types of problems regularly handled by tax specialists, while not being appropriate for many others. Decision analysis may represent an important opportunity for tax specialists to provide greater value to sophisticated clients.

1.2.1 Three Thinking Modes

At a high level, the relation of decision analysis to the kinds of intellectual labour generally undertaken by tax specialists is summarized in this diagram.


To indulge in some useful caricatures, qualitative thinking is the domain of the lawyer. It uses no numbers at all, or at most simple arithmetic. The central concept is the argument. Making the most important decisions is always a matter of “weighing up” arguments expressed in legally-inflected natural language.

Quantitative deterministic thinking is the speciality of the accountant. It is epitomised in the structures and calculations in an ordinary spreadsheet, in which specified inputs are “crunched” into equally specific outputs. The central concept is calculation; uncertainties are replaced by “assumptions”. Decisions generally boil down to comparing the magnitudes of numerical outputs, in the penumbral light cast by the background knowledge, intuitions and biases of the decision maker.

The third mode of thinking, probabilistic, is of course the decision analyst’s territory. The central concept is uncertainty, and the essential gambit is framing and manipulating probabilistic representations of uncertainty.

In this context, the “master” tax specialist has facility, or even advanced expertise, in all three modes of thinking.

There’s a familiar idea from the world of sport – that winning requires an elite team and not just a team of elite players.

Does something similar apply in the world of decision making?

In many situations, critical decisions are made by small groups.  The members of these groups are often “elite” in their own right.  For example, in Australia monthly interest rate decisions are made by the board of the Reserve Bank of Australia.  This is clearly a “team” of elite decision makers.

However it is not clear that they are an elite team of decision makers.   For current purposes, I define an elite decision team as a small decision group conforming to all or at least most of the following principles:

  1. The team operates according to rigorously thought-through decision making practices. Wherever possible these practices should be strongly evidence-based.
  2. The team has been trained to operate as a team using these practices. Members have well-defined and well-understood roles.
  3. Members have been rigorously trained as decision makers (and not just as, say, economists).
  4. The team, and members individually, are rigorously evaluated for their decision making performance.
  5. There is a program of continuous improvement.

Note also that the team should be a decision making team, i.e. one that makes decisions (commitments to courses of action) rather than judgements of some other kind such as predictions.

There are many types of teams which do operate according to analogs of these principles – for example elite sporting teams, as mentioned, and small military teams such as bomb disposal squads.  These teams’ operations involve decision making, but they are not primarily decision making teams.

I doubt the Board of the RBA is an elite decision team in this sense, but would be relieved to find out I was wrong.

More generally, I am currently looking for good examples of elite decision teams.  Any suggestions are most welcome.

Alternatively, if you think this idea of an elite decision team is somehow misconceived, that would be interesting too.

Well-known anti-theist Sam Harris has posted an interesting challenge on his blog.  He writes:

So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view. 

In the previous post on this blog, Seven Habits of Highly Critical Thinkers, habit #3 was Chase Challenges.  If nothing else, Harris’ post is a remarkable illustration of this habit.

The quality of his case is of course quite another matter.

I missed the deadline for submission; I haven’t read the book, and don’t intend to, though it seems interesting enough. So I will just make a quick observation about the quality of Harris’ argument as formulated.

In a nutshell, a simple application of argument mapping techniques quickly and easily shows that Harris’ argument, as stated by Harris himself on the challenge blog page, is a gross non-sequitur, requiring, at a minimum, multiple additional premises to bridge the gap between his premises and his conclusions.  In that sense, his argument as stated is easily shown to be seriously flawed.

Here is how Harris presents his argument:

1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

This formulation is short and clear enough that creating a first-pass argument map in Rationale is scarcely more than drag and drop:


Now, as explained in the second of the argument mapping tutorials, there are some basic, semi-formal constraints on the adequacy of an argument as presented in an argument map.

First, the “Rabbit Rule” decrees that any significant word or phrase appearing in the contention of an argument must also appear in at least one of the premises of that argument.  Any significant word or phrase appearing in the contention but not appearing in one of the premises has suddenly appeared out of thin air, like the proverbial magician’s rabbit, and so is informally called a rabbit.  Any argument with rabbits is said to commit rabbit violations.

Second, the Rabbit Rule’s sister, the “Holding Hands Rule,” decrees that any significant word or phrase appearing in one of the premises must appear either in the contention, or in another premise.

These rules are aimed at ensuring that the premises and contention of an argument are tightly connected with each other.  The Rabbit Rule tries to ensure that every aspect of what is claimed in the contention is “covered” in the premises.  If the Rabbit Rule is not satisfied, the contention is saying something which hasn’t been even discussed in the premises as stated.  (Not to go into it here, but this is quite different from the sense in which, in an inductive argument, the contention “goes beyond” the premises.) The Holding Hands Rule tries to ensure that any concept appearing in the premises is doing relevant and useful work.
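The Rabbit Rule even lends itself to a crude mechanical check: collect the significant words of the contention and flag any that appear in no premise. The Python sketch below uses an improvised stop-word list, and of course real argument mapping works with concepts and phrases rather than bare words, but it illustrates the idea on Harris’ own argument:

```python
import re

# Words too common to count as "significant" (a crude, illustrative stop list).
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "in", "on", "to", "that",
              "this", "is", "are", "be", "with", "some", "will", "can", "must",
              "have", "such", "these", "their", "they", "it", "its"}

def significant_words(text):
    """Lower-case words in the text, minus the stop list."""
    return set(re.findall(r"[a-z]+", text.lower())) - STOP_WORDS

def rabbits(contention, premises):
    """Rabbit Rule check: contention words appearing in no premise."""
    premise_words = set().union(*(significant_words(p) for p in premises))
    return significant_words(contention) - premise_words

contention = ("Questions of morality and values must have right and wrong "
              "answers that fall within the purview of science.")
premises = [
    "Morality and values depend on the existence of conscious minds.",
    "Conscious minds and their states are natural phenomena, fully "
    "constrained by the laws of the universe.",
]
# Words like 'purview' and 'science' should pop out as rabbits.
print(sorted(rabbits(contention, premises)))
```

This is only a heuristic: a word-level match can miss conceptual rabbits (synonyms, paraphrases) and flag harmless ones, which is why the rules are applied by human judgement in practice.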

Consider then the basic argument consisting of Contention 1 and the premises beneath it.   It is obvious on casual inspection that much – indeed most – of what appears in Contention 1 does not appear in the premises.  Consider for example the word “purview”, or the phrase “falls within the purview of science”.  These do not appear in the premises as stated. What does appear in Premise 2 is “natural phenomena, fully constrained by the laws of the universe”.  But as would be obvious to any philosopher, there’s a big conceptual difference between these.

What Harris’ argument needs, at a very minimum, is another premise.  My guess is that it is something like “Anything fully constrained by the laws of the universe falls within the purview of science.”   But two points.  First, this suggested premise obviously needs (a) explication, and (b) substantiation.  In other words, Harris would need to argue for it, not assume it. Second, it may not be Harris’ preferred way of filling the gaps (one of them, at least) between his premises and his conclusion.  Maybe he’d come up with a different formulation of the bridging premise.  Maybe he addresses this in his book.

It would be tedious to list and discuss the numerous Rabbit and Holding Hands violations present in the two basic arguments making up Harris’ two-step “proof”.   Suffice to say that if both Rabbit Rule and Holding Hands Rule violations are called “rabbits” (we also use the term “danglers”), then his argument looks a lot like the famous photo of a rabbit plague in the Australian outback:


Broadly speaking, fixing these problems would require quite a bit of work:

  • refining the claims he has provided
  • adding suitable additional premises
  • perhaps breaking the overall argument into more steps.

Pointing this out doesn’t prove that his main contentions are false.  (For what little it is worth, I am quite attracted to them.)  Nor does it establish that there is not a solid argument somewhere in the vicinity of what Harris gave us. It doesn’t show that Harris’ case (whatever it is) for a scientific understanding of morality is mistaken.  What it does show is that his own “flagship” succinct presentation of his argument (a) is sloppily formulated, and (b) as stated, clearly doesn’t establish its contentions.   In short, as stated, it fails.  Argument mapping reveals this very quickly.

Perhaps this is why, in part, there is so much argy bargy about Harris’ argument.

Final comment: normally I would not be so picky about how somebody formulated what may be an important argument.  However in this case the author was pleading for criticism.

Some people excel at critical thinking; others, not so much. Scientist Carl Sagan and investor Charlie Munger are oft-mentioned exemplars; my friend and colleague Paul Monk is less famous but also impressively sharp. On the other side we have… well, Homer Simpson can stand in for all those it would be rude to name.

But what makes a thinker more highly critical than others? And how can any person lift their game? This can be explored through the notion of habits. Highly critical thinkers have developed many habits which help them think more effectively. With sufficient commitment and patience, and perhaps a little coaching, such habits can be acquired by the rest of us.

This post describes seven major habits of highly critical thinkers. The list is obviously inspired by the hugely successful book about highly effective people. Whatever one might think of that book, if a similar exercise for critical thinking could have even a tiny fraction of its impact, it would be well worth undertaking.

Everybody is familiar with the term “critical thinking,” and has a reasonable working sense of what it is, but there is much disagreement about its proper definition. There’s no need to enter that quagmire here. Suffice to say that critical thinking, for current purposes, is truth-conducive thinking, i.e., thinking that leads to correct or accurate judgements. It is, in a phrase I like to use, the art of being right – or at least, of being more right more often.

But what kind of thinking conduces to truth? What is this subtle art? Back in the early seventeenth century, the philosopher Francis Bacon characterised it this way:

For myself, I found that I was fitted for nothing so well as for the study of Truth; as having a mind nimble and versatile enough to catch the resemblances of things … and at the same time steady enough to fix and distinguish their subtler differences; as being gifted by nature with desire to seek, patience to doubt, fondness to meditate, slowness to assert, readiness to consider, carefulness to dispose and set in order; and as being a man that neither affects what is new nor admires what is old, and that hates every kind of imposture.

Four hundred years later, political scientist Philip Tetlock conducted extensive and rigorous studies of hundreds of experts in the political arena, focusing on their ability to forecast. He found that the experts fell into two main groups:

One group of experts tended to use one analytical tool in many different domains; they preferred keeping their analysis simple and elegant by minimizing “distractions.” These experts zeroed in on only essential information, and they were unusually confident—they were far more likely to say something is “certain” or “impossible.” In explaining their forecasts, they often built up a lot of intellectual momentum in favor of their preferred conclusions. For instance, they were more likely to say “moreover” than “however.”

The other lot used a wide assortment of analytical tools, sought out information from diverse sources, were comfortable with complexity and uncertainty, and were much less sure of themselves—they tended to talk in terms of possibilities and probabilities and were often happy to say “maybe.” In explaining their forecasts, they frequently shifted intellectual gears, sprinkling their speech with transition markers such as “although,” “but,” and “however.”

The second group, the “foxes,” were better forecasters than the first, the “hedgehogs.” Foxy thinking, it seems, is more truth-conducive than hedgehoggery.

Two points jump out from these quotes. First, the two accounts have much in common, underneath the differences in style. The essence of critical thinking is largely stable across the centuries.

Second, they are both describing what good thinkers tend to do. Theorists of critical thinking have various ways of thinking about these tendencies; some talk of dispositions, others of virtues. Here I take what may be a novel approach and consider them as acquirable habits.

A habit is just a propensity to take actions of a certain kind in a relatively automatic or reflexive manner. And as we all know, and as elaborated in the recent book by Charles Duhigg, The Power of Habit, good habits can be cultivated, and bad habits overcome. So the goal here is to list:

  • propensities to do things of certain kinds more or less automatically under appropriate circumstances; which propensities are
  • possessed by highly critical thinkers much more often than by ordinary folk, and which
  • help them to make more correct or accurate judgements, and
  • could be picked up, or further developed, by any ordinary person with a reasonable amount of effort; with the result that
  • they would themselves become more critical.

The habits described below are the kinds of things highly critical thinkers really do do. They are not merely prescriptions or guidelines which would help anyone to be more critical if anyone were disciplined or virtuous enough to follow them.

To illustrate: Blogger Shane Parrish reports that a hedge fund manager and author, Michael Mauboussin, asked the Nobel-winning psychologist Daniel Kahneman what a person should do to improve their thinking. “Kahneman replied, almost without hesitation, that you should go down to the local drugstore and buy a very cheap notebook and start keeping track of your decisions.”

Now, it is plausible that keeping track of your decisions in a notebook would improve your thinking. However, it is not a habit of highly critical thinkers, at least in my experience. I don’t recall ever observing a highly critical thinker doing it, or hearing one say they do it. I don’t even do it myself, even after hearing the great Laureate’s advice (and apparently Mauboussin doesn’t either).

And so to the habits themselves.

1. Judge judiciously

One of the most salient thinking traps is, in the common phrase, jumping to conclusions. Highly critical thinkers have cultivated four main habits which help them avoid this.

First, they tend to delay forming a judgement until the issue, and the considerations relevant to it, have been adequately explored, and also until any hot emotions have settled (Bacon’s “slowness to assert”).

Second, they tend to abstain altogether from making any judgement, where there are insufficient grounds to decide one way or another. They feel comfortable saying, or thinking, “I don’t know.”

Third, when they do make a judgement, they will treat it as a matter of degree, or assign a level of confidence to it, avoiding treating any non-trivial issue as totally certain.

And fourth, they treat their judgements as provisional, i.e., made on the basis of the evidence and arguments available at the time, and open to revision if and when new considerations arise.

2. Question the questionable

Much more often than ordinary folk, highly critical thinkers question or challenge what is generally accepted or assumed. Sometimes they question the “known knowns” – the claims or positions which constitute widely-appreciated truths. Other times, they target the implicit, the invisible, the unwittingly assumed.

Highly critical thinkers do not of course question everything. They are not “radical skeptics” doubting all propositions (as if this was even possible anywhere other than in philosophical speculation). Rather, they tend to be selective or strategic in their questioning, targeting claims or positions that are worth challenging, whether in some practical or intellectual sense. They are skilled in identifying or “sniffing out” the “questionable,” i.e. claims which are potentially vulnerable, and whose rejection may have important or useful implications.

3. Chase challenges

We all know that feeling of instant irritation or indignation when somebody dares to suggest we might be wrong about something. Highly critical thinkers have cultivated various habits counteracting this reaction – habits which actually lead to them being challenged more often, and benefiting more from those challenges.

For example, while we mostly seek and enjoy the company of those who share our views, highly critical thinkers make an effort to engage those of a contrary opinion, tactfully eliciting their objections. And when fielding such challenges, highly critical thinkers resist the instinct to ignore, reject or rebut. They will be found doing such seemingly perverse things as rephrasing the objections to be sure of understanding them, or even rendering them more powerful. Charlie Munger is quoted as saying “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Another habit of highly critical thinkers is reading widely, and especially reading from sources likely to present good quality contrary views and arguments. Finding themselves drawn to a position (e.g., that William Shakespeare of Stratford was unlikely to have been the author of the works attributed to him) highly critical thinkers seek out the best presentations of the orthodox position. In short, they strive to test or “prove” their views, rather than support or defend them.

4. Ascertain alternatives

Highly critical thinkers are always mindful that what they see before them may not be all there is. They habitually ask questions like: what other options are there? What have we missed? As opposed to/compared with what? They want to see the full range of relevant alternatives before passing judgement.

For example, when considering a difficult decision, they put extra effort into searching for – or creating – courses of action outside the standard, provided or obvious ranges. When trying to explain why something happened, they will allocate more time than most people do to expanding the range of hypotheses under consideration. In a negotiation, they seek to develop new, mutually acceptable solutions rather than “horse-trading” on existing positions.

5. Make use of methods

When considering a course of action, a critical thinker of my acquaintance, who happened to be a successful banker and company director, said she always asked herself two simple questions: (1) what’s the worst thing that could happen here? and (2) what’s the best thing that could happen? The first question prompts us to search for potential drawbacks a bit more thoroughly than we might otherwise have done. The routine amounts to a rudimentary (or “fast and frugal”) risk analysis.

This example illustrates how highly critical thinkers habitually deploy suitable methods to structure their thinking and improve their conclusions. Another example: in psychology department colloquia I used to attend, participants, after hearing a colleague present their work, would reflexively use a method I call scenario testing. This involves diligently and creatively searching for scenarios in which their colleague’s conclusions are false, even though their premises (data) are true. To the extent that plausible scenarios of this kind can be identified, the inferences from the premises to the conclusions are suspect.

There are literally scores of methods one might use. Some, like the rudimentary risk analysis mentioned above, are simple and informal, and can be quickly learned and exploited by almost anyone. Others are elaborate, technical and may require specialist training (e.g., rigorous argument mapping, or full quantitative risk analysis). Generally, the more sophisticated the method, the less widely it is used, even by the most highly critical thinkers. Every such thinker has built up their own repertoire of methods. What’s most important is not so much their particular selection, but the fact that they habitually deploy a wider range of methods, more often, than ordinary folk.

6. Take various viewpoints

Highly critical thinkers well understand that their view of a situation is unique, partial and biased, no matter how clear, compelling and objective it seems. They understand that there will always be other perspectives, which may reveal important aspects of the situation.

Of course, most people appreciate these points to some degree. The difference is that highly critical thinkers are especially keen to profit from a more complete understanding, and so have cultivated various habits of actually occupying, as best they can, those other viewpoints, so as to see for themselves what additional insights can be gained.

One such habit is trying to “stand in the shoes” of a person with whom we may have some conflict, or are inclined to criticise. Another is to adopt the persona of a person, perhaps a hypothetical person, who strongly disagrees with your views, and to argue against yourself as strongly as they would. A third (relatively rare) is to take the perspective of your future self, having found out that your current position turned out to be wholly, and perhaps disastrously, wrong. What do you see, from the future, that you are missing now?

7. Sideline the self

People tend to be emotionally attached to views. Core beliefs, such as provided by religions or ideologies, help provide identity, and the comforts of clarity and certainty. Sometimes pride binds us to positions; having publicly avowed and defended them previously, it would be humiliating to concede we were wrong. Highly critical thinkers have habits which help to sever these emotional bonds between self and beliefs, allowing the thinker to discard or modify beliefs as indifferently as a used car dealer will trade vehicles. Highly critical thinkers have in other words learned how to sideline the self, removing it from the field of epistemic play.

One habit is to avoid verbally identifying oneself with positions by using distancing locutions. Instead of saying things like “It’s obvious to me that…” they will say things like “one plausible position is that…”. A similar technique is to give positions names. Instead of boldly asserting that Shakespeare must have written the works, publicly committing yourself to this view, say “According to the Stratfordian view…”.

To be continued…

This post is already much longer than originally intended, but still leaves much unsaid. A few quick final points:

  • The current list can’t claim to be definitive. Others may well come up with different lists.
  • It is also a work in progress. I hope to elaborate each of the major habits in separate posts.
  • Clearly much more could be said about the notion of a habit, and the somewhat paradoxical character of critical thinking habits, which generally involve automatically (“without thinking about it”) engaging in thinking activities.
  • This list is not based on rigorous empirical research, though in places it is informed by such research. There is much scope for scientific clarification here. Tetlock’s studies provide an impressive model.

Comments are most welcome.

I’ve had the following abstract accepted for a presentation at a conference in December at the University of Melbourne, Higher Education Research & the Student Learning Experience in Business.

A Pragmatic Definition of Critical Thinking for Business

This presentation will lay out a pragmatic definition of critical thinking.  It doesn’t purport to be the definitive characterization of what critical thinking is. Rather, it is offered as a convenient framework for understanding the nature and scope of critical thinking, which may be useful for purposes such as developing a dedicated subject in critical thinking for business, improving the teaching of critical thinking within existing subjects, or evaluating the effectiveness of a business course in developing critical thinking.

The definition is constructed around five commitments:

    • First, the essence of critical thinking is correct or accurate judgement. That is, to think critically is to think in ways that are conducive to being “more right more often” when making judgements.
    • Second, “being more right more often” can be achieved through the skillful application of general thinking methods or techniques.
    • Third, these techniques range on a spectrum from the simple and easily acquired to technical methods which require special training.
    • Fourth, for all but the simplest of methods, there are degrees of mastery in application of these techniques.
    • Fifth, there are many different kinds of judgements made in business, including decision making, prediction, estimation, (causal) explanation, and attribution of responsibility. For each major type of judgement, there are typical pitfalls, and a range of critical thinking methods which can help people avoid or compensate for those pitfalls.

These commitments enable us to define a kind of three-dimensional chart representing the critical thinking competency of any individual. Along one (categorical) axis are the various kinds of judgements (decision making, etc.). Another axis represents the spectrum from simple through to advanced critical thinking methods. Particular methods can then be placed in appropriate “boxes” in the grid defined by these axes. A person will have a degree of mastery of the methods in each box; this can be represented on a third dimension. A person’s critical thinking competency is thus a distinctive “landscape” formed by the varying levels of mastery.
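One way to picture the data structure behind this chart is a grid keyed by (judgement type, method sophistication), with mastery as the third dimension. The Python sketch below uses invented method names and mastery scores purely to illustrate the shape of such a “landscape”:

```python
# Sketch of the three-dimensional competency "landscape" described above.
# Axes: judgement type (categorical) x method sophistication (ordinal);
# cell values: mastery of particular methods, on an invented 0-5 scale.
competency = {
    ("decision making", "simple"):       {"pro/con lists": 4},
    ("decision making", "advanced"):     {"decision trees": 2},
    ("prediction",      "simple"):       {"reference-class check": 3},
    ("estimation",      "intermediate"): {"Fermi estimation": 3},
}

def landscape(profile):
    """Collapse a profile to the peak mastery in each (judgement, level) box."""
    return {cell: max(methods.values()) for cell, methods in profile.items()}

for (judgement, level), mastery in sorted(landscape(competency).items()):
    print(f"{judgement:16s} | {level:12s} | mastery {mastery}")
```

Empty boxes (judgement types or levels with no methods mastered) are just as informative as filled ones: they show where a person’s, or a curriculum’s, critical thinking coverage is thin.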

This characterisation tailors, for business, a more general pragmatic approach to understanding critical thinking.  About a year ago I developed this approach in preparation for a workshop in the US on development of a test of critical thinking for intelligence analysts; my role in the workshop was to lay out a general framework for understanding what critical thinking is.   That approach was described in a manuscript Dimensions of Critical Thinking.

I’m also supporting a team from the University of Sydney Business School, who have had the following abstract accepted:

Evaluating critical thinking skill gains in a business subject

Helen Parker, Leanne Piggott, Lyn Carson
University of Sydney Business School
Tim van Gelder
University of Melbourne and Austhink Consulting

Critical thinking (CT) is one of the most valued attributes of business school graduates, and many business school subjects claim to enhance it. These subjects frequently implement pedagogical strategies of various kinds aimed at improving CT skills. Rarely however are these efforts accompanied by any rigorous evaluation of CT skill gains. But without such evaluation, it is difficult to answer questions such as:

    • Are our students’ CT skills in fact improving? By how much?
    • Are those skills improving more than they would have even without our special CT instruction?
    • Are the marginal gains worth the cost?
    • Are our attempts to improve our instruction from semester to semester making any difference?

These kinds of questions are particularly relevant to the University of Sydney Business School, which has an entire subject dedicated to improving CT (BUSS5000 – Critical Thinking in Business), enrolling some 800 students per semester. Consequently, in 2013, the Business School embarked on a large-scale, multi-year evaluation program. The evaluation is based on pre- and post-testing using an independent objective test (the Halpern Critical Thinking Assessment), whose coverage overlaps with the range of critical thinking skills taught in the subject. This presentation will give an overview of the approach it has adopted. It will discuss some of the challenges and pitfalls in the testing process, and how to interpret results. Finally, it will present data and insights from the first semester of full-scale evaluation. The session should be of interest to anyone interested in evaluating CT skills, or more generally in how business school education can enhance CT.
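As a side note on interpreting pre- and post-test results: one common way to express a skill gain is as a standardised effect size. The Python sketch below computes Cohen’s d on invented scores; it is offered only to illustrate the idea, and is not the evaluation method or data used in the Sydney study:

```python
import statistics

def cohens_d(pre, post):
    """Standardised pre-to-post gain: mean difference over pooled sample SD.

    A rough convention reads d of about 0.2 as small, 0.5 medium, 0.8 large.
    """
    diff = statistics.mean(post) - statistics.mean(pre)
    pooled_sd = ((statistics.stdev(pre) ** 2 + statistics.stdev(post) ** 2) / 2) ** 0.5
    return diff / pooled_sd

pre  = [55, 60, 62, 48, 70, 65, 58, 61]   # invented test scores
post = [60, 66, 63, 55, 74, 70, 60, 67]
print(f"Cohen's d = {cohens_d(pre, post):.2f}")
```

Even with an effect size in hand, questions like the second bullet above (gains relative to no special instruction) require a comparison group, which is one of the challenges a rigorous evaluation program has to confront.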

There’s an obvious complementarity between these two topics.

We knew things were pretty dire, but a new poll has put some numbers onto our fears.

The “Citizen’s Agenda” survey from the University of Melbourne has found that voters are “pretty appalled” at the standard of political debate, with 57% of voters saying things are getting noticeably worse. Not surprisingly, the overall level of interest in politics is sliding as well.

These numbers underscore the numerous criticisms made in recent years by people who’ve been in the political trenches. Diverse luminaries such as Barry Jones, Lindsay Tanner, and Malcolms Fraser and Turnbull have complained that public political debate has never been so bad.

What’s happening here? Why, when Australians are more educated and connected than ever before, is political discourse being degraded?

A natural instinct is to search for somebody or something to blame – some dark force degrading public discourse for its own greedy purposes. The news media and politicians are popular suspects; others point to campaign managers, advertisers and spin merchants; or to television, the internet, or mobile devices.

Alternatively, we can view the problem through the lens of a simple and familiar metaphor: that people are, increasingly, just not playing by the rules.

Consider chess – a game with a limited set of clear and accepted rules. Rule-governed play typically delivers a clear outcome, with everyone agreeing who won or lost.

If we think of public debate as a kind of game, then the rules are the laws and conventions of logic and disputation, as articulated by logicians and rhetoricians over the centuries.

Public debate is of course not a game. It is a deadly serious business, often literally so. But that just makes it all the more important that people respect the rules.

So what’s going wrong?

One problem is that people often don’t really know what the rules are. For the most part, they have never been educated in logic and disputation, and would be hard pressed to give any account of the rules. It’s hard to play correctly when you’re foggy about what’s OK and what’s not.

Second, there will always be incentives to cheat. This can be straightforwardly foul play, like a rugby player throwing a punch under cover of a maul. Witness climate change deniers who trot out the argument that temperature hasn’t increased since 1998, no matter how many times its flaws have been decisively exposed.

Worse than breaking the rules is subverting them. This amounts to changing the game, or even destroying the game entirely. This sounds extreme, but Paul Krugman and others have been accusing US Republicans of precisely this gambit – of “refusing to live in an evidence-based world.”

Third, there’s no effective umpire. There’s no authority or expert whose role is to judge what’s legitimate and whose calls are accepted by all the players.

Finally, we all suffer from the problem that our mental machinery is poorly designed for the task. Public debates can get complicated, and the brains bequeathed to us by evolution don’t have enough “RAM” to comprehend the evolving state of play. And we are all subject to a wide range of cognitive biases which reliably lead us to make errors of logic and to violate norms of constructive debate.

Surveying these factors, the prospects for any substantial improvement seem remote. Three – the incentive to cheat, the lack of an independent umpire, and cognitive limitations – are deep features of what is sometimes called “the human condition.” Education can in principle help people know what the rules are, but is a slow and unreliable way to effect social change.

Fortunately there is another option.

Think of public debate as taking place in various arenas. The floor of parliament is one; commentary in the mainstream media is another. The internet has allowed the emergence of new online arenas such as the blog- and Twitter-spheres.

The subtle but critical point is that these various arenas promote or discourage playing by the rules in different ways. Twitter, for example, makes complex chains of reasoning almost impossible, and promotes follow-chambers in which contrary views are all too easily ignored or ridiculed. Another example is comment forums on news websites, which encourage trolling by having the discussion open to all, and allowing anonymity (via pseudonymity).

However, the programmability of the internet makes possible a great variety of arenas, and new ones aimed at improving public debate, and democracy more broadly, are proliferating around the world. Oursay, a partner in the Citizen’s Agenda project, is just one example, increasingly prominent in Australia.

Some of these forums are being designed to gently guide participants towards higher quality participation in political debate. One way to do this is to scaffold or “nudge” participants to stick to the rules more often, perhaps by giving greater prominence to those who do.

An example is the German “Faktencheck” (Fact Check) project, which works in collaboration with mainstream media entities such as Frankfurter Allgemeine Zeitung to host public debates on current political issues.

Our project, YourView, is another example.

Forums such as these will continue to evolve and play an increasingly large role in public political debate. Lindsay Tanner has spoken of a “revolt of the engaged”. This revolt can go beyond just online fundraising and petitioning; if the forums are designed correctly, it can start to halt or reverse the slide in the quality of public political debate.

This is an excerpt from a chapter to appear in a volume edited by Margaret Simons on “new media entrepreneurs”. 

What is the bare minimum a citizen needs to know in order to have a reasonable, informed opinion on a major public issue? This is not a trick question. Boiled down to the basics, a citizen needs to know what the issue is, the basic facts, and the key arguments for and against.

Consider negative gearing of real estate investments. Perhaps you think it is wise economic policy, or perhaps you think it an expensive rort. Either way, you really ought to know what negative gearing is (not everybody does). You should know critical facts such as how much it costs the government each year, who gets the benefits, and what other effects it might have, such as making rental housing more available and affordable. You’d need to be aware of the best arguments for keeping it on one hand, and the best arguments for abolishing it on the other.

Of course, having the bare minimum knowledge does not automatically lead to a reasonable opinion, and ideally a citizen would know much more than the bare minimum about the merits of negative gearing as one component of an efficient, equitable and sustainable taxation system.  My point is just that unless you have at least the bare minimum then your opinion is seriously ill-founded.

The trouble is, citizens often don’t have this kind of minimum knowledge. Choose an Australian adult and a major public issue at random, and chances are that if they understand the issue at all, they will be ignorant of key facts, misinformed, or unaware of major arguments.

For example, I thought I had a pretty good understanding of negative gearing; indeed I’d even negatively geared the occasional investment. But when I sat down to draft a succinct summary of the pros and cons of negative gearing as a tax policy, I immediately discovered how incomplete and uncertain my knowledge was. It took the better part of a day of reading online, filtering, digesting, sorting and drafting to come up with a short written summary of what I needed to know. Only then did I really appreciate how half-baked my previous views had been.

Now, I will not rehearse here the reasons why this kind of ignorance is a problem for democracy. Nor will I heap blame on the usual suspects. Nor will I hand wave about how the schools, or the government, or the media, or someone should be doing something about it. Finally, I will not indulge any utopian fantasy of a fully informed citizenry.

Rather, I’ll make a simple suggestion.

It is not too hard for someone with suitable expertise to assemble the bare minimum information on a given issue in a short article with a fairly standard structure. You could call this a “backgrounder,” or an “explainer”. I like the phrase “issue clarifier”.

The suggestion is that for all major public issues, these clarifiers be produced and made easily accessible. Then, any interested citizen could rapidly obtain the most essential information on any issue whenever they wanted it. This alone wouldn’t solve the ignorance problem, but it would surely help.

An issue clarifier is a journalistic product. Writing issue clarifiers is a kind of journalism. Doing it well requires broad awareness of the political landscape, the ability to research, analyse and synthesise, and to write succinctly and clearly for a wide audience. In our democratic system, we usually regard journalism as having a special responsibility for keeping the citizenry informed. Since issue clarifiers would obviously be useful in this regard, they should already be a mundane feature of the media landscape.

In short, my suggestion should be redundant. But it is not. Nowhere in the major media can you easily find such clarifiers. Very occasionally, something along these lines appears, but it is quickly lost under the torrents of news, the deluges of opinion, and the tsunamis of mass distraction such as sport, cooking, fashion, celebrity gossip, and so forth.

Why? Partly because issue clarifiers can be a bit dull. They aren’t breaking news; they don’t exploit our appetite for the latest, freshest and most titillating. Unlike opinion pieces, they don’t incite our tribal instincts. Being even-handed, they don’t comfort by stroking our prejudices, or enrage by challenging our convictions. The media survive by attracting attention, and issue clarifiers will generally struggle to compete.

It might also be argued that issue clarifiers are superfluous. The media already provide far more information and debate about major public issues than could ever be conveyed in a short issue clarifier. Why add to this abundance?

It was once said that there are two ways to keep decision makers in the dark. One way is providing too little information; the other is providing too much. Similarly, the vast quantity of fast-changing news and vigorous debate in the media may actually be counterproductive, with respect to the goal of helping the public be basically well-informed on most major issues. Rather than educating, the net effect may be to bewilder and alienate; or to leave people under the illusion that they have much better knowledge than they do.

This can be seen as a market failure. There’s an obvious public good, the provision of issue clarifiers, not being addressed by “business as usual” in the Australian media. So, following the adage that a problem is merely a situation which has not yet been turned to your advantage, there is also here an opportunity. Can a new media player fit this empty niche?

This is one way of looking at the YourView project…
