Archive for the ‘Hypothesis mapping’ Category

While in Northern Virginia last week for a workshop on critical thinking for intelligence analysts, I was also able to do a couple of presentations at The Mitre Corporation, a large US federally-funded R&D organisation.  The audience included Mitre researchers and other folks from the US intelligence community.   Thanks to Steve Rieber and Mark Brown for organising.

Here are the abstracts:

Deliberative Aggregators for Intelligence

It is widely appreciated that under appropriate circumstances “crowds” can be remarkably wise.  A practical challenge is always how to extract that wisdom.  One approach is to set up some kind of market.  Prediction markets are an increasingly familiar example.  Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.

An alternative is a class of systems which can be called “deliberative aggregators.” These are virtual discussion forums, offering standard benefits such as remote & asynchronous participation, and the ability to involve large numbers of participants.  Distinctively, however, they have some kind of mechanism for automatically aggregating discrete individual viewpoints into a collective view.

YourView (www.yourview.org.au) is one example of a deliberative aggregator.  On YourView, issues are raised; participants can vote on an issue, make comments and replies, and rate comments and replies.  Through this activity, participants earn credibility scores, measuring the extent to which they exhibit epistemic virtues such as open-mindedness, cogency, etc.  YourView then uses credibility scores to determine the collective wisdom of the participants on the issue.
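
To make the aggregation idea concrete, here is a minimal sketch of credibility-weighted voting in Python.  The weighting scheme, the `aggregate` helper, and the example voters and scores are my own illustrative assumptions, not YourView’s actual algorithm.

```python
# Hypothetical sketch of credibility-weighted aggregation, in the spirit of
# a deliberative aggregator. Not YourView's actual algorithm.

def aggregate(votes, credibility):
    """Aggregate yes/no votes, weighting each voter by credibility score.

    votes: dict mapping voter -> True (yes) or False (no)
    credibility: dict mapping voter -> non-negative score
    Returns the credibility-weighted proportion of 'yes' votes.
    """
    total = sum(credibility[v] for v in votes)
    if total == 0:
        return 0.5  # no credible signal either way
    yes = sum(credibility[v] for v, vote in votes.items() if vote)
    return yes / total

# Illustrative data: alice has earned twice the credibility of the others.
votes = {"alice": True, "bob": False, "carol": True}
credibility = {"alice": 2.0, "bob": 1.0, "carol": 1.0}
print(aggregate(votes, credibility))  # 0.75
```

The point of the sketch is only that the collective view is not a raw head-count: a participant’s demonstrated epistemic track record scales their influence.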

Although YourView is relatively new and has not yet been used in an intelligence context, I will discuss a number of issues pertaining to such potential usage, including: how YourView can support deliberation handling complex issue clusters (issues, sub-issues, etc.); and how it can handle open-ended questions, such as “Who is the likely successor to X as leader of country Y?”.

Hypothesis Mapping as an Alternative to Analysis of Competing Hypotheses

The Analysis of Competing Hypotheses (ACH) is the leading structured analytic method for hypothesis testing and evaluation.  Although ACH has some obvious virtues, these are outweighed by a range of theoretical and practical drawbacks, which may partly explain why it is taught far more often than it is actually used.  In the first part of this presentation I will briefly review the most important of these problems.

One reason for ACH’s “popularity” has been the lack of any serious alternative.  That situation is changing with the emergence of hypothesis mapping (HM).  HM is an extension of argument mapping to handle abductive reasoning, i.e. reasoning involving selection of the best explanatory hypothesis with regard to a range of available or potential evidence.  Hypothesis mapping is a software-supported method for laying out complex hypothesis sets and marshalling the evidence and arguments in relation to these hypotheses.

The second part of the presentation will provide a quick introduction to hypothesis mapping using an intelligence-type example, and review some of its relative strengths and weaknesses.

Read Full Post »

Draft of a section of a guide I’m working on.  Feedback welcome. 

Hypothesis investigation (short for “hypothesis-based investigation”) is simply attempting to determine “what is going on” in some situation by assessing various hypotheses or “guesses”.  The goal is to determine which hypothesis is most likely to be true. 

Hypothesis investigation can concern

  • Factual situations – e.g. what are current Saudi oil reserves?
  • Causes – e.g. what killed the dinosaurs?
  • Functions or roles – e.g. what was the Antikythera mechanism for?
  • Future events – e.g. how will the economy be affected by Peak Oil?
  • States of mind – e.g. what are the enemy planning to do?
  • Perpetrators – e.g. Who murdered Professor Plum?

Most investigation is to some extent hypothesis-based.  The exception is situations where the outcome is pre-determined in some way (e.g., a political show trial) and the point of the investigation is simply to amass evidence supporting that determination. 

A related, though subtly different notion is that of “hypothesis driven investigation” (Rasiel, 1999), in which a single hypothesis is selected relatively early in the process, and most effort is then devoted to substantiating this hypothesis.   It is hypothesis-based investigation with all attention focused on one guess, at least until forced to reject it and consider another. 

Hypothesis investigation comprises three main activities:

  • Hypothesis generation – coming up with hypotheses;
  • Hypothesis evaluation – assessing relative plausibility of hypotheses given the available evidence; and
  • Hypothesis testing – seeking further evidence.
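
The three activities can be sketched as an iterative loop.  The sketch below uses a simple likelihood-weighted (Bayesian-style) re-normalisation for the evaluation step; the `evaluate` helper, the hypothesis names, and all the numbers are purely illustrative assumptions, not a prescribed method.

```python
# Illustrative loop: generate hypotheses, test (gather evidence),
# evaluate (re-weight plausibilities). Numbers are invented.

def evaluate(priors, likelihoods):
    """Re-weight hypothesis plausibilities by how well each explains
    a new item of evidence, then renormalise so they sum to 1."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Hypothesis generation: start with a set of guesses and rough plausibilities.
plausibility = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# Hypothesis testing: new evidence arrives; each likelihood says how
# expected that evidence is under the corresponding hypothesis.
evidence_likelihoods = {"H1": 0.1, "H2": 0.8, "H3": 0.4}

# Hypothesis evaluation: update the relative plausibilities.
plausibility = evaluate(plausibility, evidence_likelihoods)
print(max(plausibility, key=plausibility.get))  # H2 is now most plausible
```

Note how a hypothesis that started as a long shot can overtake the initial favourite once the evidence is weighed honestly; that is the point of keeping the whole set in play.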

Traps in Hypothesis Investigation

Hypothesis investigation fails, at its simplest, when we accept (take as true) the wrong hypothesis.  This can have dismal consequences if costly actions are then taken.  Hypothesis investigation also fails when

  • there is misplaced or excessive confidence in a hypothesis (even if it happens to be correct);
  • no conclusion is reached when more careful investigation might have revealed that one hypothesis was most plausible. 

There are three main traps leading to these failures.

Tunnel vision

Not considering the full range of reasonable hypotheses.   Lots of effort is put into investigating one or a few hypotheses, usually obvious ones, while other possibilities are not considered at all.  All too often one of those others is in fact the right one. 

Abusing the evidence

Here the evidence already at hand is not evaluated properly, leading to erroneous assessments of the plausibility of hypotheses.

A particular item of evidence might be regarded as stronger or more significant than it really is, especially if it appears to support your preferred hypothesis.  Conversely, a “negative” piece of evidence – one that directly undercuts your preferred hypothesis, or appears to strongly support another – is regarded as weak or worthless.    

Further, the whole body of evidence bearing upon a hypothesis might be mis-rated.  A few scraps of weak evidence might be taken as collectively amounting to a strong case. 

Looking in the wrong places

When seeking additional evidence, you instinctively look for information that is in fact useless, or at least not very helpful, for determining the truth.

In particular we are prone to “confirmation bias,” which is seeking information that would lend weight to our favoured hypothesis.  We tend to think that by accumulating lots of such supporting evidence, we’re rigorously testing the hypothesis.  But this is a classic mistake. We need to know not only that there’s lots of evidence consistent with our favoured hypothesis, but also that there is evidence inconsistent with alternatives.   You need to seek the right kind of evidence in relation to your whole hypothesis set, rather than just lots of evidence consistent with one hypothesis.  

This can have two unfortunate consequences.  The search may be

  • Ineffective – you never find evidence which could have very strongly ruled one or more hypotheses “in” or “out”. 
  • Inefficient – the hypothesis testing process may take much more time and resources than it really should have. 

We fall for these traps because of basic facts of human psychology, hard-wired “features” of our thinking tracing back to our evolutionary origins as hunter-gatherers in small tribal units: 

  • We dislike disorder, confusion and uncertainty.  Our brains strive to find the simple pattern that makes sense of a complex or noisy reality. 
  • We don’t like changing our minds.  We find it easier to stick with our current opinion than to upend things and adopt a new view.  Further, we have an undue preference for hypotheses that are consistent with our general background beliefs, and so don’t force us to question or modify those beliefs.  
  • We become emotionally engaged in the issues, and build affection for one hypothesis and loathing for others.   Hypothesis investigation becomes a matter of protecting one’s young rather than culling the pack (Chamberlin, 1965).
  • We feel social pressure.  We become publicly committed to a position, and feel that changing our minds would mean losing face. 

And of course we are frequently under time pressure, exacerbating the above tendencies.    

General Guidelines for Good Hypothesis Investigation

Canvass a wide range of hypotheses

Our natural tendency is to grab hold of the first plausible hypothesis that comes to mind and start shaking it hard.  This should be resisted.  From the outset you should canvass as wide a range of hypotheses as you reasonably can.  It is impossible to canvass all hypotheses and absurd to even try (Maybe 9/11 was the work of the Jasper County Beekeepers!).   But you can and should keep in mind a broad selection of hypotheses, including at least some “long shots.”   In generating this hypothesis set, diversity is at least as important as quantity.

You should continue seeking additional hypotheses throughout the investigation.   Incoming information can suggest interesting new possibilities, but only if you’re in a suitably “suggestible” state of mind.   

Actively investigate multiple hypotheses

At any given time you should keep a number of hypotheses “in play”.   In hypothesis testing, i.e. seeking new information, you should seek information which discriminates among hypotheses, i.e. which will be “telling” in relation to multiple hypotheses at once. 

Seek disconfirming evidence      

Instead of trying to prove that some hypothesis is correct, you should be trying to prove that it is false.   As the philosopher Karl Popper famously observed, the best hypotheses are those that survive numerous attempts at refutation.  

Ideally, you should seek to disconfirm multiple hypotheses at the same time.   This can be easier if your hypothesis set is hierarchically organised, allowing you to seek evidence knocking out whole groups of hypotheses at a time.  

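
A hierarchically organised hypothesis set can be sketched as a tree in which disconfirming a parent node rules out all of its sub-hypotheses at once.  The groupings and names below are invented for illustration; the `disconfirm` helper is an assumption of the sketch, not part of any published method.

```python
# Sketch: a two-level hypothesis set. Disconfirming a top-level group
# prunes every sub-hypothesis under it in one step. Invented example.

tree = {
    "conspiracy": ["mafia", "foreign government", "domestic agency"],
    "lone assassin": ["planned alone", "opportunistic"],
}

def disconfirm(tree, node):
    """Remove a hypothesis group and every sub-hypothesis under it.
    Returns the set of hypotheses ruled out."""
    ruled_out = {node}
    ruled_out.update(tree.pop(node, []))  # children go with the parent
    return ruled_out

ruled = disconfirm(tree, "conspiracy")
print(sorted(tree))   # ['lone assassin']
print(len(ruled))     # 4 hypotheses eliminated by one piece of evidence
```

One telling item of evidence against the parent does the work of several items aimed at the leaves individually, which is why hierarchical organisation makes disconfirmation cheaper.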

Structured methodologies

Some methodologies have been developed to help with hypothesis investigation.  These methodologies have some important advantages over proceeding in an “intuitive” or spontaneous fashion. 

  • They are designed to help us avoid the traps, and do so by building in, to some extent, the general guidelines above.
  • They provide distinctive external representations which help us organize and comprehend the hypothesis sets and the evidence.   These external representations reduce the cognitive load involved in keeping lots of information related in complex ways in our heads.

Some structured methodologies are:

  • Analysis of Competing Hypotheses (Heuer, 1999), designed especially for intelligence analysis
  • Hypothesis Mapping
  • Root Cause Analysis
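
To convey the kind of external structure these methods impose, here is a toy ACH-style consistency matrix in Python.  The scoring convention (+1 consistent, 0 neutral, -1 inconsistent) and ranking by inconsistent evidence follow the spirit of Heuer’s method, but the evidence items, hypotheses, and the `inconsistency` helper are illustrative assumptions.

```python
# Toy ACH-style matrix: rows are evidence items, columns are hypotheses.
# +1 = evidence consistent with the hypothesis, -1 = inconsistent,
# 0 = neutral. All values invented for illustration.

matrix = {
    "E1": {"H1": +1, "H2": +1, "H3": +1},  # consistent with everything: undiagnostic
    "E2": {"H1": +1, "H2": 0, "H3": -1},
    "E3": {"H1": -1, "H2": +1, "H3": -1},
}

def inconsistency(matrix, h):
    """Count evidence items inconsistent with hypothesis h.
    ACH focuses on inconsistent evidence, since consistent evidence
    often fails to discriminate (see E1 above)."""
    return sum(1 for scores in matrix.values() if scores[h] < 0)

# The least-inconsistent hypothesis survives best.
ranking = sorted(["H1", "H2", "H3"], key=lambda h: inconsistency(matrix, h))
print(ranking[0])  # H2: no inconsistent evidence
```

The matrix makes visible at a glance which evidence is diagnostic and which hypothesis is accumulating problems, which is exactly the external-representation benefit described above.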

Read Full Post »

“As I have said many times, it is simple, but not easy.” – Warren Buffett.

Buffett is of course talking about investment, but the same seems to me to be true of mapping (whether of the decision, argument or hypothesis variants).

The principles are simple enough.  What for example could be simpler to state and understand than the Rabbit Rule – and yet it is so profound, and has such power.

Mapping is not easy, in large part, because it is just a visual discipline for clarifying our thinking. And clarifying our thinking is not easy, even with visual discipline.

Read Full Post »

For quite a few years now Austhink Consulting has run a 3-day advanced argument mapping workshop using the JFK assassination as a case study.  Specifically, we used the trial arguments by Jim Garrison, as presented in the Oliver Stone movie JFK, that there must have been a conspiracy.  

We remained officially agnostic about the conclusion (i.e. whether there was or was not a conspiracy) though we leaned towards the position that the core arguments for conspiracy, while fascinating and good grist for an argument mapping workshop, are problematic.  However it does take a very close analysis, of the kind made possible by rigorous argument mapping, to see clearly where and why they fail.   

My Austhink colleague Paul Monk, who is a bit of a JFK buff and conspiracy sceptic, recently prepared a high-level hypothesis map providing the big picture – the major hypotheses (conspiracy, lone assassin, lone assassin plus accident), variants of those hypotheses, and an indication of the main arguments (a) for conspiracy, and (b) for Lee Harvey Oswald as the main assassin.

An image of the map showing first three levels only:


Download a pdf file of the whole map. 

As with any complex map, the best way to view it is from within bCisive, where you can use zoom and hide/show and layout facilities to improve “viewability”.  

Download the bCisive file.

Read Full Post »

Print-friendly version

Hypothesis mapping (HM) is diagramming the thinking involved in hypothesis investigation. Roughly speaking, in HM we draw “box and arrows” diagrams linking our main question (e.g., “Who killed JFK?”) with hypotheses, items of evidence, supporting arguments, etc.


For a more comprehensive version of this example see Megacryometeors Original (pdf).


Hypothesis investigation (HI) is determining which hypothesis is “most true” (or most likely to be true) in a given situation.  It includes generating adequate hypothesis sets, hypothesis evaluation (assessing relative plausibility of hypotheses given the evidence), and hypothesis testing (determining what evidence to obtain in order to conduct proper evaluation).

HM is an aid to diagnostic judgment.  In a simple tripartite classification of judgments, diagnostic judgment is addressing the question “What is going on?” (or “What will be going on?”)  Diagnostic processes attempt to ascertain “how things are” based on available or obtainable evidence.   (More questions which help convey what diagnostic judgement is for: What is happening? What is the problem?  What is the cause? What are they thinking? What is their strategy?)

HM makes the thinking involved in HI visual, and is thus able to exploit the massive processing power of our visual systems.

HM imposes structure on the thinking involved in HI, by requiring that information be classified and positioned on the map.

HM, when done properly, imposes discipline on the HI process.  There are rules or guidelines to be followed; there is expertise to be acquired.  HM can be done badly or it can be done well.  Doing it well requires understanding and observing the rules.

Hypothesis mapping, when done competently, promises a number of advantages relative to typical, ad hoc or informal approaches to HI.  Most importantly, it promises to improve one’s “hit rate” in HI, i.e. help you be more right more often in the conclusions you draw about what is going on.  It also aids in making the HI process more efficient and rigorous, sharing the thinking behind HI within a team, and making conclusions more defensible and accountable.

HM is a general purpose method.  It can be used in just about any domain – medicine, engineering, science, business, etc. etc.

However HM is particularly relevant to intelligence analysis.  HM should be seen as a new addition to the intelligence analyst’s toolkit.

As such, HM is an alternative to the well-known Analysis of Competing Hypotheses (ACH) method.  Of course somebody might want to use both methods, HM for some problems and ACH for others, or as two different frameworks for approaching the same problem.

HM and ACH have complementary strengths and weaknesses.  Fundamentally, HM is based on hierarchical structure, while ACH is based on matrix or table structure.  Both share the insight that bringing rigor to thinking about hypotheses requires conforming that thinking to explicit external (“outside the mind”) structures.  Any given type of structure will have certain advantages, but also certain costs.  The expert practitioner will be able to use the most appropriate tool for the task, with awareness and a deep understanding of the strengths and weaknesses of the tool.

Arguably, some of the advantages of HM over ACH – particularly its more intuitive character, and more attractive visualisation – will lead to HM displacing ACH as the default tool for HI in intelligence.

HM, when done in a sophisticated fashion, involves the detailed articulation and assessment of arguments, and so draws on argument mapping.  In particular, HM cannot be done with full rigor without proper appreciation of the role of co-premises in argument structures.

The best way to do HM is to use software designed to support HM activities.  bCisive 2 is currently the leading software for HM (as well as decision mapping and argument mapping).

Austhink Consulting, which has been pioneering hypothesis mapping, provides training, facilitation and consulting services in HM.

See also:

What is Argument Mapping?
What is Decision Mapping?

Read Full Post »

Slides from a presentation at an intelligence & security seminar in Canberra last week.

Thanks to Brett Peppler for getting me the gig.

Read Full Post »