
Archive for the ‘ACH’ Category

While in Northern Virginia last week for a workshop on critical thinking for intelligence analysts, I was also able to do a couple of presentations at The Mitre Corporation, a large US federally-funded R&D organisation.  The audience included Mitre researchers and other folks from the US intelligence community.  Thanks to Steve Rieber and Mark Brown for organising.

Here are the abstracts:

Deliberative Aggregators for Intelligence

It is widely appreciated that under appropriate circumstances “crowds” can be remarkably wise.  A practical challenge is always how to extract that wisdom.  One approach is to set up some kind of market.  Prediction markets are an increasingly familiar example.  Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.

An alternative is a class of systems which can be called “deliberative aggregators.” These are virtual discussion forums, offering standard benefits such as remote & asynchronous participation and the ability to involve large numbers of participants.  Distinctively, however, they have some kind of mechanism for automatically aggregating discrete individual viewpoints into a collective view.

YourView (www.yourview.org.au) is one example of a deliberative aggregator.  On YourView, issues are raised; participants can vote on an issue, make comments and replies, and rate comments and replies.  Through this activity, participants earn credibility scores, measuring the extent to which they exhibit epistemic virtues such as open-mindedness, cogency, etc.  YourView then uses these credibility scores to determine the collective wisdom of the participants on the issue.
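A minimal sketch, in Python, of the general idea of credibility-weighted aggregation; the scores, vote format and weighting scheme here are illustrative assumptions of mine, not YourView’s actual algorithm.

```python
# Minimal sketch of credibility-weighted vote aggregation (illustrative only;
# the credibility scores and the weighting scheme are assumptions, not YourView's).

def weighted_verdict(votes):
    """votes: list of (vote, credibility) pairs, where vote is 'yes' or 'no'."""
    totals = {"yes": 0.0, "no": 0.0}
    for vote, credibility in votes:
        totals[vote] += credibility
    total_weight = sum(totals.values())
    if total_weight == 0:
        return None, 0.0
    collective = max(totals, key=totals.get)
    return collective, totals[collective] / total_weight

votes = [("yes", 8.5), ("no", 2.0), ("yes", 5.0), ("no", 7.5)]
print(weighted_verdict(votes))  # ('yes', 0.5869...): 'yes' carries more total credibility
```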

Although YourView is relatively new and has not yet been used in an intelligence context, I will discuss a number of issues pertaining to such potential usage, including: how YourView can support deliberation over complex issue clusters (issues, sub-issues, etc.); and how it can handle open-ended questions, such as “Who is the likely successor to X as leader of country Y?”.

Hypothesis Mapping as an Alternative to Analysis of Competing Hypotheses

The Analysis of Competing Hypotheses (ACH) is the leading structured analytic method for hypothesis testing and evaluation.  Although ACH has some obvious virtues, these are outweighed by a range of theoretical and practical drawbacks, which may partly explain why it is taught far more often than it is actually used.  In the first part of this presentation I will briefly review the most important of these problems.

One reason for ACH’s “popularity” has been the lack of any serious alternative.  That situation is changing with the emergence of hypothesis mapping (HM).  HM is an extension of argument mapping to handle abductive reasoning, i.e. reasoning involving selection of the best explanatory hypothesis with regard to a range of available or potential evidence.  Hypothesis mapping is a software-supported method for laying out complex hypothesis sets and marshalling the evidence and arguments in relation to these hypotheses.

The second part of the presentation will provide a quick introduction to hypothesis mapping using an intelligence-type example, and review some of its relative strengths and weaknesses.



Draft of a section of a guide I’m working on.  Feedback welcome. 

Hypothesis investigation (short for “hypothesis-based investigation”) is simply attempting to determine “what is going on” in some situation by assessing various hypotheses or “guesses”.  The goal is to determine which hypothesis is most likely to be true. 

Hypothesis investigation can concern:

  • Factual situations – e.g. what are current Saudi oil reserves?
  • Causes – e.g. what killed the dinosaurs?
  • Functions or roles – e.g. what was the Antikythera mechanism for?
  • Future events – e.g. how will the economy be affected by Peak Oil?
  • States of mind – e.g. what are the enemy planning to do?
  • Perpetrators – e.g. who murdered Professor Plum?

Most investigation is to some extent hypothesis-based.  The exception is situations where the outcome is pre-determined in some way (e.g., a political show trial) and the point of the investigation is simply to amass evidence supporting that determination. 

A related, though subtly different, notion is that of “hypothesis driven investigation” (Rasiel, 1999), in which a single hypothesis is selected relatively early in the process, and most effort is then devoted to substantiating this hypothesis.  It is hypothesis-based investigation with all attention focused on one guess, at least until one is forced to reject it and consider another.

Hypothesis investigation comprises three main activities, which in practice form a cycle (sketched below):

  • Hypothesis generation – coming up with hypotheses;
  • Hypothesis evaluation – assessing relative plausibility of hypotheses given the available evidence; and
  • Hypothesis testing – seeking further evidence.
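A minimal Python sketch of how these activities fit together as a cycle; the function names, arguments and stopping rule are illustrative assumptions, not a prescribed procedure.

```python
# Illustrative skeleton of the hypothesis-investigation cycle.
# generate / evaluate / test stand in for whatever methods (intuitive or
# structured) the investigator actually uses; the names are assumptions.

def investigate(question, generate, evaluate, test, max_rounds=5):
    evidence = []
    hypotheses = generate(question, evidence)      # hypothesis generation
    scores = {}
    for _ in range(max_rounds):
        scores = evaluate(hypotheses, evidence)    # hypothesis evaluation
        new_items = test(hypotheses, evidence)     # hypothesis testing
        if not new_items:                          # nothing useful left to seek
            break
        evidence.extend(new_items)
        hypotheses = generate(question, evidence)  # keep canvassing new hypotheses
    return max(scores, key=scores.get) if scores else None
```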

Traps in Hypothesis Investigation

Hypothesis investigation fails, at its simplest, when we accept (take as true) the wrong hypothesis.  This can have dismal consequences if costly actions are then taken.  Hypothesis investigation also fails when

  • there is misplaced or excessive confidence in a hypothesis (even if it happens to be correct);
  •  no conclusion is reached, when more careful investigation might have revealed that one hypothesis was most plausible. 

There are three main traps leading to these failures.

Tunnel vision

Not considering the full range of reasonable hypotheses.   Lots of effort is put into investigating one or a few hypotheses, usually obvious ones, while other possibilities are not considered at all.  All too often one of those others is in fact the right one. 

Abusing the evidence

Here the evidence already at hand is not evaluated properly, leading to erroneous assessments of the plausibility of hypotheses.

A particular item of evidence might be regarded as stronger or more significant than it really is, especially if it appears to support your preferred hypothesis.  Conversely, a “negative” piece of evidence – one that directly undercuts your preferred hypothesis, or appears to strongly support another – is regarded as weak or worthless.    

Further, the whole body of evidence bearing upon a hypothesis might be mis-rated.  A few scraps of flimsy evidence might be taken as collectively amounting to a strong case.

Looking in the wrong places

When seeking additional evidence, you instinctively look for information that is in fact useless, or at least not very helpful, for determining the truth.

In particular we are prone to “confirmation bias,” which is seeking information that would lend weight to our favoured hypothesis.  We tend to think that by accumulating lots of such supporting evidence, we’re rigorously testing the hypothesis.  But this is a classic mistake. We need to know not only that there’s lots of evidence consistent with our favoured hypothesis, but also that there is evidence inconsistent with alternatives.   You need to seek the right kind of evidence in relation to your whole hypothesis set, rather than just lots of evidence consistent with one hypothesis.  

This can have two unfortunate consequences.  The search may be

  • Ineffective – you never find evidence which could have very strongly ruled one or more hypotheses “in” or “out”. 
  • Inefficient – the hypothesis testing process may take much more time and resources than it really should have. 

We fall for these traps because of basic facts of human psychology, hard-wired “features” of our thinking tracing back to our evolutionary origins as hunter-gatherers in small tribal units: 

  • We dislike disorder, confusion and uncertainty.  Our brains strive to find the simple pattern that makes sense of a complex or noisy reality. 
  • We don’t like changing our minds.  We find it easier to stick with our current opinion than to upend things and adopt a new view.  Further, we have an undue preference for hypotheses that are consistent with our general background beliefs, and so don’t force us to question or modify those beliefs.
  • We become emotionally engaged in the issues, and build affection for one hypothesis and loathing for others.   Hypothesis investigation becomes a matter of protecting one’s young rather than culling the pack (Chamberlin, 1965).
  • Social pressure.  We become publicly committed to a position, and feel that changing our minds would mean losing face. 

And of course we are frequently under time pressure, exacerbating the above tendencies.    

General Guidelines for Good Hypothesis Investigation

Canvass a wide range of hypotheses

Our natural tendency is to grab hold of the first plausible hypothesis that comes to mind and start shaking it hard.  This should be resisted.  From the outset you should canvass as wide a range of hypotheses as you reasonably can.  It is impossible to canvass all hypotheses and absurd to even try (Maybe 9/11 was the work of the Jasper County Beekeepers!).   But you can and should keep in mind a broad selection of hypotheses, including at least some “long shots.”   In generating this hypothesis set, diversity is at least as important as quantity.

You should continue seeking additional hypotheses throughout the investigation.   Incoming information can suggest interesting new possibilities, but only if you’re in a suitably “suggestible” state of mind.   

Actively investigate multiple hypotheses

At any given time you should keep a number of hypotheses “in play”.  In hypothesis testing, i.e. seeking new information, you should seek information which discriminates among hypotheses – information which will be “telling” in relation to multiple hypotheses at once.
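As a rough illustration of what “telling” information looks like, one could ask, for each candidate item of evidence, how much its expected bearing varies across the hypotheses in play. The scoring rule and the example ratings below are illustrative assumptions only, not part of any established method.

```python
# Illustrative heuristic: an item of evidence is more "telling" the more its
# expected bearing differs across the hypotheses currently in play.
# Ratings: 'C' consistent, 'N' neutral, 'I' inconsistent (illustrative labels).

def discrimination(expected_ratings):
    """expected_ratings maps each hypothesis to the rating this candidate item
    would be expected to receive; more distinct ratings means it discriminates
    more, and 1 means it tells us nothing either way."""
    return len(set(expected_ratings.values()))

candidates = {
    "Mustard's fingerprints on the candlestick": {"Mustard did it": "C", "Scarlett did it": "I", "Plum killed himself": "I"},
    "Plum was heavily in debt":                  {"Mustard did it": "N", "Scarlett did it": "N", "Plum killed himself": "C"},
    "Plum was last seen alive at dinner":        {"Mustard did it": "N", "Scarlett did it": "N", "Plum killed himself": "N"},
}
for item, ratings in candidates.items():
    print(discrimination(ratings), "-", item)
```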

Seek disconfirming evidence      


Instead of trying to prove that some hypothesis is correct, you should be trying to prove that it is false.   As philosopher Karl Popper famously observed, the best hypotheses are those that survive numerous attempts at refutation.  

Ideally, you should seek to disconfirm multiple hypotheses at the same time.   This can be easier if your hypothesis set is hierarchically organised, allowing you to seek evidence knocking out whole groups of hypotheses at a time.

Use structured methodologies

Some methodologies have been developed to help with hypothesis investigation.  These methodologies have some important advantages over proceeding in an “intuitive” or spontaneous fashion:

  • They are designed to help us avoid the traps, and do so by building in, to some extent, the general guidelines above.
  • They provide distinctive external representations which help us organize and comprehend the hypothesis sets and the evidence.   These external representations reduce the cognitive load involved in keeping lots of information related in complex ways in our heads.

Some structured methodologies are:

  • Analysis of Competing Hypotheses (Heuer, 1999), designed especially for intelligence analysis
  • Hypothesis Mapping
  • Root Cause Analysis



Hypothesis mapping (HM) is diagramming the thinking involved in hypothesis investigation. Roughly speaking, in HM we draw “box and arrows” diagrams linking our main question (e.g., “Who killed JFK?”) with hypotheses, items of evidence, supporting arguments, etc.

[Image: example hypothesis map on megacryometeors]

For a more comprehensive version of this example see Megacryometeors Original (pdf).
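Structurally, a map like the one above can be thought of as a tree of typed nodes hanging off the main question. Here is a minimal Python sketch under my own naming assumptions; bCisive and similar tools use a much richer model, and the example content is purely illustrative.

```python
# Minimal sketch of a hypothesis map as a tree of typed nodes.
# Node kinds, names and the example content are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                 # 'question', 'hypothesis', 'evidence', 'argument'
    text: str
    children: list = field(default_factory=list)

    def add(self, kind, text):
        child = Node(kind, text)
        self.children.append(child)
        return child

question = Node("question", "What are megacryometeors?")
h1 = question.add("hypothesis", "Ice shed from aircraft")
h2 = question.add("hypothesis", "An unrecognised atmospheric phenomenon")
h1.add("evidence", "Some falls have occurred near busy flight paths")  # illustrative
```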

 

Hypothesis investigation (HI) is determining which hypothesis is “most true” (or most likely to be true) in a given situation.  It includes generating adequate hypothesis sets, hypothesis evaluation (assessing relative plausibility of hypotheses given the evidence), and hypothesis testing (determining what evidence to obtain in order to conduct proper evaluation).

HM is an aid to diagnostic judgement.  In a simple tripartite classification of judgements, diagnostic judgement is addressing the question “What is going on?” (or “What will be going on?”)  Diagnostic processes attempt to ascertain “how things are” based on available or obtainable evidence.   (More questions which help convey what diagnostic judgement is for: What is happening? What is the problem?  What is the cause? What are they thinking? What is their strategy?)

HM makes the thinking involved in HI visual, and is thus able to exploit the massive processing power of our visual systems.

HM imposes structure on the thinking involved in HI, by requiring that information be classified and positioned on the map.

HM, when done properly, imposes discipline on the HI process.  There are rules or guidelines to be followed; there is expertise to be acquired.  HM can be done badly or it can be done well.  Doing it well requires understanding and observing the rules.

Hypothesis mapping, when done competently, promises a number of advantages relative to typical, ad hoc or informal approaches to HI.  Most importantly, it promises to improve one’s “hit rate” in HI, i.e. help you be more right more often in the conclusions you draw about what is going on.  It also aids in making the HI process more efficient and rigorous, sharing the thinking behind HI within a team, and making conclusions more defensible and accountable.

HM is a general purpose method.  It can be used in just about any domain – medicine, engineering, science, business, etc. etc.

However HM is particularly relevant to intelligence analysis.  HM should be seen as a new addition to the intelligence analyst’s toolkit.

As such, HM is an alternative to the well-known Analysis of Competing Hypotheses (ACH) method.  Of course somebody might want to use both methods, HM for some problems and ACH for others, or as two different frameworks for approaching the same problem.

HM and ACH have complementary strengths and weaknesses.  Fundamentally, HM is based on hierarchical structure, while ACH is based on matrix or table structure.  Both share the insight that bringing rigor to thinking about hypotheses requires conforming that thinking to explicit external (“outside the mind”) structures.  Any given type of structure will have certain advantages, but also certain costs.  The expert practitioner will be able to use the most appropriate tool for the task, with awareness and a deep understanding of the strengths and weaknesses of that tool.

Arguably, some of the advantages of HM over ACH – particularly its more intuitive character, and more attractive visualisation – will lead to HM displacing ACH as the default tool for HI in intelligence.

HM, when done in a sophisticated fashion, involves the detailed articulation and assessment of arguments, and so draws on argument mapping.  In particular, HM cannot be done with full rigor without proper appreciation of the role of co-premises in argument structures.

The best way to do HM is to use software designed to support HM activities.  bCisive 2 is currently the leading software for HM (as well as decision mapping and argument mapping).

Austhink Consulting, which has been pioneering hypothesis mapping, provides training, facilitation and consulting services in HM.

See also:

What is Argument Mapping?
What is Decision Mapping?


Slides from a presentation at an intelligence & security seminar in Canberra last week.

Thanks to Brett Peppler for getting me the gig.


A condensed version of this has been published as Can we do better than ACH? AIPIO News, Issue 55, December 2008, pp.4-5.

The “Analysis of Competing Hypotheses” method, or ACH, is one of the most important tools on the intelligence analyst’s bench. It is a procedure for determining which of a range of hypotheses is most likely to be true, given the available evidence. At its heart is a matrix, wherein hypotheses are listed across the top, and items of evidence are listed down the left side. There is then a square or cell in the matrix corresponding to every hypothesis/evidence pair, and in that square one indicates whether the item of evidence is consistent, inconsistent or neutral with respect to the hypothesis (a minimal sketch of such a matrix is given after the references below). For more information on ACH, see

  • the classic chapter by Richards Heuer
  • this brief overview (pdf file) of ACH compared with argument mapping (AM); some of the points made below are presaged in the overview.
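To make the matrix format concrete, here is a minimal sketch of an ACH-style matrix in Python. The rating labels, the example rows and the simple inconsistency count follow the spirit of Heuer’s procedure rather than reproducing it; treat the details as illustrative assumptions.

```python
# Minimal sketch of an ACH-style matrix: hypotheses across the top, items of
# evidence down the side, one coarse rating per cell.
# 'C' consistent, 'N' neutral, 'I' inconsistent (illustrative labels and rows).

hypotheses = ["drunk-driving accident", "assassination by MI5"]
matrix = {
    "driver had been drinking":      {"drunk-driving accident": "C", "assassination by MI5": "N"},
    "bullet hole in the limousine":  {"drunk-driving accident": "I", "assassination by MI5": "C"},
}

# One common way of reading the matrix: count inconsistencies down each
# column; hypotheses with more inconsistent evidence are less plausible.
for h in hypotheses:
    inconsistent = sum(1 for ratings in matrix.values() if ratings[h] == "I")
    print(f"{h}: {inconsistent} inconsistent item(s)")
```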

ACH is based on some fundamental insights:

  • The network of relationships between items of evidence and hypothesis is “many-many”. That is, one piece of evidence can bear, one way or another, on many hypotheses, and an hypothesis is generally considered in the light of a number of pieces of evidence. The ACH matrix is an obvious and natural way to accommodate this web of relationships.
  • At least in many situations, structured thinking techniques yield better results than informal or intuitive “pondering”. ACH imposes a strong structure on hypothesis testing.
  • Structured techniques are even more effective when making use of suitable external (i.e., outside the head or “on paper”) representations. Thus long division in the head is hard; on paper, following a standard procedure, it is easy. The ACH matrix is an external representation aiding hypothesis testing.
  • When making informal or qualitative judgements, it is usually better to use coarse schemes such as consistent/neutral/inconsistent rather than more elaborate and precise schemes such as numerical consistency ratings.

Nevertheless, in my experience using ACH, difficulties of various sorts rapidly arise; and as the expenditures of mental effort involved in struggling with those difficulties mount up, alternatives, such as “muddling through” without the use of a tool such as ACH, or using some other tool such as argument mapping, look increasingly attractive.

(Admittedly, I’ve never tried using ACH on a real, “industrial strength” problem of the kind that, presumably, intelligence analysts are engaging with on a daily or at least weekly basis. Perhaps the difficulties don’t arise so much in real cases; or perhaps they do arise, but are more than compensated for by the various benefits of using the ACH method, given the complexity of real cases. Perhaps; but I doubt it.)

Further, I’ve heard that while some intelligence analysts use the ACH technique regularly and perhaps even enthusiastically, the majority tend not to use it unless they really have to. It seems that their perception is that ACH is not worth the effort. Presumably this can be explained in part in terms of the various difficulties discussed here.

(1) Too many judgements to make

ACH, at least in a strong form, requires that you enter a judgement of consistency for every evidence item/hypothesis pair; i.e., you have to fill in every cell in the matrix. This is both a great strength of ACH, and a serious problem. It is a strength because it makes the process of comparing hypotheses against evidence exhaustive, thereby helping ensure that the evidential weight of all the items of evidence is properly accounted for.

The trouble is that the number of separate judgements becomes very large; for example, with 20 items of evidence and 5 hypotheses, you’d have to make 100 distinct judgements, each taking some modicum of conscious mental effort. Ugh! To make matters worse, many of these judgements return a “nil” verdict. In other words, in many cases, after careful consideration you conclude that item of evidence e is neutral (“neither here nor there”) with respect to hypothesis h.

So for example, suppose you are investigating the death of Princess Diana, and you are considering hypotheses including drunk-driving accident and assassination by MI5; and that one piece of evidence is that the driver had been drinking prior to the crash. This is clearly consistent with the drunk-driving hypothesis. ACH requires you to also consider whether this item of evidence is consistent, neutral or inconsistent with the assassination hypothesis. So you consider it, and you conclude that it is neutral; it really has nothing to do with that hypothesis.

In such a case, the mental effort of making the judgement seems to have yielded no immediate progress towards the goal of assessing the relative merits of the hypotheses. Arguably, that effort has in fact yielded some value in the context of the overall process, value which becomes apparent when you look across a row (to assess diagnosticity of evidence) or down a column (to assess the plausibility of an hypothesis). But it takes serious commitment to crank through dozens of such boring judgements in pursuit of some result at the end of the process. When in the midst of the ACH procedure, being forced to consider every e in relation to every h, only to conclude that it is (in and of itself) irrelevant, is a dispiriting activity; it feels like “makework” demanded arbitrarily by a tedious and laborious process.

(2) No e is an island

Superficially, ACH treats an item of evidence as consistent or inconsistent on its own with each of the hypotheses. Thus it seems to make sense to ask whether [the driver’s drinking before the crash] is consistent with the hypothesis that [the death of Diana was a drunk-driving accident]. However this is an illusion. In fact, and always, the evidential relationship between one proposition and another is mediated by other propositions. Put another way, an item of evidence is only consistent or otherwise with an hypothesis in the context of other relevant pieces of information or assertions. Thus the driver’s drinking before the crash is only consistent with the drunk-driving accident hypothesis given the general background knowledge that driving under the influence of alcohol increases the chances of an accident. If this were false – if drinking improved driving – then the driver’s drinking would be inconsistent with the drunk-driving hypothesis.

In argument mapping terms, we would say that every reason or objection is actually a multi-premise structure. In the philosophy of science, we would say that observations only confirm or disconfirm hypotheses in the context of auxiliary hypotheses. Sometimes we call these additional propositions assumptions. However we cast the point, the fundamental problem is that ACH’s way of structuring the evidence, hypotheses and their relationships leaves something important out of the picture. The kernel of the problem is the matrix representation at the heart of ACH; it naturally pairs individual items of evidence with individual hypotheses, and so is ill-suited to handling the actual structure of evidential relations even in the simplest case.

Does this matter? By necessity, every graphical or structural display of the web of evidential relations must select and simplify. The question is whether a particular display is, on balance, useful. Does the display enable us to think through the issues more effectively than using our default, informal and “in the head” methods? ACH enthusiasts of course think that the tradeoff is a good one. However, I think that while choosing a way to organise evidence and hypotheses which treats items of evidence discretely and independently of other information offers short-term gains, it does so at the cost of problems further down the track.

One such situation is where an additional item of information comes in, which has the effect of undermining a co-premise/auxiliary hypothesis/assumption. To illustrate: consider the question of what caused the Permian Extinction. One hypothesis is

h: it was a massive meteor collision.

A relevant piece of evidence is that

e1: there is no known meteor impact crater of the right age.

This appears to be inconsistent with the meteor hypothesis. Later, you find out that

e2: it is possible for lava to flow back up through the hole created in a large meteor impact, erasing the impact crater.

Now, the question is, how to accommodate this new piece of information in an ACH matrix? It seems to make little sense to treat it as a new, independent piece of evidence, against which each hypothesis can be tested. So the only option left is to leave it out of the matrix, but to change the “inconsistent” rating of e1 wrt h to neutral or consistent. However without e2, such a rating is mysterious. It seems e2 has to be recorded somewhere, but the ACH matrix offers no space for it.

A better treatment of this situation is to recognise that e1 is inconsistent with h only given the natural assumption that

a: A large meteor impact would leave a crater.

e1 alone is not inconsistent with h; rather, it is the bundle [e1, a] which is inconsistent with h; or alternatively, e1 is inconsistent with [h given a]. e2 is then a challenge to a.

However we express this verbally, the fundamental problem is that you can’t adequately represent, and make sense of, what is going on here in a basic ACH format. (You can handle this sort of situation quite easily in an argument mapping format, but that is another topic.)
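The point that e1 bears on h only via the assumption a, and that e2 works by challenging a, is easy to express in a slightly richer structure than a matrix. A minimal Python sketch; the structure, the names, and the rule that an undercut assumption neutralises the evidence are simplifying assumptions of mine, not any real tool’s data model.

```python
# Minimal sketch of evidence whose bearing on a hypothesis is mediated by an
# assumption (co-premise), and of new information undercutting that assumption.
# Treating any challenge as fully neutralising the evidence is a simplification.
from dataclasses import dataclass, field

@dataclass
class EvidentialLink:
    evidence: str                      # e1
    assumption: str                    # a: the co-premise giving e1 its force
    bearing: str                       # 'inconsistent' or 'consistent' with h
    challenges: list = field(default_factory=list)   # e.g. e2, aimed at a

    def effective_bearing(self):
        return "neutral" if self.challenges else self.bearing

link = EvidentialLink(
    evidence="no known impact crater of the right age (e1)",
    assumption="a large meteor impact would leave a crater (a)",
    bearing="inconsistent",
)
print(link.effective_bearing())   # 'inconsistent'
link.challenges.append("lava can flow back up through the impact hole, erasing the crater (e2)")
print(link.effective_bearing())   # 'neutral': e2 undercuts a, so e1 loses its force against h
```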

(3) Flat structure of hypotheses

Another major problem with ACH is that it cannot handle the hierarchical structure of hypotheses (or it can do so at best only in an ungainly and unilluminating manner).

Hypotheses can be more or less general or abstract, and a general hypothesis can have sub-hypotheses. So in the Princess Diana case, one general hypothesis is assassination and another general one is accident. The general assassination hypothesis can have sub-hypotheses such as assassination by MI5, assassination by mafia, etc.

This is important because distinct items of evidence can count for or against hypotheses at various levels. Thus a bullet hole in the limousine would count in favour of any assassination hypothesis (or at least many such hypotheses), while an internal MI5 document might count for or against only the MI5 sub-hypothesis.

The classic ACH matrix asks for all hypotheses to be entered individually across the top row, and then to be compared against all pieces of evidence. But in the case of an hierarchical structure of hypotheses, this will result in an absurd duplication of effort, in which for example a piece of evidence bearing on all assassination hypotheses is compared not only against the general assassination hypothesis but also against all its sub-cases.
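One way to avoid that duplication is to attach each item of evidence at the level of generality it actually bears on, within a hierarchical hypothesis set. A minimal Python sketch; the tree structure and the lookup rule are illustrative assumptions of mine, not a feature of ACH or of any particular tool.

```python
# Minimal sketch of a hierarchical hypothesis set, with evidence attached at
# the level it bears on: general items once, sub-case items at the sub-case.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    text: str
    sub: list = field(default_factory=list)        # sub-hypotheses
    evidence: list = field(default_factory=list)   # items bearing at this level

def evidence_bearing_on(path):
    """path: hypotheses from the most general down to the sub-hypothesis of
    interest; evidence attached anywhere on the path bears on that sub-case."""
    return [e for h in path for e in h.evidence]

assassination = Hypothesis("assassination")
mi5 = Hypothesis("assassination by MI5")
mafia = Hypothesis("assassination by the mafia")
assassination.sub.extend([mi5, mafia])

assassination.evidence.append("bullet hole in the limousine")  # bears on all assassination sub-cases
mi5.evidence.append("internal MI5 document")                   # bears only on the MI5 sub-case

print(evidence_bearing_on([assassination, mi5]))
# ['bullet hole in the limousine', 'internal MI5 document']
```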

(4) Subordinate deliberation

By its very nature, being based on a matrix structure, the ACH approach does not consider what is “behind” or “underneath” any given piece of evidence. From a piece of evidence, it looks “forwards” or “upwards” to its bearing on the hypotheses under consideration. However the weight of a piece of evidence wrt an hypothesis depends on information bearing upon that piece of evidence. e may be quite (in)consistent with h, but how seriously we take this (in)consistency depends on how seriously we take e itself (its plausibility or credibility). This can only be evaluated in the light of further information subordinate to e. If you like, think of e as itself an hypothesis, in relation to which there is supporting or opposing evidence. In the standard ACH framework there is no way to represent or display this layered structure. (Again, the ability to handle such structure is a strength of argument mapping.)
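The layered structure described here – e treated as a mini-hypothesis with its own supporting and opposing information – can be sketched recursively. The scoring rule below is a deliberately naive illustration of how credibility might be passed up, not a claim about how any tool actually computes it.

```python
# Minimal sketch of subordinate deliberation: an item of evidence is itself
# supported or opposed by further items, and its credibility affects the
# weight it carries against a hypothesis. The scoring rule is illustrative.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    text: str
    supporting: list = field(default_factory=list)  # items backing this evidence
    opposing: list = field(default_factory=list)    # items undermining it

    def credibility(self):
        # Naive rule: start at 0.5, nudge up per supporter, down per opposer.
        score = 0.5 + 0.1 * len(self.supporting) - 0.1 * len(self.opposing)
        return max(0.0, min(1.0, score))

e = Evidence("witness says the limousine was speeding")            # illustrative content
e.supporting.append(Evidence("CCTV footage of the approach"))
e.opposing.append(Evidence("the witness was 200 metres away, at night"))
print(e.credibility())   # 0.5: one supporter and one opposer cancel out here
```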

(5) Decontextualisation and discombobulation

We’ve seen in points 2 and 4 above that the ACH matrix does not accommodate co-premises/assumptions, or subordinate deliberation. An ACH matrix is like a sieve on the web of evidence, letting through some items and relationships but keeping out many others. Unfortunately what is left out is the context which helps make sense of the relationship of any given item of evidence to an hypothesis. Absent that context, the judgement becomes difficult to make. For a not-very-exaggerated example, consider: Is

e: David Hicks was captured in Afghanistan

consistent, inconsistent, or neutral, with respect to:

h: David Hicks was a terrorist

The proper answer is: uh….dunno…it depends. Absent any other information, you’d probably choose neutral, but this is not because e is neutral wrt h. It is only because without surrounding information it is hard to tell what the evidential value of e is.

ACH, in demanding that we make so many judgements even as it strips the context of those judgements away, is constantly asking us to engage in these sorts of mentally taxing, even discombobulating exercises. After an extended bout of ACH, I tend to feel a bit dazed and confused, and have to stave off that feeling with redoubled mental effort to see the sense of the judgements I’m making.

Summary

We might reduce my complaints about ACH to two:

  1. ACH asks us to make too many distinct judgements; and
  2. Those judgements are emaciated due to the stripping away of relevant context, of both hypotheses and evidence.

These problems are deeply related to choice of structure of external representation, i.e., to the choice of a matrix as the way to organise evidence, hypotheses, and judgements. I’m inclined to think that the use of the matrix is the fatal mistake of ACH; it is a commitment which seems obvious and natural initially, but as things unfold the limitations and problems inherent in the matrix structure come to the fore.

Much of the ACH procedure, as outlined for example by Heuer, could be retained even if the matrix structure was dispensed with in favour of some richer, more flexible format. But if you throw out the matrix, you also must throw out all those further aspects of the classic ACH which only applied if a matrix was being used. It is doubtful that what you’d have left would be worth calling ACH.

If you wanted to replace the ACH matrix, what would you use? One candidate is the argument map (of the kind you can create in, for example, the Rationale software). However while these have some strengths, for hypothesis testing they have a complementary set of weaknesses. At Austhink we are working on a new structure, one which will take and blend the best elements of both ACH and argument mapping, thus superseding them both. This new structure enables users to rapidly and intuitively assemble a (hierarchical) set of hypotheses in relation to some issue, items of evidence bearing on multiple hypotheses, assumptions, subordinate considerations, etc.
