
Archive for the ‘Austhink’ Category

Q: Can argument mapping be used in strategic planning?

A: Of course! – because strategic planning involves complex arguments, and argument mapping can help whenever you have to deal with complex arguments.

However, to move beyond that sort of trite proclamation, it is useful to have concrete examples of how argument mapping can enhance a strategic planning process.

Austhink recently provided argument mapping expertise to a major Australian organisation developing its strategic outlook for a nominated date of 2030. In order to do detailed planning, leading to major decisions such as investing many billions of dollars in human resources and equipment, it first had to develop a conception of what its “operating environment” would be in 2030 and how the organisation would be able to achieve competitive advantage in that environment. The team developing this conception had drafted a document laying it out, including seven hypotheses as to how the organisation would be able to achieve advantage, with arguments to support the hypotheses. Necessarily these hypotheses and arguments were quite abstract, intended as they were to cover a wide range of scenarios.

Parenthetically, it is worth emphasizing how difficult this task is. We all know how rapidly the world is changing in all sorts of respects (technology, geopolitics, climate etc.), and how unpredictable that change is. The more you try to say anything reasonably definite and useful about the 2030s, the more they appear to be hidden in a dense fog of uncertainty. Yet this organisation – like so many others – can’t just throw up its hands. It has to make conceptual and predictive commitments with very high stakes, for the organisation itself and indeed far beyond it.

Having developed a draft strategic conception, the organisation is now putting it through a fairly elaborate process of “stress testing”. This raises the question – how do you “put to the test” sets of arguments relating to highly abstract and intrinsically speculative propositions? Their idea, in essence, was to:

  1. Articulate the arguments with as much clarity and rigor as possible
  2. With the help of a broad selection of domain experts, in a series of workshops, identify strengths and weaknesses, including:
    • Gaps – places where key arguments are missing, or more substantiation is needed;
    • Assumptions – especially “hidden” assumptions, i.e. ones you haven’t realized you’ve been making;
    • Objections and challenges
  3. Use the findings to guide further development of the thinking
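
(For concreteness, here is a tiny, purely illustrative sketch, with invented content rather than the client’s material, and not the software we actually used, of how an argument map can be represented as data so that “gaps” in the sense above can be surfaced mechanically.)

```python
# Hypothetical sketch: an argument map as a simple tree of claims.
# Content and names are invented for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str
    kind: str = "support"          # "support" or "objection"
    children: List["Claim"] = field(default_factory=list)

    def reasons(self):
        return [c for c in self.children if c.kind == "support"]

def find_gaps(claim, path=()):
    """Yield claims with no supporting reasons (candidate gaps needing substantiation)."""
    here = path + (claim.text,)
    if not claim.reasons():
        yield here
    for child in claim.children:
        yield from find_gaps(child, here)

# Tiny illustrative map
hypothesis = Claim("The organisation can achieve advantage in 2030 by doing X", children=[
    Claim("Capability A will still matter in 2030", children=[
        Claim("Trend data suggest A's importance is growing"),
    ]),
    Claim("Competitors will be slow to adopt A"),                 # unsupported -> gap
    Claim("A may be neutralised by new technology", kind="objection"),
])

for gap in find_gaps(hypothesis):
    print(" -> ".join(gap))
```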

Developing good-quality argument maps in complex, murky territory is a challenging business. It involves getting sufficient clarity about what the issues are, and what arguments you have, and how they “hang together,” to be able to represent those issues and arguments in diagrams following the rules of argument mapping – which are really just fundamental principles of good logical thinking. It is inevitably an iterative process, with each draft resolving some matters but opening others for exploration.

In what follows, I’ll briefly recap this iterative process for just one of the seven argument maps we developed.  (Sorry that the illustrations are unreadable – this is deliberate to preserve confidentiality.)

As is typically the case, the arguments as we first encountered them were presented in standard prose:

I’ve discussed elsewhere how difficult it is to identify complex arguments in standard prose presentations, even when those arguments have been developed and written out by the sharpest of legal minds. In this case we were unsurprised to encounter the usual sorts of problems:

  • Arguments pertaining to a particular hypothesis were scattered in various places around the document and interspersed with other not-directly-related material.
  • The arguments were difficult to pin down, often because they were largely implicit.
  • The arguments were easy to misunderstand, if indeed one didn’t miss them altogether.
  • Consequently it was difficult to evaluate the arguments (i.e., judge with any confidence how effectively they supported the hypothesis).

In the first workshop with domain experts, we used real-time facilitated argument mapping with bCisive in an attempt to pin down and elaborate the main arguments, resulting in:

Many useful ideas had come out, but as you can see from the wide, flat layout, we were still struggling to find an appropriate overall structure. At this stage the map was poorly organised and missing a lot, but at least we could see more clearly what we had and how one thing supposedly related to another.

We took the maps from the first workshop away and did some reworking, relying mostly on our generic argument mapping expertise (and only a little on common sense and general knowledge of the domain). What emerged was a basic structure with more coherence, simplicity, and even elegance:

The overall structure is starting to emerge. Now we can distinguish between the higher-level (more general, abstract) arguments and their lower-level supporting arguments. This “macro” is the structural “coat hanger” on which the rest can hang. This basic structure remained stable through the remaining iterations.

Aside: this was consistent with what I think of as one of the more profound insights I’ve derived from my years of experience with argument mapping: that complex arguments have a “true” form, a form which is (a) determined by the fundamental principles of good thinking meshing with the underlying reality of the issues, and (b) uncoverable by patient reworking of the argument under the “rules” or guidelines of argument mapping.

During the second workshop, a small number of valuable additions were made to the map:

But more importantly, participants used a “grouputer” system to jot down lots of additional ideas, which we took away and sorted and integrated into another reworked version of the map:

What we can now see emerging is a richer and more articulated sense of the case bearing on the hypothesis. We can clearly see both major lines of supporting argument. We know which claims have been supported and which have not. We can see key objections or warnings (little red blobs in the graphic above). We can see numerous places where unstated assumptions are lurking.

A map like this positions us well to make a provisional judgement as to how well the hypothesis (the main contention in the map) is supported. It also helps one see the numerous things one could do to further elaborate the thinking and develop greater confidence in that judgement. From the standpoint afforded by this map, it is clear that the arguments as originally presented simply couldn’t be properly evaluated. When you have only a very fuzzy sense of what the arguments are, you can have at best only a fuzzy sense of whether they are any good. You are then more likely to be guided by prejudice, bias, habit, instinct or “conventional wisdom”.


Read Full Post »

Think of a collection of people as having a kind of collective mind.  How can you find out what that collective mind believes?

That may sound like a fanciful philosophical question, but it has very real, even urgent applications.  For example, the IPCC is a collection of hundreds of scientists, and they put out reports supposedly embodying their consensus position – i.e. what they believe not as individuals but as a body of scientists.

There are of course already various methods for determining what people believe collectively; the IPCC have their own approach.   Such methods have various strengths and weaknesses.  For example, the IPCC approach is claimed to be riddled with political conflict.

A little while back, at Austhink, we came up with an alternative approach, which worked successfully in its first application.  We have used it a number of times since with various organisations, calling it the “Wisdom of Groups” method.

Here is a write-up of the first deployment.

___________________________

A few years back, the National Centre for Education and Training on Addiction and the South Australian Department of Health and Human Services put together a 3-day “Summer School” on the topic of addictions, inequalities and their interrelationships, with a view to providing guidance for policy makers.  They said that 20% of the South Australian budget is used to deal with problems of addiction, so this is a major issue.  They hoped to come up with a kind of Position Statement, which would summarise the consensus, if any, that the group of 50 or so participants reached during the Summer School.

They contacted Austhink hoping that we’d be able to help them with one aspect of it, namely making any debate/discussion/rational deliberation more productive. So initially the idea was that live argument mapping facilitation would be used with the whole group to help them work through some issues. But it became clear that they were open to ideas about how the Position Statement would be developed, and our involvement was increased to one of (a) designing a process for developing a Position Statement representing the group consensus, and (b) helping facilitate the overall Summer School to produce that Statement.

So we suddenly found ourselves faced with a very interesting challenge, which boiled down to:

  1. how do you figure out what, if anything, 50 participants with diverse backgrounds, interests, professional specializations, ideologies etc. agree on? i.e., how do you actually come up with a draft Position Statement?
  2. how do you rationally deliberate over that draft?
  3. how do you measure the degree to which the participants do in fact agree with any given aspect of that Statement – i.e., the extent to which the resulting draft Position Statement does in fact represent the consensus of the group?

This challenge has a lot in common with problems of democratic participation, of the sort that Deliberative Polling is intended to deal with.

Our approach, in a nutshell, was this:

Phase 1: Developing a Draft Statement

The first two days were occupied mostly with presentations by a range of experts (this had already been set up by the organizers; we had to work around that). We divided the Position Statement into three categories:

  • Definitions and Values
  • Empirical Realities; and
  • Directions.

At the end of the first day, participants filled out a worksheet asking them to nominate 5 distinct propositions in the Definitions and Values category, propositions which they regarded as true and worth including in any Position Statement. On the second day, they filled out similar worksheets for Empirical Realities and Directions. Then, for each category, Paul Monk and I spent a few hours synthesizing the proposed propositions into a set of around 10 candidate statements for inclusion in the Position Statement. This involved sorting them into groups and then extracting the core proposition from each group. So this resulted in a set of about 32 statements. Note, however:

  • This process was democratic, in that it treated everyone’s contribution pretty much equally.
  • Nevertheless, none of these statements was put forward by a majority of people. It simply wasn’t clear to what extent these statements represented the consensus view of the whole group.
  • Third, and somewhat parenthetically, it is worth noting that, in most cases, it was apparent to Paul and me that most participants had only a very partial and idiosyncratic understanding of the material they had been presented with.  The synthesized statement sets, however, were (in our opinion) very good “takes” on the material.  In other words, unsurprisingly, 50 brains really are a lot better than one (in most cases).  The trouble is synthesizing the thinking of 50 brains.

Phase 2: Deliberation

Half of day three was devoted to deliberating over selected statements, using real-time argument mapping with the Reasonable software. A whole-group session introduced the participants to the approach, and made some progress on a particular issue. In another session there were two groups with separate facilitators; each chose their own issues to debate.

Phase 3: Consensus

In the final phase all participants used an online questionnaire to register their attitudes towards each of the 32 propositions. Each participant was asked, for each statement, to choose Agree/Disagree/Abstain, Include/Exclude/Abstain, and was able to offer comments. The online system automatically and immediately collated the results, producing graphical (bar chart) displays of the level of consensus.
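
(For those curious about the mechanics, the collation itself is simple arithmetic. Here is a minimal, hypothetical sketch of how such votes might be tallied into a consensus level per proposition; the field names and figures are invented, and this is not the actual system we used.)

```python
# Hypothetical sketch of collating questionnaire responses into consensus levels.
# Proposition ids, votes, and the data layout are invented for illustration.

from collections import Counter

# Each response: proposition id -> "Agree" | "Disagree" | "Abstain"
responses = [
    {"P1": "Agree", "P2": "Disagree"},
    {"P1": "Agree", "P2": "Abstain"},
    {"P1": "Disagree", "P2": "Agree"},
]

def consensus_levels(responses):
    """Return, per proposition, the share of non-abstaining voters who agreed."""
    tallies = {}
    for response in responses:
        for prop, vote in response.items():
            tallies.setdefault(prop, Counter())[vote] += 1
    levels = {}
    for prop, counts in tallies.items():
        voters = counts["Agree"] + counts["Disagree"]
        levels[prop] = counts["Agree"] / voters if voters else None
    return levels

for prop, level in sorted(consensus_levels(responses).items()):
    print(f"{prop}: {level:.0%} agreement among those expressing a view")
```

(The Include/Exclude votes can be tallied the same way, and the per-proposition levels are what the bar charts displayed.)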

In the final session, the results were reviewed. We found that there was a surprisingly high level of consensus on almost all propositions; that, in other words, the draft Position Statement did in fact represent a consensus opinion of the group. Note also that the Position Statement is accompanied by a precise specification of the extent to which the group in fact thought that each component statement was (a) true, and (b) worth including.

The level of consensus regarding the Position Statement developed through the process is particularly noteworthy in light of the fear, expressed to us prior to the Summer School, that there would be such disagreement between the major groupings of participants (roughly, the “addictions” people, and the “inequalities” people), that there would be literally nothing (or nothing worth saying) that they could agree on.

We think that this technologically augmented process (augmented in two ways: argument mapping software with projection, and the online questionnaire) could well be deployed again in a wide range of contexts in which groups get together and need to figure out what they think on a complex issue; and in particular, need to figure out what, if anything, they can agree on to form a basis for policy.

Read Full Post »

[originally posted to BlogCisive]

To a first approximation, all deliberative judgements (i.e., those that turn on more or less careful consideration of the relevant arguments) can be usefully sorted into three kinds.

These are the three Ds of judgement.

1. Decision

Decision is a matter of choosing from among options, particularly where those options are possible actions.  The question here is “What should I (we) do?”

2. Diagnosis

Diagnostic judgements concern what is going on.  The question is “What is happening?” or “What’s the situation?”  The term diagnosis has medical connotations, but here I’m widening its use to include various kinds of investigation, hypothesis testing, and problem-solving.  All diagnostic judgements involve hypotheses (conjectures) as to what is actually happening.  A good example of diagnostic judgement in this sense is assessment in intelligence analysis.

3. Debate

Debate is trying to determine the truth of some proposition by presenting the arguments for or against it.  The question is “Is it true?”

Austhink has two products – Rationale, and bCisive.  Rationale, the argument mapping tool, supports debate.  bCisive, the business decision mapping tool, has been positioned as supporting decision.  We haven’t had a tool for diagnosis, and have tended to recommend that people wanting to make diagnostic judgements use some variant of the “Analysis of Competing Hypotheses” (ACH) method.
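
(For readers unfamiliar with ACH, its core bookkeeping is a matrix of hypotheses against items of evidence, with each cell rated consistent or inconsistent; the hypothesis with the least inconsistent evidence comes out ahead. Below is a rough, illustrative sketch of that scoring step, with invented hypotheses and evidence; a real ACH analysis also weighs the credibility and diagnosticity of each item, which this sketch ignores.)

```python
# Rough illustration of the scoring step in Analysis of Competing Hypotheses (ACH).
# Hypotheses and evidence are invented; this is not a full ACH implementation.

# matrix[evidence][hypothesis] in {"C", "I", "N"}: consistent, inconsistent, neutral
matrix = {
    "E1: server logs show access at 02:00":      {"H1": "C", "H2": "I", "H3": "N"},
    "E2: badge records place staff off-site":    {"H1": "C", "H2": "C", "H3": "I"},
    "E3: malware signature matches a known kit": {"H1": "N", "H2": "I", "H3": "C"},
}

def inconsistency_scores(matrix):
    """Count inconsistent ratings per hypothesis; under ACH, fewer is better."""
    scores = {}
    for ratings in matrix.values():
        for hyp, rating in ratings.items():
            scores[hyp] = scores.get(hyp, 0) + (1 if rating == "I" else 0)
    return scores

for hyp, score in sorted(inconsistency_scores(matrix).items(), key=lambda kv: kv[1]):
    print(f"{hyp}: {score} inconsistent item(s)")
```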

However, just as argument mapping supports debate, and business decision mapping supports decision, so “hypothesis mapping,” an alternative to ACH, supports diagnosis.  Further, hypothesis mapping is quite easily handled in bCisive as it stands.

Austhink is currently working on a “Pro” version of bCisive which will include crucial features needed for supporting both debate and diagnosis.

This means that one tool will help users map the thinking behind all three major kinds of deliberative judgement.

The tool should be available in a few months.

Read Full Post »

 Check it out

Read Full Post »

Tonight Andy Bulka (our software architect) and I went to the “ICT Panorama” event at the University of Melbourne Computer Science and Software Engineering Department.

Each year, 4th year students in the department are divided into teams who work on innovative projects for “real world” clients.  Austhink Software was assigned a team, code-named “Got Code.”  Over the past 6 months or so the team has been working on a “Web 2.0” version of Rationale.  This consisted of a simple Flash version of the product (“Rationale Lite”) and an associated Flickr-type website for sharing Rationale maps, called Bickr.  A nice feature is that you can edit maps online within Bickr itself (imagine if, in Flickr, you could edit an image using a stripped-down Photoshop).

Other projects included a 3D Tetris, a neural-networks based system for predicting foreign exchange rates, and a system for playing a kind of ping-pong (using a real table) with a remote opponent. 

At the ICT Panorama event, all the teams display their projects.  They are judged not only on the quality of their work but also on how professionally they present it.  Three judges observe all projects, without giving away to the teams that they are judges.

A prize is awarded to the best project.  Got Code won…  Congratulations to the team, but also to Andy who managed them pretty closely.

We’ll be making Rationale Lite and/or Bickr available just as soon as we feasibly can. 


Read Full Post »

Brief mention of Austhink Software in “The States or Bust” in The Age and the Sydney Morning Herald today.  (Don’t be scared off by the ugly visages.)

Read Full Post »

At a number of universities around the world, people are now setting up studies to help determine the extent to which argument maps, or Rationale use, can help build skills or improve performance on difficult tasks.

One such person asked in an email:  “What is the average time for an adult learner to complete the building of an argument in Rationale?”

Unfortunately there is no simple answer to that question.  The time needed can vary enormously, depending on factors such as:

  • how complex is the argument?
  • are they coming up with their own argument (easier) or trying to map out an argument from a text written by somebody else (generally quite hard)?
  • how exhaustive and correct should the maps be?  Should all principles of argument mapping be properly observed?  Or is “anything which looks good enough to them” acceptable?

At one extreme, a simple argument of their own, done sloppily, should take only a minute or two.  But mapping an argument from, say, a journal article or opinion piece, and doing it properly, can take hours, even for somebody highly skilled.  And at the other extreme, mapping a truly complex body of argumentation can take months, even years.  Austhink is just completing an argument mapping assignment for a government department, looking at all the arguments surrounding a controversial major equipment purchase.  This has taken about four months with two people working on it.  Consider also the “mother of all maps,” Robert Horn’s Can Computers Think? series of maps, which took a team of people a number of years.

In practice, most non-specialists have only a finite appetite for mapping arguments, and limited capacity to apprehend deficiencies in their own work, and so are unlikely to spend more than, say, half an hour on any given task.  So, returning to the original question, here’s a very rough guess:

  • Simple tasks (e.g., coming up with one single-reason argument for a claim) – allow a few minutes per task
  • Complex tasks – allow around half an hour, or more
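
(If you are budgeting time for a study along these lines, the arithmetic is trivial; here is a throwaway sketch using the rough per-task guesses above, which are assumptions rather than measured data.)

```python
# Back-of-envelope session planning using the rough per-task estimates above.
# The minute figures are assumptions, not measurements.

MINUTES_SIMPLE = 3    # one single-reason argument of the participant's own
MINUTES_COMPLEX = 30  # mapping an argument from someone else's text, roughly properly

def session_minutes(n_simple, n_complex):
    return n_simple * MINUTES_SIMPLE + n_complex * MINUTES_COMPLEX

# e.g. a warm-up of four simple tasks plus one complex mapping task
print(session_minutes(4, 1))   # -> 42 minutes
```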

Read Full Post »
