
Anyone familiar with this blog knows that it frequently talks about argument mapping.  This is because, as an applied epistemologist, I’m interested in how we know things.  Often, knowledge is a matter of arguments and evidence.  However, argumentation can get very complicated.  Argument mapping helps our minds cope with that complexity by providing (relatively) simple diagrams.

Often what we are seeking knowledge about is the way the world works, i.e. its causal structure.  This too can be very complex, and so it's an obvious idea that “causal mapping” – diagramming causal structure – might help in much the same way as argument mapping.  And indeed various kinds of causal diagrams are already widely used for this reason.

What follows is a reflection on explanation, causation, and causal diagramming.  It uses as a springboard a recent post on blog of the Lowy Institute which offered a causal explanation of the popularity of Russian president Putin.  It also introduces what appears to be a new term – “causal storyboard” – for a particular kind of causal map.

In a recent blog post with the ambitious title “Putin’s Popularity Explained,” Matthew Dal Santo argues that Putin’s popularity is not, as many think, due to brainwashing by Russia’s state-controlled media, but to the alignment between Putin’s conservative policies and the conservative yearnings of the Russian public.

Dal Santo dismisses the brainwashing hypothesis on very thin grounds, offering us only “Tellingly, only 34% of Russians say they trust the media.” However professed trust is only weakly related to actual trust. Australians in surveys almost universally claim to distrust car salesmen, but still place a lot of trust in them when buying a car.

In fact, Dal Santo’s case against the brainwashing account seems to be less a matter of direct evidence than “either or” reasoning: Putin’s popularity is explained by the conservatism of the public, so it is not explained by brainwashing.

He does not explicitly endorse such a simple model of causal explanation, but he doesn’t reject it either, and it seems to capture the tenor of the post.

The post does contain a flurry of interesting numbers, quotes and speculations, and these can distract us from difficult questions of explanatory adequacy.

The causal story Dal Santo rejects might be diagrammed like this:

[Causal storyboard: the brainwashing account that Dal Santo rejects]

The dashed lines indicate the parts of the story he thinks are not true, or at least exaggerated. Instead, he prefers something like:

[Causal storyboard: Dal Santo’s preferred account]
However the true causal story might look more like this:

[Causal storyboard: a fuller account combining media influence and policy alignment]

Here Putin’s popularity is partly the result of brainwashing by a government-controlled media, and partly due to “the coincidence of government policies and public opinion.”

The relative thickness of the causal links indicates the differing degrees to which the causal factors are responsible. Often the hardest part of causal explanation is not ruling factors in or out, but estimating the extent to which they contribute to the outcomes of interest.

Note also the link suggesting that a government-controlled media might be responsible, in part, for the conservatism of the public. Dal Santo doesn’t explicitly address this possibility but does note that certain attitudes have remained largely unchanged since 1996. This lack of change might be taken to suggest that the media is not influencing public conservatism. However it might also be the dog that isn’t barking. One of the more difficult aspects of identifying and assessing causal relationships is thinking counterfactually. If the media had been free and open, perhaps the Russian public would have become much less conservative. The government-controlled media may have been effective in counteracting that trend.

The graphics above are examples of what I’ve started calling causal storyboards. (Surprisingly, at the time of writing this phrase turns up zero results on a Google search.) Such diagrams represent webs of events and states and their causal dependencies – crudely, “what caused what.”

For aficionados, causal storyboards are not causal loop diagrams or cognitive maps or system models, all of which represent variables and their causal relationships.  Causal loop diagrams and their kin describe general causal structure which might govern many different causal histories depending on initial conditions and exogenous inputs.  A causal storyboard depicts a particular (actual or possible) causal history – the “chain” of states and events.  It is an aid for somebody who is trying to understand and reason about a complex situation, not a precursor to a quantitative model.
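
For readers who want something more tangible, here is a minimal sketch (in Python, purely illustrative) of how a causal storyboard might be represented: the nodes are particular events or states, and each link carries a rough weight for the degree to which the upstream factor is responsible (the line thickness in the diagrams above) and a flag for whether the link is disputed (the dashed lines).  The node names and weights are my own placeholders, not anything taken from Dal Santo’s post.

    from dataclasses import dataclass, field

    @dataclass
    class CausalLink:
        cause: str              # upstream event or state
        effect: str             # downstream event or state
        weight: float = 1.0     # rough degree of responsibility (drawn as line thickness)
        disputed: bool = False  # drawn as a dashed line

    @dataclass
    class CausalStoryboard:
        links: list = field(default_factory=list)

        def add(self, cause, effect, weight=1.0, disputed=False):
            self.links.append(CausalLink(cause, effect, weight, disputed))

        def causes_of(self, effect):
            """List the (cause, weight) pairs feeding directly into an event or state."""
            return [(link.cause, link.weight) for link in self.links if link.effect == effect]

    # A crude rendering of the third diagram above, with invented weights
    sb = CausalStoryboard()
    sb.add("Government-controlled media", "Public support for Putin", weight=0.4)
    sb.add("Government-controlled media", "Conservatism of the Russian public", weight=0.2)
    sb.add("Conservatism of the Russian public", "Coincidence of policies and public opinion", weight=0.5)
    sb.add("Coincidence of policies and public opinion", "Public support for Putin", weight=0.6)

    print(sb.causes_of("Public support for Putin"))

Even a toy representation like this forces the useful questions: which factors feed into a given outcome, and how much weight are we prepared to assign to each link?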

Our emerging causal storyboard surely does not yet capture the full causal history behind Putin’s popularity. For example, it does not incorporate additional factors such as his reputed charisma. Nor does it trace the causal pathways very far back. To fully understand Putin’s popularity, we need to know why (not merely that) the Russian public is so conservative.

The causal history may become very complex. In his 2002 book Friendly Fire, Scott Snook attempts to uncover all the antecedents of a tragic incident in 1994 when two US fighter jets shot down two US Army helicopters. There were dozens of factors, intricately interconnected. To help us appreciate and understand this complexity, Snook produced a compact causal storyboard:

[Figure: Snook’s causal map of the 1994 friendly fire shootdown]

To fully explain is to delineate causal history as comprehensively and accurately as possible. However, full explanations in this sense are often not available. Even when they are, they may be too complex and detailed. We often need to zero in on some aspect of the causal situation which is particularly unusual, salient, or important.

There is thus a derivative or simplified notion of explanation in which we highlight some particular causal factor, or a small number of factors, as “the” cause. The Challenger explosion was caused by O-ring leaks. The cause of Tony Abbott’s fall was his low polling figures.

As Runde and de Rond point out, explanation in this sense is a pragmatic business. The appropriate choice of cause depends on what is being explained, to whom, by whom, and to what purpose.

In an insightful discussion of Scott Snook’s work, Gary Klein suggests that we should focus on two dimensions: a causal factor’s impact, and the ease with which that factor might have been negated, or could be negated in future. He uses the term “causal landscape” for a causal storyboard analysed using these factors. He says: “The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds. It portrays the complex range and interconnection of causes and identifies a few of the most important causes. Without reducing some of the complexity we’d be confused about how to act.”
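
Klein’s two dimensions lend themselves to a simple filtering exercise.  The sketch below is illustrative only (the factors and scores are invented placeholders, not Klein’s or Snook’s analysis), but it shows the basic move: score each factor in the storyboard for its impact and for how easily it could be negated, then surface the few factors most worth acting on.

    # Illustrative only: hypothetical factors and scores, not taken from Klein or Snook
    factors = {
        # factor: (impact 0-1, ease of negation 0-1)
        "Ambiguous identification procedures": (0.9, 0.8),
        "Helicopters and fighters on different radio frequencies": (0.8, 0.9),
        "Sheer bad luck in the timing of the flights": (0.6, 0.1),
    }

    def leverage(impact, ease_of_negation):
        """Crude 'causal landscape' score: high-impact factors that could actually be changed."""
        return impact * ease_of_negation

    ranked = sorted(factors.items(), key=lambda item: leverage(*item[1]), reverse=True)
    for name, (impact, neg) in ranked:
        print(f"{leverage(impact, neg):.2f}  {name}")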

This all suggests that causes and explanations are not always the same thing. It can make sense to say that an event is caused by some factor, but not fully explained by that factor. O-ring failure caused the Challenger explosion, but only partially explains it.

More broadly, it suggests a certain kind of anti-realism about causes. The world and all its causal complexity may be objectively real, but causes – what we focus on when providing brief explanations – are in significant measure up to us. Causes are negotiated as much as they are discovered.

What does this imply for how we should evaluate succinct causal explanations such as Dal Santo’s? Two recommendations come to mind.

First, a proposed cause might be ill-chosen because it has been selected from an underdeveloped causal history. To determine whether we should go along, we should try to understand the full causal context – a causal storyboard may be useful for this – and why the proposed factor has been selected as the cause.

Second, we should be aware that causal explanation can itself be a political act. Smoking-related lung cancer might be said to be caused by tobacco companies, by cigarette smoke, or by smokers’ free choices, depending on who is doing the explaining, to whom, and why. Causal explanation seems like the uncovering of facts, but it may equally be the revealing of agendas.

Read Full Post »

In our consulting work we have periodically been asked to review how judgments or decisions of a particular kind are made within an organisation, and to recommend improvements.  This has taken us to some interesting places, such as the rapid lead assessment center of a national intelligence agency, and recently, meetings of coaches of an elite professional sports team.

On other occasions, we have been asked to assist a group to design and build, more or less from scratch, a process for making a particular decision or set of decisions (e.g., decisions as to what a group should consider itself to collectively believe).

Both types of activity involve thinking hard about what the current/default process is or would be, and what kind of process might work more effectively in a given real-world context, in the light of what academics in fields such as cognitive science and organisational theory have learned over the years.

This sounds a bit like engineering.  My favorite definition of the engineer is somebody who can’t help but think that there must be a better way to do this.  A more comprehensive and workmanlike definition is given by Wikipedia:

Engineering is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes.

The activities mentioned above seem to fit this very broad concept: we were engaged to help improve or develop systems – in our case, systems for making decisions.

It is therefore tempting to describe some of what we do as decision engineering.  However this term has been in circulation for some decades now, as shown in this Google n-gram:

[Google n-gram: occurrences of “decision engineering” over time]

and its current meaning or meanings might not be such a good fit with our activities.  So, I set about exploring what the term means “out there”.

As usual in such cases, there doesn’t appear to be any one official, authoritative definition.  Threads appearing in various characterizations include:

  • Bringing standard engineering principles and techniques to bear on making decisions
  • Using more structured decision methods, including the application of decision analysis techniques
  • Basing decisions on “big data” and “data science,” such as predictive analytics

While each such thread clearly highlights something important, my view is that individually they are only part of the story, and collectively are a bit of a dog’s breakfast.  What we need, I think, is a more succinct, more abstract, and more unifying definition.  Here’s an attempt, based on Wikipedia’s definition of engineering:

Decision engineering is applying relevant knowledge to design, build, maintain, and improve systems for making decisions.

Relevant knowledge can include knowledge of at least three kinds:

  • Theoretical knowledge from any relevant field of inquiry;
  • Practical knowledge (know-how, or tacit knowledge) of the decision engineer;
  • “Local” knowledge of the particular context and challenges of decision making, contributed by people already in or familiar with the context, such as the decision makers themselves.

System is of course a very broad term, and for current purposes a system for making decisions, or decision system, is any complex part of the world causally responsible for decisions of a certain category.  Such systems may or may not include humans.  For example, decisions in a Google driverless car would be made by a complex combination of sensors, on-board computing processors, and perhaps elements outside the car such as remote servers.

However the decision processes we have worked on, which might loosely be called organisational decision processes, always involve human judgement at crucial points.  The systems responsible for such decisions include:

  • People playing various roles;
  • “Norms,” including procedures, guidelines, methods, and standards;
  • Supporting technologies ranging from pen and paper through sophisticated computers;
  • Various aspects of the environment or context of decision making.

For example, a complex organisational decision system produces the monthly interest rate decisions of the Reserve Bank of Australia, as hinted at in this paragraph from their website:

The formulation of monetary policy is the primary responsibility of the Reserve Bank Board. The Board usually meets eleven times each year, on the first Tuesday of the month except in January. Hence, the dates of meetings are well known in advance. For each meeting, the Bank’s staff prepare a detailed account of developments in the Australian and international economies, and in domestic and international financial markets. The papers contain a recommendation for the policy decision. Senior staff attend the meeting and give presentations. Monetary policy decisions by the Reserve Bank Board are communicated publicly shortly after the conclusion of the meeting.

and described in much more detail in this (surprisingly interesting) 2001 speech by the man who is now Governor of the Reserve Bank.

In most cases, decision engineering means taking an existing system and considering how to improve it.  A system can be better in various ways, including:

  • First and foremost, a higher decision hit rate, i.e. a greater proportion of decisions which are correct in the sense of choosing an optimal or at least satisfactory path of action (see the toy sketch below);
  • Greater efficiency, in the sense of using fewer resources or producing decisions more quickly;
  • Greater transparency or defensibility.
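
As a toy illustration of the first of these (a back-of-envelope sketch with made-up data), a decision hit rate is just the proportion of logged decisions later judged satisfactory or better:

    # Hypothetical decision log: (decision identifier, judged satisfactory or better?)
    log = [("2015-03", True), ("2015-04", True), ("2015-05", False), ("2015-06", True)]

    hit_rate = sum(ok for _, ok in log) / len(log)
    print(f"Decision hit rate: {hit_rate:.0%}")  # 75%

The hard part in practice is not the arithmetic, of course, but deciding, well after the event, which decisions should count as satisfactory.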

Now, in order to improve a particular decision system, a decision engineer might use approaches such as:

  • Bringing standard engineering principles and techniques to bear on making decisions
  • Using more structured decision methods, including the application of decision analysis techniques
  • Basing decisions on “big data” and “data science,” such as predictive analytics

(i.e., the “threads” listed above).  However the usefulness of these approaches will depend very much on the nature of the decision challenges being addressed.  For example, if you want to improve how elite football coaches make decisions in the coaching box on game day, you almost certainly will not introduce highly structured decision methods such as decision trees.

In short, I like this more general definition of decision engineering (in four words or less, building better decision systems) because it seems to get at the essence of what decision engineers do, allowing but not requiring that highly technical, quantitative approaches might be used.  And it accommodates my instinct that much of what we do in our consulting work should indeed count as a kind of engineering.

Whether we would be wise to publicly describe ourselves as decision engineers is however quite another question – one for marketers, not engineers.

Read Full Post »

There’s a familiar idea from the world of sport – that winning requires an elite team and not just a team of elite players.

Does something similar apply in the world of decision making?

In many situations, critical decisions are made by small groups.  The members of these groups are often “elite” in their own right.  For example, in Australia monthly interest rate decisions are made by the board of the Reserve Bank of Australia.  This is clearly a “team” of elite decision makers.

However it is not clear that they are an elite team of decision makers.   For current purposes, I define an elite decision team as a small decision group conforming to all or at least most of the following principles:

  1. The team operates according to rigorously thought-through decision making practices. Wherever possible these practices should be strongly evidence-based.
  2. The team has been trained to operate as a team using these practices. Members have well-defined and well-understood roles.
  3. Members have been rigorously trained as decision makers (and not just as, say, economists).
  4. The team, and members individually, are rigorously evaluated for their decision making performance.
  5. There is a program of continuous improvement.

Note also that the team should be a decision making team, i.e. one that makes decisions (commitments to courses of action) rather than judgements of some other kind such as predictions.

There are many types of teams which do operate according to analogs of these principles – for example elite sporting teams, as mentioned, and small military teams such as bomb disposal squads.  These teams’ operations involve decision making, but they are not primarily decision making teams.

I doubt the Board of the RBA is an elite decision team in this sense, but would be relieved to find out I was wrong.

More generally, I am currently looking for good examples of elite decision teams.  Any suggestions are most welcome.

Alternatively, if you think this idea of an elite decision team is somehow misconceived, that would be interesting too.

Read Full Post »

Well-known anti-theist Sam Harris has posted an interesting challenge on his blog.  He writes:

So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view. 

In the previous post on this blog, Seven Habits of Highly Critical Thinkers, habit #3 was Chase Challenges.  If nothing else, Harris’ post is a remarkable illustration of this habit.

The quality of his case is of course quite another matter.

I missed the deadline for submission, and I haven’t read the book (and don’t intend to, though it seems interesting enough). So I will just make a quick observation about the quality of Harris’ argument as formulated.

In a nutshell, simple application of argument mapping techniques quickly and easily shows that Harris’ argument, as stated by Harris himself on the challenge blog page, is a gross non-sequitur, requiring, at a minimum, multiple additional premises to bridge the gap between his premises and his conclusions.  In that sense, his argument as stated is easily shown to be seriously flawed.

Here is how Harris presents his argument:

1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

This formulation is short and clear enough that creating a first-pass argument map in Rationale is scarcely more than drag and drop:

[Argument map: first-pass Rationale map of Harris’ central argument]

Now, as explained in the second of the argument mapping tutorials, there are some basic, semi-formal constraints on the adequacy of an argument as presented in an argument map.

First, the “Rabbit Rule” decrees that any significant word or phrase appearing in the contention of an argument must also appear in at least one of the premises of that argument.  Any significant word or phrase appearing in the contention but not appearing in one of the premises has suddenly appeared out of thin air, like the proverbial magician’s rabbit, and so is informally called a rabbit.  Any argument with rabbits is said to commit rabbit violations.

Second, the Rabbit Rule’s sister, the “Holding Hands Rule,” decrees that any significant word or phrase appearing in one of the premises must appear either in the contention, or in another premise.

These rules are aimed at ensuring that the premises and contention of an argument are tightly connected with each other.  The Rabbit Rule tries to ensure that every aspect of what is claimed in the contention is “covered” in the premises.  If the Rabbit Rule is not satisfied, the contention is saying something which hasn’t been even discussed in the premises as stated.  (Not to go into it here, but this is quite different from the sense in which, in an inductive argument, the contention “goes beyond” the premises.) The Holding Hands Rule tries to ensure that any concept appearing in the premises is doing relevant and useful work.
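
To make the two rules concrete, here is a rough sketch of how they might be checked mechanically, treating the “significant words or phrases” of each claim simply as a set of terms.  (This is my own illustrative simplification; the term sets below are a crude hand-made rendering of Harris’ first inferential step, not output from Rationale.)

    def rabbit_violations(contention_terms, premise_term_sets):
        """Rabbit Rule: every significant term in the contention must appear in some premise."""
        covered = set().union(*premise_term_sets)
        return contention_terms - covered

    def holding_hands_violations(contention_terms, premise_term_sets):
        """Holding Hands Rule: every significant term in a premise must appear in the
        contention or in at least one other premise."""
        danglers = set()
        for i, terms in enumerate(premise_term_sets):
            elsewhere = set(contention_terms)
            for j, other in enumerate(premise_term_sets):
                if j != i:
                    elsewhere |= other
            danglers |= terms - elsewhere
        return danglers

    contention = {"morality and values", "right and wrong answers", "purview of science"}
    premises = [
        {"morality and values", "conscious minds", "well-being and suffering"},
        {"conscious minds", "natural phenomena", "laws of the universe"},
    ]
    print(rabbit_violations(contention, premises))        # rabbits, e.g. 'purview of science'
    print(holding_hands_violations(contention, premises)) # danglers, e.g. 'laws of the universe'

In this toy rendering, “purview of science” shows up immediately as a rabbit, which is exactly the gap discussed below.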

Consider then the basic argument consisting of Contention 1 and the premises beneath it.   It is obvious on casual inspection that much – indeed most – of what appears in Contention 1 does not appear in the premises.  Consider for example the word “purview”, or the phrase “falls within the purview of science”.  These do not appear in the premises as stated. What does appear in Premise 2 is “natural phenomena, fully constrained by the laws of the universe”.  But as would be obvious to any philosopher, there’s a big conceptual difference between these.

What Harris’ argument needs, at a very minimum, is another premise.  My guess is that it is something like “Anything fully constrained by the laws of the universe falls within the purview of science.”   But two points.  First, this suggested premise obviously needs (a) explication, and (b) substantiation.  In other words, Harris would need to argue for it, not assume it. Second, it may not be Harris’ preferred way of filling gaps (one of them, at least) between his premises and his conclusion.  Maybe he’d come up with a different formulation of the bridging premise.  Maybe he addresses this in his book.

It would be tedious to list and discuss the numerous Rabbit and Holding Hands violations present in the two basic arguments making up Harris’ two-step “proof”.   Suffice it to say that if both Rabbit Rule and Holding Hands Rule violations are called “rabbits” (we also use the term “danglers”), then his argument looks a lot like the famous photo of a rabbit plague in the Australian outback:

[Photo: rabbit plague in the Australian outback]

Broadly speaking, fixing these problems would require quite a bit of work:

  • refining the claims he has provided
  • adding suitable additional premises
  • perhaps breaking the overall argument into more steps.

Pointing this out doesn’t prove that his main contentions are false.  (For what little it is worth, I am quite attracted to them.)  Nor does it establish that there is not a solid argument somewhere in the vicinity of what Harris gave us. It doesn’t show that Harris’ case (whatever it is) for a scientific understanding of morality is mistaken.  What it does show is that his own “flagship” succinct presentation of his argument (a) is sloppily formulated, and (b) as stated, clearly doesn’t establish its contentions.   In short, as stated, it fails.  Argument mapping reveals this very quickly.

Perhaps this is why, in part, there is so much argy-bargy about Harris’ argument.

Final comment: normally I would not be so picky about how somebody formulated what may be an important argument.  However in this case the author was pleading for criticism.

Read Full Post »

Good grief! Who writes this stuff?

“Strategy ideation, formulation, and execution is essential for executives looking to drive business in today’s economy. At Columbia Business School Executive Education, we understand this need. To this end, we offer programs that equip executives with a range of tools and frameworks to define and implement their organization’s immediate and long-term strategies.”

Read Full Post »

Head First Rails is regarded as the best introductory tutorial-style book on Ruby on Rails for n00bs.  Except for one major problem – it was written for Rails 2.x, rather than the current Rails 3.x.  Many commands and pieces of code in the book are outdated, and this can lead to much frustration and wasted time for people (like me) earnestly trying to work their way through it.

I’ve created a list of modifications needed to make Head First Rails work with Rails 3.x.  It started out as just my own notes, but I decided to make it public since nobody else seems to have done this (including, disappointingly, the publishers and authors).

The list is incomplete and may contain errors.  If you can help improve it please let me know.

BTW for any Rails beginners working through Head First Rails, I strongly recommend the Rails for Zombies online tutorials.  They work very well in parallel with HFR.

(If any regular reader of this blog is interested why I’m trying to learn Rails, I’m just trying to get my head around it to help me in driving our new project, YourView.  More on that soon.)

Read Full Post »

I’ve recently noticed some interesting examples of “argument infographics” – graphics designed to convey complex arguments to wide audiences in an accessible and attractive manner.  Here are two:

[Thumbnails of two argument infographics; the “Seven Good Reasons” infographic is on the right]


The purist in me wants to say that these are argument infographics rather than argument maps properly so called.    An argument map displays the logical (evidential, inferential) relationships among components in a complex argument, typically using box-and-arrow format.  The relationships displayed with boxes-and-arrows in these infographics are not always logical in this sense.

This is easiest to see in the Seven Good Reasons infographic (on the right, above), where arrows between boxes simply indicate order or progression (first this argument, then this one…).  There is no logical coherence in the linking of one argument to the next.

Still, if these argument infographics are effective in helping people understand the arguments, then they’re a good thing.  And if there is a trend towards the visual display of complex argument – even if in a “merely” infographical way – then that’s a good thing too.

Indeed it is possible that a well-crafted argument infographic may be a better way to communicate complex arguments than a true argument map, the virtues of which may not be apparent to the general reader.

Read Full Post »
