
Archive for the ‘Explanation’ Category

Anyone familiar with this blog knows that it frequently talks about argument mapping.  This is because, as an applied epistemologist, I’m interested in how we know things.  Often, knowledge is a matter of arguments and evidence.  However, argumentation can get very complicated.  Argument mapping helps our minds cope with that complexity by providing (relatively) simple diagrams.

Often what we are seeking knowledge about is the way the world works, i.e. its causal structure.  This too can be very complex, and so it’s an obvious idea that “causal mapping” – diagramming causal structure – might help in much the same way as argument mapping.  And indeed, various kinds of causal diagrams are already widely used for this reason.

What follows is a reflection on explanation, causation, and causal diagramming.  It uses as a springboard a recent post on the blog of the Lowy Institute which offered a causal explanation of the popularity of Russian president Putin.  It also introduces what appears to be a new term – “causal storyboard” – for a particular kind of causal map.

In a recent blog post with the ambitious title “Putin’s Popularity Explained,” Matthew Dal Santo argues that Putin’s popularity is not, as many think, due to brainwashing by Russia’s state-controlled media, but to the alignment between Putin’s conservative policies and the conservative yearnings of the Russian public.

Dal Santo dismisses the brainwashing hypothesis on very thin grounds, offering us only “Tellingly, only 34% of Russians say they trust the media.” However, professed trust is only weakly related to actual trust. Australians in surveys almost universally claim to distrust car salesmen, but still place a lot of trust in them when buying a car.

In fact, Dal Santo’s case against the brainwashing account seems to be less a matter of direct evidence than of “either/or” reasoning: Putin’s popularity is explained by the conservatism of the public, so it is not explained by brainwashing.

He does not explicitly endorse such a simple model of causal explanation, but he doesn’t reject it either, and it seems to capture the tenor of the post.

The post does contain a flurry of interesting numbers, quotes and speculations, and these can distract us from difficult questions of explanatory adequacy.

The causal story Dal Santo rejects might be diagrammed like this:

[Diagram: causal storyboard of the brainwashing account of Putin’s popularity]

The dashed lines indicate the parts of the story he thinks are not true, or at least exaggerated. Instead, he prefers something like:

[Diagram: causal storyboard of Dal Santo’s preferred account, with popularity flowing from the alignment of policy and public opinion]

However, the true causal story might look more like this:

[Diagram: combined causal storyboard in which both brainwashing and the alignment of policy and opinion contribute, with causal links of varying thickness]

Here Putin’s popularity is partly the result of brainwashing by a government-controlled media, and partly due to “the coincidence of government policies and public opinion.”

The relative thickness of the causal links indicates differing degrees to which the causal factors are responsible. Often the hardest part of causal explanation is not ruling factors in or out, but estimating the extent to which they contribute to the outcomes of interest.

Note also the link suggesting that a government-controlled media might be responsible, in part, for the conservatism of the public. Dal Santo doesn’t explicitly address this possibility but does note that certain attitudes have remained largely unchanged since 1996. This lack of change might be taken to suggest that the media is not influencing public conservatism. However, it might also be the dog that isn’t barking. One of the more difficult aspects of identifying and assessing causal relationships is thinking counterfactually. If the media had been free and open, perhaps the Russian public would have become much less conservative. The government-controlled media may have been effective in counteracting that trend.

The graphics above are examples of what I’ve started calling causal storyboards. (Surprisingly, at the time of writing this phrase turns up zero results on a Google search.) Such diagrams represent webs of events and states and their causal dependencies – crudely, “what caused what.”

For aficionados, causal storyboards are not causal loop diagrams or cognitive maps or system models, all of which represent variables and their causal relationships.  Causal loop diagrams and their kin describe general causal structure which might govern many different causal histories depending on initial conditions and exogenous inputs.  A causal storyboard depicts a particular (actual or possible) causal history – the “chain” of states and events.  It is an aid for somebody who is trying to understand and reason about a complex situation, not a precursor to a quantitative model.
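To make the idea concrete, here is one way a causal storyboard could be represented in software: a directed graph whose nodes are particular events or states and whose edges carry a rough strength, echoing the varying thickness of the links in the diagrams above. This is a minimal sketch of my own; the class, node labels and weights are illustrative assumptions, not anything from Dal Santo’s post or from an existing tool.

    from dataclasses import dataclass, field

    @dataclass
    class CausalStoryboard:
        """A particular causal history: events/states linked by 'caused, to degree w'."""
        edges: dict = field(default_factory=dict)   # (cause, effect) -> strength in (0, 1]

        def link(self, cause, effect, strength=1.0):
            self.edges[(cause, effect)] = strength

        def causes_of(self, effect):
            """All factors bearing on an outcome, strongest first."""
            found = [(c, w) for (c, e), w in self.edges.items() if e == effect]
            return sorted(found, key=lambda cw: -cw[1])

    # Illustrative weights only -- the point is the structure, not the numbers.
    story = CausalStoryboard()
    story.link("state-controlled media", "public conservatism", 0.3)
    story.link("public conservatism", "Putin's popularity", 0.6)
    story.link("state-controlled media", "Putin's popularity", 0.4)

    print(story.causes_of("Putin's popularity"))
    # [('public conservatism', 0.6), ('state-controlled media', 0.4)]

Note that, unlike a causal loop diagram, nothing here is a variable: each node names one event or state in one particular history.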

Our emerging causal storyboard surely does not yet capture the full causal history behind Putin’s popularity. For example, it does not incorporate any additional factors, such as his reputed charisma. Nor does it trace the causal pathways very far back. To fully understand Putin’s popularity, we need to know why (not merely that) the Russian public is so conservative.

The causal history may become very complex. In his 2002 book Friendly Fire, Scott Snook attempts to uncover all the antecedents of a tragic incident in 1994 when two US fighter jets shot down two US Army helicopters. There were dozens of factors, intricately interconnected. To help us appreciate and understand this complexity, Snook produced a compact causal storyboard:

[Image: Snook’s causal map of the 1994 Black Hawk shootdown]

To fully explain is to delineate causal history as comprehensively and accurately as possible. However, full explanations in this sense are often not available. Even when they are, they may be too complex and detailed. We often need to zero in on some aspect of the causal situation which is particularly unusual, salient, or important.

There is thus a derivative or simplified notion of explanation in which we highlight some particular causal factor, or small number of factors, as “the” cause. The Challenger explosion was caused by O-ring leaks. The cause of Tony Abbott’s fall was his low polling figures.

As Runde and de Rond point out, explanation in this sense is a pragmatic business. The appropriate choice of cause depends on what is being explained, to whom, by whom, and for what purpose.

In an insightful discussion of Scott Snook’s work, Gary Klein suggests that we should focus on two dimensions: a causal factor’s impact, and the ease with which that factor might have been negated, or could be negated in future. He uses the term “causal landscape” for a causal storyboard analysed using these factors. He says: “The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds. It portrays the complex range and interconnection of causes and identifies a few of the most important causes. Without reducing some of the complexity we’d be confused about how to act.”
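Klein’s two dimensions lend themselves to a simple triage exercise. The sketch below is a toy illustration of that idea, not Klein’s own procedure: the factor names loosely paraphrase widely reported aspects of the Black Hawk incident, and the scores and the multiplicative scoring rule are purely my assumptions.

    # Toy "causal landscape" triage: rank factors by Klein's two dimensions.
    # Factor names and all scores are illustrative assumptions.
    factors = [
        # (factor, impact 0-1, ease of negation 0-1)
        ("IFF code mismatch", 0.9, 0.7),
        ("helicopters on a different radio frequency", 0.8, 0.9),
        ("weak AWACS crew coordination", 0.7, 0.4),
    ]

    def triage(factors, top_n=2):
        """Return the top_n factors scored by impact * negability."""
        return sorted(factors, key=lambda f: f[1] * f[2], reverse=True)[:top_n]

    for name, impact, negability in triage(factors):
        print(f"{name}: impact={impact}, negability={negability}")

Ranking by the product favours causes that are both consequential and fixable – the “few of the most important causes” an actionable explanation should surface – while the storyboard itself preserves the complexity behind them.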

This all suggests that causes and explanations are not always the same thing. It can make sense to say that an event is caused by some factor, but not fully explained by that factor. O-ring failure caused the Challenger explosion, but only partially explains it.

More broadly, it suggests a certain kind of anti-realism about causes. The world and all its causal complexity may be objectively real, but causes – what we focus on when providing brief explanations – are in significant measure up to us. Causes are negotiated as much as they are discovered.

What does this imply for how we should evaluate succinct causal explanations such as Dal Santo’s? Two recommendations come to mind.

First, a proposed cause might be ill-chosen because it has been selected from an underdeveloped causal history. To determine whether we should go along, we should try to understand the full causal context – a causal storyboard may be useful for this – and why the proposed factor has been selected as the cause.

Second, we should be aware that causal explanation can itself be a political act. Smoking-related lung cancer might be said to be caused by tobacco companies, by cigarette smoke, or by smokers’ free choices, depending on who is doing the explaining, to whom, and why. Causal explanation seems like the uncovering of facts, but it may equally be the revealing of agendas.

Read Full Post »

Spotted at the Creation Museum:

[Photo: Creation Museum display on humans and dinosaurs]

Q: Are human bones found with dinosaur fossils?

A: None have been discovered yet.  However, if human bones aren’t found with dinosaur bones, it simply means they weren’t buried together.  Humans have come in contact with lots of animals, like crocodiles and coelacanths, but they aren’t buried with humans.

The obvious thing to say about this is that it is flagrant “confirmation bias” – seeking or treating evidence in such a way as to confirm one’s cherished beliefs rather than to evaluate or test them.

From an argument analysis perspective, though, it is a nice example of what, technically, we’d call an “inference rebuttal” – an objection to a primary objection which targets not any of the stated premises of the primary objection but rather the inference from the primary objection to the falsity of the main contention.

That’s quite a mouthful, but the basic idea is simple enough, and can be easily illustrated.

Doing so will help explain one of the most distinctive – but subtle – features of the Rationale software.

On the face of it, the fact that human bones have not been discovered with dinosaur fossils is an objection to the standard Creationist story, which includes the idea that humans and dinosaurs once roamed the earth at the same time.

[Argument map: the missing-fossils objection to the coexistence claim]

The premise of the objection is a blunt fact, and so the Creationist has to accept it:

[Argument map: the objection with its premise conceded as true]

However, the Creationist still wants to defuse the objection, and can do so by arguing that the premise, though true, doesn’t show that the contention is false.

To represent this kind of move, Rationale allows a lower-level objection to be connected to the primary objection itself rather than to any of its premises.  Graphically, the lower-level objection points to the word “opposes”:

[Argument map: a lower-level objection pointing at the “opposes” link]

If we evaluate this argument as a Creationist presumably would, the objection has been defused:

[Argument map: the evaluated map, with the objection defused]
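For readers who think in data structures, the distinctive move here – letting an objection attach to an inference rather than to a premise – can be modelled by allowing the target of an “opposes” link to be either a claim or another link. The sketch below is my own minimal illustration, not Rationale’s actual internal model:

    # Sketch: an argument map in which an objection may target either a claim
    # or another objection's inference link (an "inference rebuttal").
    # Class names are illustrative, not Rationale's internals.

    class Claim:
        def __init__(self, text):
            self.text = text

    class Opposes:
        """A link from a reason to a target; the target may itself be a link."""
        def __init__(self, reason, target):
            self.reason = reason    # a Claim
            self.target = target    # a Claim, or another Opposes link

    contention = Claim("Humans and dinosaurs roamed the earth at the same time")
    no_fossils = Claim("Human bones have not been found with dinosaur fossils")
    objection = Opposes(no_fossils, contention)

    not_buried = Claim("Absence of joint burial shows only that they weren't buried together")
    rebuttal = Opposes(not_buried, objection)   # targets the inference, not a premise

    # The premise stands unchallenged; what is attacked is the step from the
    # premise to the falsity of the contention.
    assert isinstance(rebuttal.target, Opposes)

The payoff of this structure is exactly what the diagrams show: the premise can be conceded as true while the objection as a whole is defused.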

There is, however, another way to read the Creationist’s argument.  This way of framing things probably better reflects the Creationist’s underlying mindset.  From this perspective, creationist “science” combined with the basic facts implies an interesting “discovery”: those humans who did (supposedly) coexist with dinosaurs were never buried with said dinosaurs:

[Argument map: creationist “science” plus the fossil facts yielding the “discovery”]

Read Full Post »

A colleague wrote:

In both your new Rationale materials and your old Reason! lessons you distinguish between reasons and explanations. For instance, you point out that the word “because” is sometimes a reason indicator, other times an explanation indicator. I was wondering why, pedagogically, you make a point of distinguishing between arguments and explanations. You seem to want the students to focus primarily on the former and not the latter, but I anticipate that my students will find this confusing. I was wondering why, as a pedagogical matter, you point out this distinction to your students, and how you explain it to them.

My reply, such as it was:

I think that in the larger world of critical thinking, students should understand evidential reasons, and what makes them strong, and explanatory reasons, and when explanations are good ones. However when we are focusing on the former, the latter tends to muddy the waters.

Deanna Kuhn, in her classic The Skills of Argument, provides weighty evidence that a very large proportion of people have a seriously deficient grasp of the basic skills of reasoning and argument, and she diagnoses the problem as due in part to a poor grip on what an (evidential) reason is, i.e., what evidence consists in. Part of understanding this notion of an evidential relationship is understanding that sometimes, when we say things like “the reason for X is Y,” we’re not using Y to convince somebody that X is true; rather, we’re using it to explain X, which may already be uncontroversial, instead of offering it as evidence for X.

There is now a considerable psychological literature on (evidential) reasons versus explanations. My take on this, which I think is broadly consistent with the literature, is that (causal) explanations come first. Our three-year-old seems to be getting a grasp of causal relations and answering “why” questions in terms of causal narratives, but she’s a long way from understanding the notion of evidence. The latter is a more sophisticated intellectual operation, one which is separated from explanation only with considerable maturity. A problem we face as university teachers is that our students haven’t fully made this transition; they are still foggy about the difference, which means they are foggy about the notion of evidence, which means they haven’t yet mastered the most fundamental concept in reasoning and argumentation.

You’ll notice that in Rationale, the second or “Reasoning” mode uses “because/but” language. This is ambiguous between explanations and evidential reasons. In designing the software we quite deliberately “allowed” that ambiguity, since it helps people who aren’t yet clear on the distinction become comfortable with structured reasoning on their own terms. As they progress to proper use of “Analysis” mode, with its language more weighted to the evidential (“supports”), they should become clearer about what an evidential relationship consists in, not by being lectured on the topic (though that might sometimes help) but by dealing with examples in a scaffolded way.

By the way, Deanna Kuhn has an interesting piece on people’s ability to identify causal relationships in the current edition of Scientific American Mind.

Read Full Post »