Anyone familiar with this blog knows that it frequently talks about argument mapping. This is because, as an applied epistemologist, I’m interested in how we know things. Often, knowledge is a matter of arguments and evidence. However, argumentation can get very complicated. Argument mapping helps our minds cope with that complexity by providing (relatively) simple diagrams.
Often what we are seeking knowledge about is the way the world works, i.e. its causal structure. This too can be very complex, and so it’s an obvious idea that “causal mapping” – diagramming causal structure – might help in much the same way as argument mapping. And indeed various kinds of causal diagrams are already widely used for this reason.
What follows is a reflection on explanation, causation, and causal diagramming. It uses as a springboard a recent post on the blog of the Lowy Institute which offered a causal explanation of the popularity of Russian president Putin. It also introduces what appears to be a new term – “causal storyboard” – for a particular kind of causal map.
In a recent blog post with the ambitious title “Putin’s Popularity Explained,” Matthew Dal Santo argues that Putin’s popularity is not, as many think, due to brainwashing by Russia’s state-controlled media, but to the alignment between Putin’s conservative policies and the conservative yearnings of the Russian public.
Dal Santo dismisses the brainwashing hypothesis on very thin grounds, offering us only this: “Tellingly, only 34% of Russians say they trust the media.” However, professed trust is only weakly related to actual trust. Australians in surveys almost universally claim to distrust car salesmen, but still place a lot of trust in them when buying a car.
In fact, Dal Santo’s case against the brainwashing account seems to be less a matter of direct evidence than of “either/or” reasoning: Putin’s popularity is explained by the conservatism of the public, so it is not explained by brainwashing.
He does not explicitly endorse such a simple model of causal explanation, but he doesn’t reject it either, and it seems to capture the tenor of the post.
The post does contain a flurry of interesting numbers, quotes and speculations, and these can distract us from difficult questions of explanatory adequacy.
The causal story Dal Santo rejects might be diagrammed like this:
The dashed lines indicate the parts of the story he thinks are not true, or at least exaggerated. Instead, he prefers something like:
However the true causal story might look more like this:
Here Putin’s popularity is partly the result of brainwashing by a government-controlled media, and partly due to “the coincidence of government policies and public opinion.”
The relative thickness of the causal links indicates the differing degrees to which the causal factors are responsible. Often the hardest part of causal explanation is not ruling factors in or out, but estimating the extent to which they contribute to the outcomes of interest.
Note also the link suggesting that a government-controlled media might be responsible, in part, for the conservatism of the public. Dal Santo doesn’t explicitly address this possibility, but he does note that certain attitudes have remained largely unchanged since 1996. This lack of change might be taken to suggest that the media is not influencing public conservatism. However, it might also be the dog that isn’t barking. One of the more difficult aspects of identifying and assessing causal relationships is thinking counterfactually. If the media had been free and open, perhaps the Russian public would have become much less conservative. The government-controlled media may have been effective in counteracting that trend.
The graphics above are examples of what I’ve started calling causal storyboards. (Surprisingly, at the time of writing this phrase turns up zero results on a Google search.) Such diagrams represent webs of events and states and their causal dependencies – crudely, “what caused what.”
For aficionados: causal storyboards are not causal loop diagrams or cognitive maps or system models, all of which represent variables and their causal relationships. Causal loop diagrams and their kin describe a general causal structure which might govern many different causal histories, depending on initial conditions and exogenous inputs. A causal storyboard depicts a particular (actual or possible) causal history – the “chain” of states and events. It is an aid for somebody trying to understand and reason about a complex situation, not a precursor to a quantitative model.
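For readers who like things concrete, here is one way a causal storyboard might be sketched in code: a small directed graph whose nodes are particular events or states and whose links carry a rough strength of contribution (the “thickness” of a causal arrow). The node names and strengths below are illustrative guesses of mine, not claims drawn from Dal Santo’s post.

```python
# A toy causal storyboard: a directed graph whose nodes are particular
# events or states, and whose links carry a rough strength of contribution
# (the "thickness" of a causal arrow). Names and strengths are illustrative
# assumptions only, not claims drawn from Dal Santo's post.

causal_links = {
    ("government-controlled media", "public belief in the official narrative"): "weak",
    ("government-controlled media", "conservatism of the Russian public"): "weak",
    ("conservatism of the Russian public", "policy/opinion alignment"): "strong",
    ("Putin's conservative policies", "policy/opinion alignment"): "strong",
    ("policy/opinion alignment", "Putin's popularity"): "strong",
    ("public belief in the official narrative", "Putin's popularity"): "weak",
}

def contributors(effect):
    """List the factors feeding into a given effect, with link strength."""
    return [(cause, strength)
            for (cause, eff), strength in causal_links.items()
            if eff == effect]

for cause, strength in contributors("Putin's popularity"):
    print(f"{cause}  --({strength})-->  Putin's popularity")
```

Nothing here is quantitative in any serious sense; the point is simply that a storyboard is a web of particular events and states, whereas a causal loop diagram is a web of variables.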
Our emerging causal storyboard surely does not yet capture the full causal history behind Putin’s popularity. For example, it does not incorporate additional factors, such as his reputed charisma. Nor does it trace the causal pathways very far back. To fully understand Putin’s popularity, we need to know why (not merely that) the Russian public is so conservative.
The causal history may become very complex. In his 2002 book Friendly Fire, Scott Snook attempts to uncover all the antecedents of a tragic incident in 1994 in which two US fighter jets shot down two US Army helicopters. There were dozens of factors, intricately interconnected. To help us appreciate and understand this complexity, Snook produced a compact causal storyboard:
To fully explain is to delineate causal history as comprehensively and accurately as possible. However, full explanations in this sense are often not available. Even when they are, they may be too complex and detailed. We often need to zero in on some aspect of the causal situation which is particularly unusual, salient, or important.
There is thus a derivative or simplified notion of explanation in which we highlight some particular causal factor, or small number of factors, as “the” cause. The Challenger explosion was caused by O-ring leaks. The cause of Tony Abbott’s fall was his low polling figures.
As Runde and de Rond point out, explanation in this sense is a pragmatic business. The appropriate choice of cause depends on what is being explained, to whom, by whom, and for what purpose.
In an insightful discussion of Scott Snook’s work, Gary Klein suggests that we should focus on two dimensions: a causal factor’s impact, and the ease with which that factor might have been negated, or could be negated in future. He uses the term “causal landscape” for a causal storyboard analysed along these two dimensions. He says: “The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds. It portrays the complex range and interconnection of causes and identifies a few of the most important causes. Without reducing some of the complexity we’d be confused about how to act.”
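To make that sifting a little more concrete, here is a minimal sketch of how a causal landscape might prioritise factors, assuming a simple product of ratings on Klein’s two dimensions. The factors, the zero-to-one ratings, and the combining rule are invented for illustration; they are not Klein’s procedure.

```python
# A rough sketch of the sifting a "causal landscape" involves: rate each
# factor on two dimensions (impact, and how easily it might be negated),
# then surface the few that score highest. The factors, the zero-to-one
# ratings, and the simple product used to combine them are illustrative
# assumptions, not Klein's procedure.

factors = {
    "government-controlled media coverage": {"impact": 0.4, "ease_of_negation": 0.3},
    "conservatism of the public":           {"impact": 0.7, "ease_of_negation": 0.2},
    "policy/opinion alignment":             {"impact": 0.8, "ease_of_negation": 0.6},
    "Putin's reputed charisma":             {"impact": 0.5, "ease_of_negation": 0.1},
}

def causal_landscape(factors, top_n=2):
    """Rank factors by combined impact and ease of negation; keep the top few."""
    scored = {name: round(d["impact"] * d["ease_of_negation"], 2)
              for name, d in factors.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for name, score in causal_landscape(factors):
    print(f"{name}: {score}")
```

On this way of thinking, the output is not “the” cause so much as a shortlist of the factors most worth attending to.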
This all suggests that causes and explanations are not always the same thing. It can make sense to say that an event is caused by some factor, but not fully explained by that factor. O-ring failure caused the Challenger explosion, but only partially explains it.
More broadly, it suggests a certain kind of anti-realism about causes. The world and all its causal complexity may be objectively real, but causes – what we focus on when providing brief explanations – are in significant measure up to us. Causes are negotiated as much as they are discovered.
What does this imply for how we should evaluate succinct causal explanations such as Dal Santo’s? Two recommendations come to mind.
First, a proposed cause might be ill-chosen because it has been selected from an underdeveloped causal history. To determine whether to go along, we should try to understand the full causal context – a causal storyboard may be useful for this – and why the proposed factor has been singled out as the cause.
Second, we should be aware that causal explanation can itself be a political act. Smoking-related lung cancer might be said to be caused by tobacco companies, by cigarette smoke, or by smokers’ free choices, depending on who is doing the explaining, to whom, and why. Causal explanation seems like the uncovering of facts, but it may equally be the revealing of agendas.
There is an additional link that could be considered in the relationship between the media and the population: the media and the public are connected by a self-reinforcing loop, i.e. the media responds to the tone and direction of reporting that the public wants. This doesn’t suggest that the public asks to be brainwashed, but it does imply that the public may be more receptive to a certain slant in reporting.
The term ‘brainwashing’ seems one-directional, involving indoctrination and an involuntary change in beliefs. Possibly, the complex relationship between the political realm and the population is – more often than not – based on a reciprocity that is not captured by terms such as brainwashing.
“Conservatism” is a similarly loaded term, as it seems to imply a desire to return to past glory. However, conservative and progressive viewpoints might be united in the search for a national identity. No brainwashing required.
Causality may too often be construed to create a compelling narrative. Or, in Leon Trotsky’s words: “The end may justify the means as long as there is something that justifies the end.” However, a justification is not a proof that there is a causal link. Or maybe it is a proof, once current decisions are based on the myth, because then the myth becomes a cause.
“Brainwashing” and “conservatism” suggest that there is a sequence and a purpose. What if they are both justifications?
Here is a very nice causal storyboard from the NYTimes.