
Archive for the ‘Argument Mapping’ Category

John Stuart Mill, in his classic On Liberty, said

three-fourths of the arguments for every disputed opinion consist in dispelling the appearances which favour some opinion different from it.

In this spirit, the second lesson of our free email course, Argument Mapping: Make Your Case Clear and Compelling, covers the importance of anticipating and responding to objections to your position, and shows how you can use argument mapping to organise these arguments.

A participant, Chantal, asked: “My question would be about how to produce objections. You are saying we can train for that. Sometimes I try and no interesting idea will arise :( What type of question should I be asking myself to create this other point of view?”

This is an excellent question.  How might one actually go about identifying the strongest objections to one’s own position?

Here are some things you can try.  Of course not all of these may be feasible in your situation.

1. Ask Opponents, or Bystanders

Perhaps the most obvious strategy is just to ask one or more people who strongly disagree with your position.  Such people are likely to be quite happy to help, and are likely to know the best objections.

If you can’t ask somebody who strongly disagrees, you can try asking somebody who is neutral on the topic.  Having no emotional involvement in the matter, they may find it easier than you do to see the problems with your position.

2. Research the Topic

If your position is on an issue that many people may have considered, a little digital sleuthing will often quickly uncover the main arguments on the other side.  For public issues, it should be easy to find op-eds or magazine articles, government reports, and so on.  For more technical or academic issues, scholar.google.com is a great resource.

3. Adapt Objections to Similar Positions

The best arguments against your position might just be adaptations of the best arguments against similar positions.  For example, if you are proposing that there should be a new freeway to the airport, you could look at proposals for freeways elsewhere to quickly get an idea of the kind of objections you are likely to encounter.

4. Use Standard Form Objections

This is a closely related suggestion.  There are many standard types of objections to positions of various kinds.  For example, any position which involves restricting people’s behavior – e.g., a proposal to ban vaping in public places – will encounter objections based on individual rights and liberties.  (See the rest of Mill’s On Liberty.)  If your position is that your group or team should pursue a certain course of action, there will be objections based on risk, particularly worst-case outcomes.  And so on.

5. Construct Objections from Interests

Consider what interests are threatened by your position.  Objections might be direct or indirect expressions of those interests.  For example, if your position is that our future energy needs should be met by large nuclear fusion plants, your position will threaten anyone with an interest (commercial, ideological, or any other type) in standard renewable energy industries such as wind or solar.  Those interests will lead to objections such as the impact on jobs in regional areas.

6. Identify and Challenge Assumptions

Any position will depend on a range of assumptions.  You can identify objections by ferreting out all or most of your assumptions and challenging them yourself.  One way to do that, covered in Lessons 4 and 5 of the email course, is to use principles of logic to expose the hidden assumptions in the arguments supporting your position.


Read Full Post »

Note: this is a draft section of a larger guide.  Comments welcome. 

What is reasoning? Everyone has an intuitive sense, though many would struggle if asked to define it.

A dictionary is usually a good starting point. Merriam-Webster defines reasoning as “the process of thinking about something in a logical way in order to form a conclusion or judgment.”

This is OK as far as it goes, but we need to expand and sharpen it quite a bit.  To do this, let’s look at some simple examples.   

Reasoning as a mental activity

Suppose Daniel, someone you know and trust, tells you that a person he knows, Marie, is married.  You now know Marie is in fact married.  Put differently, you are now confident that the claim Marie is married is true.  

Now Daniel asks: does Marie have a husband? Think about that before reading on.

 

If you’re like most people, you would have quickly thought something like Of course Marie has a husband – she’s married! But then you may well have reflected a bit more.  Why would Daniel ask about this, if the answer is so obvious?  What’s the trick?  

The “trick,” of course, is that Marie could be married to a woman, and so have a wife.  Marie might be a lesbian living in a state that allows same-sex marriage.  Or Marie might in fact be a straight man.  Marie’s being married doesn’t prove she has a husband – though she probably does.

Your thinking here involved considering various ideas – Marie’s being married, Marie’s being a lesbian, Marie’s being a man, and perhaps others – and arriving at a judgement about Marie’s having a husband.  This is reasoning in the Merriam-Webster sense.

In slightly technical terminology, we say that you considered various claims, and also your confidence in the truth of these claims:

Claims                                     Your confidence in their truth
Marie is married.                          Certain
Marie is a lesbian, married to a woman.    Remote possibility
Marie is a man, married to a woman.        Remote possibility

and, given the logical relationships among these claims, you arrived at a level of confidence in another claim:

Claim                                      Your confidence in its truth
Marie has a husband.                       Probable

So in this sense, reasoning is a mental activity; it is:

  • Understanding the logical relationships, if any, among claims; and  
  • Adjusting your confidence in those claims accordingly.  

However this is not the full story.

Reasoning is also the network of claims

Sometimes the word “reasoning” is used to refer not to the mental activity but to the claims themselves.

For example here are the various claims in the above example, with a few extra words (but, and, so) used to indicate logical relationships among them:

[Image: Marie reasoning]

This is the sense of “reasoning” we are using when we say things like Show me your reasoning! or The reasoning in the article is flawed.  

Reasoning in this sense is like a social network, except claims replace people, and logical relationships replace personal relationships.  Note that just as some people in a social network have no relationship with each other, some of the claims in the reasoning might not be logically related at all.

Thus, “reasoning” has two different meanings: the mental activity, and the network of claims.  These are of course closely connected; the network is what the mental activity is about.  
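The network sense can be made concrete with a small sketch. The following is purely illustrative (the relation labels and the way confidence is recorded are my own, not any standard notation), representing the Marie example as a network of claims:

```python
# A network of claims: nodes are claims, links are logical relations.
# The "so"/"but" labels mirror the connecting words used in the text;
# the confidence labels come from the tables above.
claims = {
    "married": "Marie is married.",
    "lesbian": "Marie is a lesbian, married to a woman.",
    "man":     "Marie is a man, married to a woman.",
    "husband": "Marie has a husband.",
}

confidence = {
    "married": "certain",
    "lesbian": "remote possibility",
    "man":     "remote possibility",
    "husband": "probable",
}

# Logical relations as (source claim, relation, target claim).
# "so" marks support for the target; "but" marks a claim that weakens it.
relations = [
    ("married", "so",  "husband"),
    ("lesbian", "but", "husband"),
    ("man",     "but", "husband"),
]

for src, rel, tgt in relations:
    print(f"{claims[src]} [{confidence[src]}] --{rel}--> {claims[tgt]}")
```

Note that, as in a social network, a claim with no entry in the relations list would simply be an isolated node.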

Reasoning can be presented in prose, or in a diagram

A network of logically related claims is an abstract thing.  We always need some way to show or present the network, so that our minds can see and follow it.

The standard way to do this is to express the claims in prose (writing or speech).  Examples abound; just look at the opinion page of any newspaper.  Here’s an example of some reasoning expressed in standard prose:

[Image: Religion Reasoning.png]

It’s not set out rigidly as in the list for the Marie example above, but it is still expressing logically related claims, aimed at getting you to agree that not all religions deserve equal respect.  (Plus, it’s more fun to read.)

Representing reasoning in text is so common, and so normal, that most people hardly even realise that that is what they are doing.  There is however an alternative.  We can represent a network of claims diagrammatically.  

Here’s a diagram for the religion example:

[Image: Religion diagram.png]

Note that arrows are used instead of words like but, and, and so in the Marie example; and the claims are arranged left to right in a logical order, though one quite different to the order in which they appeared in the original text.

There are lots of different ways to diagram reasoning, depending on what conventions you choose to adopt.  The diagram above is very minimalist.  In this course, we’ll be using a few different types of diagramming.  

To understand somebody’s reasoning, we must model it

As mentioned, people almost always present reasoning in ordinary prose.  Consequently, we (the readers) have to interpret the prose in order to understand what their reasoning is.  Sometimes this is simple and effortless.  Other times, it is very difficult.  Often it is not at all obvious exactly what the reasoning is, and we have to make our best guess.  

In this course, such “guesses” or interpretations are called models of reasoning.  The diagram above presents a model of the reasoning in the religion text.

Coming up with this model required:

  • Figuring out what claims were being made as part of the reasoning.  For example, the sentence “Jedi knights, for example?” was interpreted as making the claim It is appropriate to ridicule Jedi Knights.
  • Figuring out what logical relationships, if any, these claims are supposed to have to each other.  The arrows show these logical relationships.   Notice that nothing in the original text explicitly specified these particular relationships.  They are a matter of interpretation.  

Now, you may not agree with the model expressed in the diagram.  You may think that the author’s reasoning was different.  You might be right; but that would just highlight that you are coming up with your own model of the reasoning, and that coming up with such models is what we always have to do when we read or listen to prose presentations.  

[Image: Religion Model.png]

An argument map displays a model of reasoning

In practice, a reasoning model is usually displayed diagrammatically.  In the graphic above, the middle representation, the list of claims and their relationships, was included for a couple of reasons.  First, it emphasises the point that reasoning (in one sense) is a set of claims with logical relationships.  Second, it makes visually clear that the same reasoning can be expressed in prose or displayed in a diagram.  

A diagram displaying a model of reasoning in some text can be thought of as a kind of map.   A good analogy here is the classic subway map.  The subway map does not show the subway system exactly as it is in reality.  Rather, it portrays certain aspects of the subway system.  Similarly, a diagram of the reasoning expressed in a piece of prose cannot display the reasoning itself; it can only ever show a model of the reasoning.  

A diagram displaying a model of the reasoning expressed in a text is called an argument map.  

[Image: Religion Map.png]

Recap

We’ve just covered a fair bit of theory, so here is a brief recap.  We defined reasoning as

  1. In one sense, a mental activity, in which we understand the logical relationships among claims, and adjust our confidence in the truth of those claims accordingly.  
  2. In another sense, a network of claims defined by logical relationships.    

Reasoning in the second sense must always be expressed or displayed in some way so that we can see what it is and apply our reasoning capacities to it.  Almost always, reasoning is laid out in prose (speech or writing).  However it is also possible to present reasoning diagrammatically.  A diagram will usually be much better than prose in specifying exactly what the reasoning is.  

Often, it is not easy to identify the reasoning somebody has expressed in prose.  We need to make our best guess as to what that reasoning is; in other words, we need to come up with a model of the reasoning. An argument map is a diagram displaying such a model.



Many people who are new to argument mapping look for a convenient software tool.  Similarly, instructors in critical thinking or informal logic would often like to point their beginner students to a suitable tool for basic argument diagramming.  Ideally that tool would:

  • Be easy to use
  • Be good enough for simple maps
  • Not require installation
  • Not require a specific operating system (Windows, Mac) or browser
  • Not require creating an account on (yet another) website
  • Integrate seamlessly with other tools already being used
  • Be free

After much searching around over the years, my current view is that the tool best meeting these conditions is Microsoft SmartArt.  Nearly everyone already has, or has access to, Word and PowerPoint, and many use them almost every day.  Most would be surprised to learn that these programs have a passable built-in facility for quickly creating simple argument maps.

Of course I’d known about SmartArt, and the possibility of using it for argument mapping, for years.  However for most of that time I’d written it off as being superficially attractive but too limited and frustrating to use.  Recently I’ve changed my tune.  As described below, if you pick the right template and persevere a little bit, you’ll find that SmartArt can do a reasonable job.  It is certainly not ideal, but it may be the best – or rather, the least bad – option currently available.

If you’re not familiar with SmartArt there are introductory videos on YouTube, such as this one.

In what follows I’ll assume that argument maps are (basically; see below) hierarchies or tree structures.  This is convenient because all SmartArt templates are based on hierarchies, represented in editing mode as indented lists.  Some of the SmartArt templates are explicitly classified as “Hierarchy”:

smartart

The templates I find work best are Labeled Hierarchy and Table Hierarchy.  (Tip: don’t bother with the one called Hierarchy.)

Here’s a very simple argument map in Labeled Hierarchy format:

[Image: simple trump map]

This is using the default colour scheme.  A little adjusting using the usual formatting commands results in a map with a more standard colouring:

[Image: simple trump map colour]

This template has a few drawbacks.  For example, the lines joining the arguments to the contention really should be separate arrows.  Overall, however, it is a pretty classy diagram, and it only takes a minute or two to create.

A neat feature of SmartArt is you can easily change the template while keeping the content the same.  Here’s the same map (minus the labels) in Table Hierarchy format:

[Image: tableformat]

I’ve included in this image the editing panel at left.  This is only visible when the SmartArt graphic is selected.

As you’d expect, arguments can be nested indefinitely deeply:

[Image: tableformat2]

The SmartArt algorithm is “space filling” so that no matter how many nodes there are in the argument, the map will fit into whatever space you specify for the SmartArt graphic.  The SmartArt graphic can be resized by simple dragging operations.  If you want to create a really complex map, you can set a large custom size for your Word page or your PowerPoint slide, and add as many boxes as you like.

Any experienced argument mapper reading this will no doubt be thinking something like:

Fine, but what about multi-premise arguments (a.k.a. linked arguments)?

The reality is that hierarchical argument maps are not actually simple tree structures.  The technical name for the kind of structure that argument maps have is hi-tree; see this paper for explanation, and a description of layout algorithms for hi-trees.  SmartArt is based on simple tree structures, and so in principle cannot properly represent reasoning. However the Table Hierarchy format allows a pretty good approximation:

[Image: linked]

Note the nested linked arguments.

To create the bar which binds premises into a linked argument, you just create an empty node in the hierarchy, and resize and recolour it appropriately.
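Since all structural editing happens in SmartArt’s indented list, it can help to think of the map as nested text. Here is a hypothetical sketch (the node names are mine) of how a linked argument sits inside the hierarchy, with an empty node standing in for the binding bar:

```python
# Each node in the SmartArt editing panel is effectively (text, children).
# An empty-text node plays the role of the bar that binds two premises
# into a linked (multi-premise) argument.
def show(node, depth=0):
    """Print the hierarchy as an indented list, like SmartArt's panel."""
    text, children = node
    print("  " * depth + (text or "[bar]"))
    for child in children:
        show(child, depth + 1)

argument = (
    "Contention", [
        ("", [                      # empty node = linking bar
            ("Premise 1a", []),
            ("Premise 1b", []),
        ]),
        ("Objection 2", []),
    ],
)

show(argument)
```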

In theory, there’s no limit to how complex Table Hierarchy maps could get.  In practice, the map above is towards the upper limit for SmartArt argument maps.  The biggest problem is that modifying the map structure becomes a challenging exercise in hierarchical puzzle-solving.  You can’t just drag and drop objects to add to, or modify, a map; all editing of structure is done in the left-hand panel (see graphic above) as operations on an indented list.  This is easy in simple cases but becomes frustrating and time-consuming in more complex maps.

Even in simple cases, using SmartArt to create argument diagrams takes a certain amount of familiarity with SmartArt manipulation and Word formatting more generally.  I haven’t tried to cover these topics in this post.  If you’re an instructor recommending SmartArt as a diagramming tool, you’d probably want to have/get that familiarity yourself and then give your students some guidance.  Guidelines might include:

  • Use the right template (NOT the one called “Hierarchy”)
  • Set the font to a specific size across the whole graphic rather than allowing the algorithm to set font sizes
  • When needed, resize the whole graphic so text fits nicely in boxes

These are very simple operations to carry out when you’re familiar with them.

You should probably also provide students with semi-prepared graphics to use as starting points.

For more advanced users…

For anyone who wants to get more serious about argument mapping, but still wants

  1. To stay within the Word environment; and
  2. Something free

there is the CASE-mapping Word Add-in we created to support a specific variety of argument mapping:

[Image: 4787835]

It comes with an instruction manual, but the support material that is currently publicly available is quite limited, so you need to have a pretty good idea what you’re doing to find this useful.



The question of who actually wrote the works attributed to “William Shakespeare” is a genuine conundrum.  In fact it may be the greatest “whodunnit” of all time.

Although mainstream scholars tend to haughtily dismiss the issue, there are very serious problems with the hypothesis that the author was William Shakspere of Stratford-upon-Avon.  However, all other candidates also have serious problems.  For example, Edward de Vere died in 1604, but plays kept appearing for another decade or so.  Hence the conundrum.

Recently, however, this conundrum may have been resolved.  A small group of scholars (James, Rubinstein, Casson) have been arguing the case for Henry Neville.  A new book, Sir Henry Neville Was Shakespeare, presents an “avalanche” of evidence supporting Neville.  Nothing comparable has been available for any other candidate.

Suppose Rubinstein et al are right.  How can the relevant experts, and interested parties more generally, reach rational consensus on this?  How could the matter be decisively established?  How can the process of collective rational resolution be expedited?

A workshop later this month in Melbourne will address this issue.  The first half will involve traditional presentations and discussion, including Rubinstein making the case for Neville.

The second half will be attempting something quite novel.  We will introduce a kind of website – an “arguwiki” where the arguments and evidence can be laid out, discussed and evaluated not as a debate, in any of the standard formats, but as a collaborative project.  The workshop will be a low-key launch of the Shakespeare Authorship Arguwiki; and later, all going well, it will be opened up to the world at large.  Our grand ambition is that the site, or something like it, may prove instrumental in resolving the greatest whodunnit of all time, and more generally be a model for collective rational resolution of difficult issues.

The workshop is open to any interested persons, but there are only a small number of places left.

Register now.  There is no charge for attending.



Anyone familiar with this blog knows that it frequently talks about argument mapping.  This is because, as an applied epistemologist, I’m interested in how we know things.  Often, knowledge is a matter of arguments and evidence.  However, argumentation can get very complicated.  Argument mapping helps our minds cope with that complexity by providing (relatively) simple diagrams.

Often what we are seeking knowledge about is the way the world works, i.e. its causal structure.  This too can be very complex, and so it’s an obvious idea that “causal mapping” – diagramming causal structure – might help in much the same way as argument mapping.  And indeed various kinds of causal diagrams are already widely used for this reason.

What follows is a reflection on explanation, causation, and causal diagramming.  It uses as a springboard a recent post on the blog of the Lowy Institute which offered a causal explanation of the popularity of Russian president Putin.  It also introduces what appears to be a new term – “causal storyboard” – for a particular kind of causal map.


 

In a recent blog post with the ambitious title “Putin’s Popularity Explained,” Matthew Dal Santo argues that Putin’s popularity is not, as many think, due to brainwashing by Russia’s state-controlled media, but to the alignment between Putin’s conservative policies and the conservative yearnings of the Russian public.

Dal Santo dismisses the brainwashing hypothesis on very thin grounds, offering us only “Tellingly, only 34% of Russians say they trust the media.” However professed trust is only weakly related to actual trust. Australians in surveys almost universally claim to distrust car salesmen, but still place a lot of trust in them when buying a car.

In fact, Dal Santo’s case against the brainwashing account seems to be less a matter of direct evidence than “either or” reasoning: Putin’s popularity is explained by the conservatism of the public, so it is not explained by brainwashing.

He does not explicitly endorse such a simple model of causal explanation, but he doesn’t reject it either, and it seems to capture the tenor of the post.

The post does contain a flurry of interesting numbers, quotes and speculations, and these can distract us from difficult questions of explanatory adequacy.

The causal story Dal Santo rejects might be diagrammed like this:

[Image: putin1]

The dashed lines indicate the parts of the story he thinks are not true, or at least exaggerated. Instead, he prefers something like:

[Image: putin2]
However the true causal story might look more like this:

[Image: putin3.jpg]

Here Putin’s popularity is partly the result of brainwashing by a government-controlled media, and partly due to “the coincidence of government policies and public opinion.”

The relative thickness of the causal links indicates the differing degrees to which the causal factors are responsible. Often the hardest part of causal explanation is not ruling factors in or out, but estimating the extent to which they contribute to the outcomes of interest.

Note also the link suggesting that a government-controlled media might be responsible, in part, for the conservatism of the public. Dal Santo doesn’t explicitly address this possibility but does note that certain attitudes have remained largely unchanged since 1996. This lack of change might be taken to suggest that the media is not influencing public conservatism. However, it might also be the dog that isn’t barking. One of the more difficult aspects of identifying and assessing causal relationships is thinking counterfactually. If the media had been free and open, perhaps the Russian public would have become much less conservative. The government-controlled media may have been effective in counteracting that trend.

The graphics above are examples of what I’ve started calling causal storyboards. (Surprisingly, at the time of writing this phrase turns up zero results on a Google search.) Such diagrams represent webs of events and states and their causal dependencies – crudely, “what caused what.”

For aficionados, causal storyboards are not causal loop diagrams or cognitive maps or system models, all of which represent variables and their causal relationships.  Causal loop diagrams and their kin describe general causal structure which might govern many different causal histories, depending on initial conditions and exogenous inputs.  A causal storyboard depicts a particular (actual or possible) causal history – the “chain” of states and events.  It is an aid for somebody who is trying to understand and reason about a complex situation, not a precursor to a quantitative model.
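In that spirit, a causal storyboard can be sketched as a directed graph of particular events and states. The following toy is illustrative only: the events come from the Putin example above, but the link weights (standing in for line thickness, i.e. estimated degree of responsibility) are invented:

```python
# A causal storyboard as a directed graph: nodes are particular
# events/states, edges are causal links.  The weight of an edge stands
# in for its drawn "thickness" (illustrative numbers, not data).
links = [
    ("government-controlled media", "public conservatism",  0.3),
    ("government-controlled media", "Putin's popularity",   0.4),
    ("conservative policies",       "Putin's popularity",   0.6),
    ("public conservatism",         "Putin's popularity",   0.5),
]

def causes_of(outcome):
    """List the causal factors feeding into an outcome, strongest first."""
    factors = [(weight, src) for src, tgt, weight in links if tgt == outcome]
    return [src for weight, src in sorted(factors, reverse=True)]

print(causes_of("Putin's popularity"))
```

Notice that "government-controlled media" appears both as a direct factor and, via "public conservatism", an indirect one, which is exactly the kind of pathway the diagram is meant to keep visible.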

Our emerging causal storyboard surely does not yet capture the full causal history behind Putin’s popularity. For example it does not incorporate any additional factors, such as his reputed charisma. Nor does it trace the causal pathways very far back. To fully understand Putin’s popularity, we need to know why (not merely that) the Russian public is so conservative.

The causal history may become very complex. In his 2002 book Friendly Fire, Scott Snook attempts to uncover all the antecedents of a tragic incident in 1994 in which two US fighter jets shot down two US Army helicopters. There were dozens of factors, intricately interconnected. To help us appreciate and understand this complexity, Snook produced a compact causal storyboard:

[Image: 148914-151707.png]

To fully explain is to delineate causal history as comprehensively and accurately as possible. However, full explanations in this sense are often not available. Even when they are, they may be too complex and detailed. We often need to zero in on some aspect of the causal situation which is particularly unusual, salient, or important.

There is thus a derivative or simplified notion of explanation in which we highlight some particular causal factor, or small number of factors, as “the” cause. The Challenger explosion was caused by O-ring leaks. The cause of Tony Abbott’s fall was his low polling figures.

As Runde and de Rond point out, explanation in this sense is a pragmatic business. The appropriate choice of cause depends on what is being explained, to whom, by whom, and for what purpose.

In an insightful discussion of Scott Snook’s work, Gary Klein suggests that we should focus on two dimensions: a causal factor’s impact, and the ease with which that factor might have been negated, or could be negated in future. He uses the term “causal landscape” for a causal storyboard analysed using these factors. He says: “The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds. It portrays the complex range and interconnection of causes and identifies a few of the most important causes. Without reducing some of the complexity we’d be confused about how to act.”
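Klein’s two dimensions can be turned into a rough ranking exercise. The sketch below is hypothetical: the factors are loosely based on the Challenger example mentioned earlier, the numbers are invented, and combining the dimensions by multiplication is my simplification, not Klein’s proposal:

```python
# Causal landscape, sketched: rate each factor on Klein's two
# dimensions -- impact, and how easily it might have been negated --
# then rank by their product (values on 0-1 scales are invented).
factors = {
    "O-ring design":           {"impact": 0.9, "ease_of_negation": 0.7},
    "cold launch weather":     {"impact": 0.8, "ease_of_negation": 0.9},
    "launch-decision culture": {"impact": 0.6, "ease_of_negation": 0.3},
}

def landscape(factors):
    """Order factors from most to least actionable under this scoring."""
    score = lambda name: (factors[name]["impact"]
                          * factors[name]["ease_of_negation"])
    return sorted(factors, key=score, reverse=True)

print(landscape(factors))
```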

This all suggests that causes and explanations are not always the same thing. It can make sense to say that an event is caused by some factor, but not fully explained by that factor. O-ring failure caused the Challenger explosion, but only partially explains it.

More broadly, it suggests a certain kind of anti-realism about causes. The world and all its causal complexity may be objectively real, but causes – what we focus on when providing brief explanations – are in significant measure up to us. Causes are negotiated as much as they are discovered.

What does this imply for how we should evaluate succinct causal explanations such as Dal Santo’s? Two recommendations come to mind.

First, a proposed cause might be ill-chosen because it has been selected from an underdeveloped causal history. To determine whether we should go along, we should try to understand the full causal context – a causal storyboard may be useful for this – and why the proposed factor has been selected as the cause.

Second, we should be aware that causal explanation can itself be a political act. Smoking-related lung cancer might be said to be caused by tobacco companies, by cigarette smoke, or by smokers’ free choices, depending on who is doing the explaining, to whom, and why. Causal explanation seems like the uncovering of facts, but it may equally be the revealing of agendas.


A colleague, Todd Sears, recently wrote:

I thought I’d write to let you know that I used an argument map last night to inform a public conversation about whether to change our school budget voting system from what it is (one meeting and you have to be physically present to vote), to the (of all things!) Australian Ballot system (secret ballot, polls open all day, and absentee ballots available).

So, I went through the articles, editorials, and opinion pieces I could find on the matter and collapsed those into a pretty simple argument, which it is. Simple reasoning boxes get the job done.  Our voters had never really seen this kind of visualization.  It’s nice to be able to see an argument exist in space, and to demonstrate by pointing and framing that a “yea” vote needs to buy into the green points, but also that they need to reconcile the red points, somehow. It had very good response.

Ultimately, the AB motion was defeated by five votes.  Still, it was a good example of a calm, reasonable, and civil dialogue.  A nice change from the typical vitriol and partisan sniping.

Here is his map (click to view full size version):

[Image: Bethelmap]

When I suggested that readers of this blog might find his account interesting or useful, he added:

Let me clarify what I did because it wasn’t a classic facilitation.

1. I reviewed all of the on-line Vermont-centric AB content I could find in the more reputable news sources, and put a specific emphasis on getting the viewpoints of the more vociferous anti-AB folks in my town so that I could fairly represent them.

2. I created a map from that information and structured it in a way that spread out the lines of reasoning in an easily understandable way. I could have done some further abstraction and restructured things, or made assumptions explicit using the “Advanced” mode, but chose to focus on easily recognized reasoning chains.

3. I sent the map out to the entire school board, the administrators, a couple of politicians, the anti-AB folks and some of the other more politically engaged people in town.

4. The session was moderated by the town Moderator, who set out Robert’s Rules of Order. Then discussion began. In fact, the first anti-AB speaker had my map in his hand and acknowledged the balance and strength of both sides of the argument.

5. I let the session run its course, and then explained what I did and how I did it, and then reviewed the Green and Red lines of the debate, explaining that a vote for or against means that the due diligence has to be done in addressing the points counter to your own position, and I demonstrated how this should be done. Though I was in favor of AB, I maintained objectivity and balance, rather than a position of advocacy one way or another.

Overall the session was very civil, informed, and not one point was made (myriad rhetorical flourishes aside) that was not already on the map. Many variations on similar themes, but nothing that hadn’t been captured.

And followed up with:

BTW, just 30 minutes ago I received an e-mail which said this:

Hi,

I love the map of the issues around Australian Ballot that you sent out. Is there an easy way to make such a map? We are tackling some issues that have our faculty at Randolph Union High School pretty evenly split and I think two such maps would be a powerful way for my colleague and I who are leading this change to communicate. It looks as if it was created in PowerPoint. If you are too busy to elaborate that’s fine too.

Thanks for your leadership on the Australian Ballot issue. I appreciate it.


Well-known anti-theist Sam Harris has posted an interesting challenge on his blog.  He writes:

So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view. 

In the previous post on this blog, Seven Habits of Highly Critical Thinkers, habit #3 was Chase Challenges.  If nothing else, Harris’ post is a remarkable illustration of this habit.

The quality of his case is of course quite another matter.

I missed the deadline for submission, and although the book seems interesting enough, I haven’t read it and don’t intend to. So I will just make a quick observation about the quality of Harris’ argument as formulated.

In a nutshell, simple application of argument mapping techniques quickly and easily shows that Harris’ argument, as stated by Harris himself on the challenge blog page, is a gross non-sequitur, requiring, at a minimum, multiple additional premises to bridge the gap between his premises and his conclusions.  In that sense, his argument as stated is easily shown to be seriously flawed.

Here is how Harris presents his argument:

1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

This formulation is short and clear enough that creating a first-pass argument map in Rationale is scarcely more than drag and drop:

[Figure: first-pass Rationale argument map of Harris’ argument]

Now, as explained in the second of the argument mapping tutorials, there are some basic, semi-formal constraints on the adequacy of an argument as presented in an argument map.

First, the “Rabbit Rule” decrees that any significant word or phrase appearing in the contention of an argument must also appear in at least one of the premises of that argument.  Any significant word or phrase appearing in the contention but not appearing in one of the premises has suddenly appeared out of thin air, like the proverbial magician’s rabbit, and so is informally called a rabbit.  Any argument with rabbits is said to commit rabbit violations.

Second, the Rabbit Rule’s sister, the “Holding Hands Rule,” decrees that any significant word or phrase appearing in one of the premises must appear either in the contention, or in another premise.

These rules are aimed at ensuring that the premises and contention of an argument are tightly connected with each other.  The Rabbit Rule tries to ensure that every aspect of what is claimed in the contention is “covered” in the premises.  If the Rabbit Rule is not satisfied, the contention is saying something which hasn’t been even discussed in the premises as stated.  (Not to go into it here, but this is quite different from the sense in which, in an inductive argument, the contention “goes beyond” the premises.) The Holding Hands Rule tries to ensure that any concept appearing in the premises is doing relevant and useful work.
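The two rules lend themselves to a rough mechanical check. Here is a sketch (mine, not Rationale’s) in which the slippery notion of a “significant word” is crudely approximated as “any word not in a small stopword list”; a real check would of course need human judgment about phrases and synonyms:

```python
# Rough mechanical sketch of the Rabbit and Holding Hands checks.
# "Significant word" is crudely approximated as any non-stopword;
# this is an illustration, not how Rationale itself works.

STOPWORDS = {
    "a", "an", "the", "and", "or", "of", "in", "on", "to", "that",
    "this", "these", "is", "are", "be", "must", "have", "some",
    "with", "by", "for", "not", "it", "its", "can", "such", "will",
}

def significant_words(text: str) -> set[str]:
    """Lower-case the text, strip punctuation, drop stopwords."""
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return {w for w in cleaned.split() if w not in STOPWORDS}

def check_argument(contention: str, premises: list[str]):
    """Return (rabbits, danglers) for a single basic argument.

    Rabbits: significant words in the contention appearing in no premise
    (Rabbit Rule violations).  Danglers: significant words in a premise
    appearing neither in the contention nor in any other premise
    (Holding Hands Rule violations).
    """
    contention_ws = significant_words(contention)
    premise_ws = [significant_words(p) for p in premises]
    all_premise_ws = set().union(*premise_ws) if premise_ws else set()

    rabbits = contention_ws - all_premise_ws
    danglers = set()
    for i, ws in enumerate(premise_ws):
        others = set().union(*(premise_ws[:i] + premise_ws[i + 1:])) \
            if len(premise_ws) > 1 else set()
        danglers |= ws - contention_ws - others
    return rabbits, danglers

# A gap of the kind discussed below is flagged immediately
# (paraphrased premises and contention, for illustration only):
rabbits, danglers = check_argument(
    "Questions of morality fall within the purview of science",
    ["Questions of morality concern natural phenomena",
     "Natural phenomena are fully constrained by the laws of the universe"],
)
print(sorted(rabbits))   # ['fall', 'purview', 'science', 'within']
```

Any word the checker flags is only a candidate violation, to be assessed by eye; but the point stands that the rules are semi-formal and checkable, not a matter of taste.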

Consider then the basic argument consisting of Contention 1 and the premises beneath it.   It is obvious on casual inspection that much – indeed most – of what appears in Contention 1 does not appear in the premises.  Consider for example the word “purview”, or the phrase “falls within the purview of science”.  These do not appear in the premises as stated. What does appear in Premise 2 is “natural phenomena, fully constrained by the laws of the universe”.  But as would be obvious to any philosopher, there’s a big conceptual difference between these.

What Harris’ argument needs, at a very minimum, is another premise.  My guess is that it is something like “Anything fully constrained by the laws of the universe falls within the purview of science.”   But two points.  First, this suggested premise obviously needs (a) explication, and (b) substantiation.  In other words, Harris would need to argue for it, not assume it. Second, it may not be Harris’ preferred way of filling the gaps (one of them, at least) between his premises and his conclusion.  Maybe he’d come up with a different formulation of the bridging premise.  Maybe he addresses this in his book.

It would be tedious to list and discuss the numerous Rabbit and Holding Hands violations present in the two basic arguments making up Harris’ two-step “proof”.   Suffice it to say that if both Rabbit Rule and Holding Hands Rule violations are called “rabbits” (we also use the term “danglers”), then his argument looks a lot like the famous photo of a rabbit plague in the Australian outback:

[Photo: rabbit plague in the Australian outback]

Broadly speaking, fixing these problems would require quite a bit of work:

  • refining the claims he has provided
  • adding suitable additional premises
  • perhaps breaking the overall argument into more steps.

Pointing this out doesn’t prove that his main contentions are false.  (For what little it is worth, I am quite attracted to them.)  Nor does it establish that there is not a solid argument somewhere in the vicinity of what Harris gave us. It doesn’t show that Harris’ case (whatever it is) for a scientific understanding of morality is mistaken.  What it does show is that his own “flagship” succinct presentation of his argument (a) is sloppily formulated, and (b) as stated, clearly doesn’t establish its contentions.   In short, as stated, it fails.  Argument mapping reveals this very quickly.

Perhaps this is why, in part, there is so much argy-bargy about Harris’ argument.

Final comment: normally I would not be so picky about how somebody formulated what may be an important argument.  However in this case the author was pleading for criticism.

