Archive for the ‘Critical Thinking’ Category

A common decision making trap is thinking more data = better decision – and so, to make a better decision, you should go out and get more data.  

Let’s call this the datacentric fallacy.  

Of course there are times when you don’t have enough information, when having more information (of the right kind) would improve the decision, and when having some key piece of information would make all the difference.  

Victims of datacentrism, however, reflexively embark on an obsessive search for ever more information. They amass mountains of material in the hope that they'll stumble across some critical piece, or critical mass, that will suddenly make clear what the right choice is. But they are usually chasing a mirage.

In their addiction to information, what they’re neglecting is the thinking that makes use of all the information they’re gathering.  

As a general rule, quality of thinking is more important than quantity of data.  Which means that you’ll usually be better rewarded by putting any time or energy you have available for decision making into quality control of your thinking rather than searching for more/better/different information.

Richards Heuer made this point in his classic Psychology of Intelligence Analysis.  Indeed he has a chapter on it, called Do You Really Need More Information? (Answer – often, no.  In fact it may hurt you.) 

A similar theme plays out strongly in Phil Rosenzweig's The Halo Effect… and the Eight Other Business Delusions That Deceive Managers. Rosenzweig provides a scathing critique of business “classics” such as In Search of Excellence, Good to Great and Built to Last, which purport to tell you the magic ingredients for success.

He points out how, in such books, the authors devote much time and effort to boasting about the enormous amount of research they've done, and the vast quantities of data they've utilised, as if the sheer weight of this information will somehow put their conclusions beyond question.

Rosenzweig points out that it doesn’t matter how much data you’ve got if you think about it the wrong way.  And think about it the wrong way they did, all being victims of the “halo effect” (among other problems).  In these cases, they failed to realise that the information they were gathering so diligently had been irretrievably corrupted even before they got to it.  

Another place you can find datacentrism running rampant is in the BI or “business intelligence” industry. These are the folks who sell software systems for organising, finding, massaging and displaying data in support of business decision making. BI people tend to think decisions fall automatically out of data, and so that presenting more and more data in ever prettier ways is the path to better decision making.

Stephen Few, in his excellent blog Visual Business Intelligence, has made a number of posts taking the industry to task for this obsession with data at the expense of insightful analysis.  

The latest instance of datacentrism to come my way is courtesy of the Harvard Business Review. I've been perusing this august journal in pursuit of the received wisdom about decision making in the business world. In a recent post, I complained that the 2006 HBR article How Do Well-Run Boards Make Decisions? told us nothing very useful about how well-run boards make decisions.

I was hoping to be more impressed by the 2006 article The Seasoned Executive’s Decision Making Style.  The basic story here is that decision making styles change as you go up the corporate ladder, and if you want to continue climbing that ladder you’d better make sure your style evolves in the right way.  (Hint: become more “flexible.”) 

In a sidebar, the authors make a datacentric dash to establish the irrefutability of their conclusions:

For this study, we tapped Korn/Ferry International’s database of detailed information on  more than 200,000 predominantly North American executives, managers, and business professionals in a huge array of industries and in companies ranging from the Fortune 100 to startups. We examined educational backgrounds, career histories, and income, as well as standardized behavioral assessment profiles for each individual. We whittled the database down to just over 120,000 individuals currently employed in one of five levels of management from entry level to the top.  We then looked at the profiles of people at those five levels of management. This put us in an excellent position to draw conclusions about the behavioral qualities needed for success at each level and to see how those qualities change from one management level to another.

120,000.  Wow. 

They continue:

These patterns are not flukes. When we computed standard analyses of variance to determine whether these differences occurred by chance, the computer spit out nothing but zeroes, even when the probability numbers were worked out to ten decimal points.  That means that the probability of the patterns occurring by chance is less than one in 10 billion. Our conclusion: The observed patterns come as close to statistical fact (as opposed to inference) as we have ever seen.

This seems too good to be true.   Maybe their thinking is going a bit off track here?  

I ran the passage past a psychologist colleague who happens to be a world leader in statistical reform in the social sciences, Professor Geoff Cumming of La Trobe University. I asked for his “statistician's horse sense” concerning these impressive claims. He replied [quoted here with permission]:

P-value purple prose! I love it!

Several aspects to consider. As you know, a p value is Prob(the observed result, or one even more extreme, will occur|there is no true effect). In other words, the conditional prob of our result (or more extreme), assuming the null hypoth is true.

It’s one of the commonest errors (often made, shamefully, in stats textbooks) to equate that conditional prob with the prob that the effect ‘is due to chance’. The ‘inverse probability fallacy’. The second last sentence is a flamboyant statement of that fallacy. (Because it does not state the essential assumption ‘if the null is true’.)

An extremely low p value, as the purple prose is claiming, often in practice (with the typical small samples used in most research) accompanies a result that is large and, maybe, important. But it no way guarantees it. A tiny, trivial effect can give a tiny p value if our sample is large enough. A ‘sample’ of 120,000 is so large that even the very tiniest real effect will give a tiny p. With such large datasets it’s crazy even to think of calculating a p value. Any difference in the descriptive statistics will be massively statistically significant. (‘statistical fact’)

Whether such differences are large, or important, are two totally different issues, and p values can’t say anything about that. They are matters for informed judgment, not the statistician. Stating, and interpreting, any differences is way more important than p-p-purple prose! 

So their interpretation of their data – at least, its statistical reliability – amounts to a “flamboyant statement” of “one of the commonest errors.” Indeed, according to Geoff, it was “crazy even to think of” treating their data this way.

The bulk of their article talks about the kinds of patterns they found, and maybe their main conclusions hold up despite the mauling of the statistics.  Maybe.   Actually I suspect their inferences have even more serious problems than committing the inverse probability fallacy – but that’s a topic for another time.  
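
Cumming's point about huge samples is easy to see in a quick simulation. Here is a minimal sketch in Python; the numbers (group means, spread, even the choice of a t test) are invented for illustration and have nothing to do with the Korn/Ferry data. Two groups are given a true difference far too small to matter in practice, yet with 60,000 cases per group the p value is zero to ten decimal places.

```python
# A trivially small true difference plus a huge sample yields a vanishingly
# small p value. Illustrative numbers only; this is not the HBR/Korn-Ferry data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60_000                                          # per group, ~120,000 in total
group_a = rng.normal(loc=50.0, scale=10.0, size=n)  # e.g. some behavioural score
group_b = rng.normal(loc=50.8, scale=10.0, size=n)  # true difference of 0.8 where the SD is 10

t, p = stats.ttest_ind(group_a, group_b)
effect_size = (group_b.mean() - group_a.mean()) / 10.0   # Cohen's d, using the known SD

print(f"p value (10 dp): {p:.10f}")           # prints 0.0000000000
print(f"effect size (d): {effect_size:.2f}")  # about 0.08 of a standard deviation: trivial
```

The p value only says that some nonzero difference exists; it says nothing about whether that difference is large enough to care about, which is exactly Cumming's complaint.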

In sum, beyond a certain point, the sheer volume of your data or information matters much less than thinking about it soundly and insightfully.  Datacentrism, illustrated here, is a kind of intellectual illness which privileges information gathering – which is generally relatively easy to do – over thinking, which is often much harder.

Read Full Post »

This evening I was fortunate* to meet Greg Hunt, Federal Shadow Minister for Climate Change, Environment and Water. I mentioned how in 2003 I had used an opinion piece he had written in an exercise for undergraduate students. The exercise involved creating a map of his argument. He was, naturally, curious to see what such a map would look like.

* Update in 2013 – I’m now a bit embarrassed to have written that.  Greg Hunt has turned out to be (if indeed he wasn’t all along) the worst kind of politician, using lies and spin to defend the morally indefensible in craven pursuit of political power.

Background: In the lead-up to the (second) Iraq war, one of the hot topics of debate was whether the proposed invasion was legal in international law. In February 2003, a group of 43 Australian legal heavyweights published Coalition of the Willing? Make that War Criminals, arguing bluntly that the war would be illegal and that its architects (Bush, Howard, Blair) would be war criminals. One of the ringleaders in this piece was Hilary Charlesworth, who had been one of Greg Hunt's teachers at the University of Melbourne Law School.

Greg Hunt took on the task of responding publicly.  In March 2003 he published Yes, This War is Legal.

At the time I was teaching critical thinking in the Faculty of Arts at the University of Melbourne, using the method we had developed there, which was heavily based on argument mapping and required lots of practice mapping “real world” arguments. I could think of no topic more timely, contentious and important than the legality of the upcoming war – and conveniently we had 800-word presentations of the arguments on each side. So it made an ideal exercise in which these “best and brightest” young students could try out their emerging argument mapping skills.

For the record, I found that these students, among the most elite in the Australian educational system, were, for the most part, unable to ascertain the actual structure of the arguments presented on either side, even after having had many weeks of argument mapping training. They could get a rough sense of the arguments, but discerning the precise logical shape demanded considerably more expertise than they had at that time. Consequently, they were unable to properly evaluate the arguments; most ended up siding with the position they already favoured at an emotional or ideological level. This is just an illustration of a quite general phenomenon: on matters of any complexity, the actual arguments are simply not comprehensible to most people. And this of course is in large part because our standard means of presenting those arguments (e.g., in 800-word written opinion pieces in the newspaper) pose immense interpretative challenges. The problem is not so much that people are stupid, but that the task given to them is far too difficult.

Anyway, here, in bCisive 2 format, is my own rendition, in argument-map form, of Greg Hunt's case:

[Argument map image: hunt_war_legal2 – click to view the full-size version]

I’m not endorsing this argument.  What the map does is lay it out transparently, which lays the foundation for careful critique.  You can see at a glance such basic features as how many lines of argument there are; which points have been supported, and which merely asserted; where key assumptions lurk, waiting to be exposed; and so on.

Read Full Post »

Over the Xmas break various family members were engaging in an interesting conversation whose starting point was the way many people are excessively, indeed sometimes hysterically, concerned about the dangers of asbestos fibres from nearby demolitions or renovations.

The background theme was how poorly people understand risks, especially small risks, and how they misplace their anxieties about risks.

I suggested that the emotional energy people put into obsessing about floating asbestos fibres would be better invested doing something about much larger dangers such as, say… global warming.

To put things in a bit of perspective, consider:

“This end-Permian extinction is beginning to look a whole lot like the world we live in right now,” Payne said. The end-Permian extinction (mentioned in an example in the previous post on this blog) was a catastrophe 250 million years ago in which the great majority of land and marine species were eliminated.

Payne is “assistant professor of geological and environmental sciences at Stanford University… a paleobiologist who joined the Stanford faculty in 2005, studies the Permian-Triassic extinction and the following 4 million years of instability in the global carbon cycle.”

Payne says: “The good news, if there is good news, is that we have not yet released as much carbon into the atmosphere as would be hypothesized for the end-Permian extinction. Whether or not we get there depends largely on future policy decisions and what happens over the next couple of centuries.”

See The Day the Seas Died: What Can the Greatest of All Extinction Events Teach Us About Climate Change?

Read Full Post »

The current issue of Choice Magazine (the Australian “Consumer Reports”) has a report on cheddar cheese.

They had five experts blindly rate 28 cheddar cheeses, ranging from your cloth- or wax-wrapped special deli cheddar at $50+ per kilo down to the supermarket brands, sometimes less than $10 per kilo.

Eyeballing the results table, it seemed that price wasn’t a reliable guide to quality – some good cheeses were quite cheap and vice versa.

In the results table, they listed overall quality (score out of 20) and price per kg. They didn’t offer a “value for money” rating, so I copied the table into Excel and had it compute “value for money” as quality divided by price.

Now that the data was in Excel, we could probe a little further.

Turns out the correlation between quality and price was -.05. In other words, the quality of the cheese you buy, on average, has virtually nothing to do with price. If anything, as you go up in price, it gets worse.

Consequently, the correlation between price and value for money was abysmal: -.8. In other words, on average, the more you pay, the more you're getting ripped off.

Some cheeses had long names with lots of fancy-sounding words, such as “Devondale Special Reserve Premium Aged Vintage.” That must be a good cheese, right?

I used Excel to count the characters in a cheese’s name. Running the correlations showed that length of name bears little if any relation to price, quality, or value for money.
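
If you'd rather not use Excel, the whole exercise is a few lines of Python. This is only a sketch; the rows below are invented placeholders, not Choice's actual figures.

```python
# Value for money = quality / price, then simple correlations.
# The rows are invented placeholders, not the Choice magazine data.
import pandas as pd

cheeses = pd.DataFrame({
    "name":    ["Special Reserve Premium Aged Vintage", "Plain Supermarket Cheddar",
                "Deli Clothbound Cheddar", "Budget Block"],
    "quality": [14.0, 15.5, 13.0, 12.5],   # expert score out of 20
    "price":   [52.0,  9.0, 45.0,  8.0],   # dollars per kilo
})

cheeses["value"] = cheeses["quality"] / cheeses["price"]   # quality per dollar
cheeses["name_length"] = cheeses["name"].str.len()         # length of the cheese's name

# Correlation matrix: quality vs price, price vs value, name length vs everything
print(cheeses[["quality", "price", "value", "name_length"]].corr().round(2))
```

Paste in the real table and the correlations reported above drop straight out of that final matrix.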

Conclusions: buying cheddar cheese is a lottery. If you haven’t tasted the cheeses, and are just trying to guess which ones are good, ignore price and fancy names; these have nothing to do with quality. If you want value for money, go for the cheaper cheese.

In short: when buying cheddar cheese in Australia, it just isn’t true that “you get what you pay for.”

PS – the cheese I’ll buy: South Cape Vintage Black Label. Nearly the top in quality, but only $15 a kilo.

Read Full Post »

Peter Tillers discusses why DNA can never be regarded, on its own, as conclusive evidence of guilt or innocence. 

This post makes me wonder about the possibility of a kind of schematic argument map showing how the argument from, say, a DNA match to guilt would have to go in some more-or-less general version. This map would display the numerous inferential steps, assumptions, etc. – i.e., the numerous points at which the inference might fail.

John Burns, who at the time was quite senior in the Hong Kong police and had experience in training detectives, proposed this kind of idea in his master's dissertation. He called them “pre-structured argument maps”. You would have such a map for each typical situation in which a detective might be trying to make the case for guilt, e.g., one for shoplifting. The pre-structured map would embody (a) a good understanding of the overall structure of the case that would have to be made out, and (b) the accumulated wisdom of experienced detectives as to all the bases that need to be covered – e.g., the detective would have to have evidence to rebut the defendant's claim that he already owned the item.

Then, rather than piecing together a case (whether in argument-map form or a more traditional format) from scratch, the detective would check off the various aspects of the case on the pre-structured map, removing parts which are inapplicable to the particular situation, and so on. Along the way the detective would be learning what a good case looks like, being exposed to the myriad ways in which the case might be defeated by a clever lawyer, etc.
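
To make the idea concrete, here is a minimal sketch of how a pre-structured map might be represented in software. The fields and the example claims are my own invention for illustration, not anything drawn from Burns's dissertation.

```python
# A toy "pre-structured argument map" template for a shoplifting case.
# Structure and claims are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str                                      # the point that must be made out
    rebuttals: list = field(default_factory=list)   # known ways a lawyer might attack it
    evidence: list = field(default_factory=list)    # evidence gathered so far
    applicable: bool = True                         # switch off parts that don't apply

    def covered(self) -> bool:
        return (not self.applicable) or bool(self.evidence)

shoplifting_template = [
    Node("Suspect took the item from the store",
         rebuttals=["item was never in the suspect's possession"]),
    Node("Suspect did not pay for the item",
         rebuttals=["a receipt exists"]),
    Node("Suspect did not already own the item",
         rebuttals=["defendant claims prior ownership"]),
]

# The detective works through the template, attaching evidence
# and seeing at a glance which bases remain uncovered.
shoplifting_template[0].evidence.append("CCTV footage, aisle 3")
for node in shoplifting_template:
    status = "covered" if node.covered() else "STILL TO DO"
    print(f"{status}: {node.claim}")
```

Nothing here is more than a structured checklist, but that is largely the point: the accumulated wisdom lives in the template, and the detective's job becomes filling it in rather than reinventing it.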

Read Full Post »

On the news tonight there was coverage of protests in Washington against the Iraq war. There was a soundbite of an Iraq veteran saying “You can’t support the troops and oppose the war, because the troops support the war.”

These thoughts flashed through my mind in quick succession:

  1. Argument.
  2. Argument, very concisely expressed.
  3. Bad argument.
  4. Bad, but interesting.

Why interesting? Well, consider what this fellow must be assuming. Put another way, what co-premise would, if true, make this argument strong?

[Argument map image: support_troops.jpeg – the veteran's argument as stated]

Presumably something like, “if you support somebody, you have to support what they support”.

This is similar to the technical notion of transitivity: if A supports B and B supports C, then A supports C. Conversely, if A doesn’t support C, then A doesn’t really support B.

So we get:

[Argument map image: support_troops-2.jpeg – the argument with the assumed co-premise made explicit]

Now it seems to me that this assumption is obviously wrong as a general principle. For example, I can support my child without thereby being obliged to adopt whatever ill-considered attitudes they might hold.

From a “critical thinking” perspective, the argument is really a “fallacy of equivocation” – i.e., an argument that is fallacious because it “equivocates” on a key term, using that term in different senses in different places.

The term “support” means one thing when you talk about supporting the troops, and another thing when you talk about supporting or opposing the war.
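
Put semi-formally (this rendering is mine, not anything stated in the soundbite), the needed co-premise is a transitivity principle, and once the two senses of “support” are separated the inference no longer goes through:

```latex
% S_b(x,y): x backs y   (loyalty to people, as in "supporting the troops")
% S_e(x,y): x endorses y (agreement with a policy, as in "supporting the war")
\begin{align*}
\text{Assumed co-premise: } & S(A,B) \land S(B,C) \rightarrow S(A,C)\\
\text{Disambiguated: }      & S_b(\text{you},\text{troops}) \land
  S_e(\text{troops},\text{war}) \not\rightarrow S_e(\text{you},\text{war})
\end{align*}
```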

But there is a deeper issue here – the idea that allegiance to a group requires allegiance to the beliefs of that group. Something profound (and presumably of evolutionary origin) in the human psyche makes us tend this way. Many human organisations promote the idea and owe their continued existence to its power. But it is of course a dangerous idea.

Read Full Post »

“Just Some Guy” wrote today:

I recently stumbled on an excellent online article authored by yourself entitled “Teaching Critical Thinking“. I was wondering if you could take a moment of your valuable time to suggest a couple of books on the subject. I would like to improve my critical thinking skills so I suppose the focus sought would be adult learner skill(s) acquisition with emphasis on techniques and (lots of) practice. I have been trying to develop said skills on my own (without much success). I would really like to find a proven program to apply. As you know there is tons of information available online however I am getting lost trying to sort out all the wheat from chaff. Thank you in advance for your consideration.

I used to be a regular academic, and one reason for heading off in a different direction was the experience most academics know all too well, which is that you'll slave for months on a paper, have it published, and then… nothing happens. It seems you may as well not have bothered. So it is gratifying when some paper you wrote, and which seemed to have vanished without a trace, starts to get picked up, read, and perhaps even appreciated. In the case of the paper mentioned above, in the past month I've heard that it is the subject of a faculty discussion group at the University of Pittsburgh (where I did my PhD), and that it is being read by administrators at a startup university campus in Singapore. Now it seems to have helped Just Some Guy. Maybe it was worth the effort that went into it.

Anyway, regarding JSG's query, in workshops I used to hand out a brief annotated “further reading” list. Here it is:

There are hundreds of books on thinking and how to improve it, ranging from airport junk to turgid academic treatises. Here is a short list of some of the best, focusing on critical thinking. All are accessible, entertaining, and contain many valuable insights. Listed in alphabetical order, so don’t necessarily start at the top.

Cialdini, R. B. (1984). Influence: The Psychology of Persuasion. New York: William Morrow and Co. Classic, eye-opening description of the tricks, ruses and deceptions others use to manipulate us into doing what they want.

Giere, R. N. (1996). Understanding Scientific Reasoning (4th ed.). Fort Worth: Holt, Rinehart and Winston, Inc. Very clear overview of the fundamentals of scientific reasoning. Basic literacy in scientific methodology.

Heuer, R. J. (1999). Psychology of Intelligence Analysis. Center for the Study of Intelligence, CIA. Although intended primarily to assist intelligence analysts, there is a lot of good stuff here, on both the descriptive (how our minds work) and normative (rules for better thinking) sides. Plus, available free online!

Kepner, C. H., & Tregoe, B. B. (1997). The New Rational Manager. Princeton: Princeton Research Press. These are the people who first brought “critical thinking” to the business world and built out of it a multinational consulting firm. Very practical orientation.

Minto, B. (1996). The Minto Pyramid Principle: Logic in Writing, Thinking and Problem Solving.  Minto Books International Limited (www.barbaraminto.com). [Note: this edition supersedes the earlier edition, published by Pearson.]  Barbara Minto was a McKinsey consultant and editor; this book is now the “Bible” in this area for major consulting firms. Some profound truths about good thinking and communication, cast in a way which makes sense for folks in the business community.

Myers, D. G. (2002). Intuition: Its Powers and Perils. New Haven: Yale University Press. “Europe in ten days” tour of the ways intuitive thinking can go wrong, according to serious psychologists. Pretty exhaustive coverage, but most of it will just wash over you.

Paul, R. W., & Elder, L. (2002). Critical Thinking: Tools for Taking Charge of Your Professional and Personal Life. Upper Saddle River, New Jersey: Financial Times Prentice Hall. Paul and Elder are prominent critical thinking instructors. This book packages their insights as practical tools for personal and professional life. Stresses psychological and ethical issues, though often becomes a bit too “pop psychology”.

Piattelli-Palmarini, M. (1994). Inevitable Illusions: How Mistakes of Reason Rule our Minds. New York: Wiley. Very readable introduction to some of the most famous cognitive biases and blindspots. More diagnosis than therapy.

Salmon, M. (1989). Introduction to Logic and Critical Thinking (2nd ed.). San Diego: Harcourt Brace Jovanovich. The best of the standard undergraduate textbooks. A bit dull, but very sound.

Spence, G. (1995). How to Argue and Win Every Time. New York: St. Martin’s Press. Written by a criminal attorney who (according to the dust jacket) never lost a case. If you can look beyond the very “American” style, there is much wisdom here. It is a treatise in the art of rhetoric, but it is principled rhetoric rather than mendacious sophistry.

Whyte, J. (2004) Crimes Against Logic. McGraw-Hill. A short introduction to “fallacies,” i.e., common patterns of bad reasoning. Whyte runs through about a dozen, but there are dozens of others. Witty, fast-moving and brief.

Read Full Post »
