Archive for the ‘Research’ Category

I’ve had the following abstract accepted for presentation at a conference in December at the University of Melbourne: Higher Education Research & the Student Learning Experience in Business.

A Pragmatic Definition of Critical Thinking for Business

This presentation will lay out a pragmatic definition of critical thinking.  It doesn’t purport to be the definitive characterization of what critical thinking is. Rather, it is offered as a convenient framework for understanding the nature and scope of critical thinking, which may be useful for purposes such as developing a dedicated subject in critical thinking for business, improving the teaching of critical thinking within existing subjects, or evaluating the effectiveness of a business course in developing critical thinking.

The definition is constructed around five commitments:

    • First, the essence of critical thinking is correct or accurate judgement. That is, to think critically is to think in ways that are conducive to being “more right more often” when making judgements.
    • Second, “being more right more often” can be achieved through the skillful application of general thinking methods or techniques.
    • Third, these techniques range on a spectrum from the simple and easily acquired to technical methods which require special training.
    • Fourth, for all but the simplest of methods, there are degrees of mastery in application of these techniques.
    • Fifth, there are many different kinds of judgements made in business, including decision making, prediction, estimation, (causal) explanation, and attribution of responsibility. For each major type of judgement, there are typical pitfalls, and a range of critical thinking methods which can help people avoid or compensate for those pitfalls.

These commitments enable us to define a kind of three-dimensional chart representing the critical thinking competency of any individual. Along one (categorical) axis are the various kinds of judgements (decision making, etc.). Another axis represents the spectrum from simple through to advanced critical thinking methods. Particular methods can then be placed in appropriate “boxes” in the grid defined by these axes. A person will have a degree of mastery of the methods in each box; this can be represented on a third dimension. A person’s critical thinking competency is thus a distinctive “landscape” formed by the varying levels of mastery.
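
As a rough illustration of this structure – a minimal sketch of my own, with invented judgement types, levels and scores, not anything from the presentation itself – the chart amounts to a mapping from (judgement type, method level) pairs to degrees of mastery:

```python
# Hypothetical encoding of the "competency landscape" described above.
JUDGEMENT_TYPES = ["decision making", "prediction", "estimation",
                   "causal explanation", "attribution of responsibility"]
METHOD_LEVELS = ["simple", "intermediate", "technical"]  # the spectrum axis

# One person's landscape: a mastery score (0-1) for each "box" in the grid.
competency = {
    (judgement, level): 0.0
    for judgement in JUDGEMENT_TYPES
    for level in METHOD_LEVELS
}

# e.g., someone fluent with simple decision-making methods but untrained
# in technical forecasting techniques:
competency[("decision making", "simple")] = 0.9
competency[("prediction", "technical")] = 0.1

for (judgement, level), mastery in sorted(competency.items()):
    print(f"{judgement:30s} {level:12s} {mastery:.1f}")
```

The “landscape” is then just the surface formed by these mastery values over the two-dimensional grid.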

This characterisation tailors, for business, a more general pragmatic approach to understanding critical thinking. About a year ago I developed this approach in preparation for a workshop in the US on the development of a test of critical thinking for intelligence analysts; my role in the workshop was to lay out a general framework for understanding what critical thinking is. That approach was described in a manuscript, Dimensions of Critical Thinking.


I’m also supporting a team from the University of Sydney Business School, who have had the following abstract accepted:

Evaluating critical thinking skill gains in a business subject

Helen Parker, Leanne Piggott, Lyn Carson
University of Sydney Business School
Tim van Gelder
University of Melbourne and Austhink Consulting

Critical thinking (CT) is one of the most valued attributes of business school graduates, and many business school subjects claim to enhance it. These subjects frequently implement pedagogical strategies of various kinds aimed at improving CT skills. Rarely, however, are these efforts accompanied by any rigorous evaluation of CT skill gains. But without such evaluation, it is difficult to answer questions such as:

    • Are our students’ CT skills in fact improving? By how much?
    • Are those skills improving more than they would have anyway, without our special CT instruction?
    • Are the marginal gains worth the cost?
    • Are our attempts to improve our instruction from semester to semester making any difference?

These kinds of questions are particularly relevant to the University of Sydney Business School, which has an entire subject dedicated to improving CT (BUSS5000 – Critical Thinking in Business), enrolling some 800 students per semester. Consequently, in 2013, the Business School embarked on a large-scale, multi-year evaluation program. The evaluation is based on pre- and post-testing using an independent objective test (the Halpern Critical Thinking Assessment), whose coverage overlaps with the range of critical thinking skills taught in the subject. This presentation will give an overview of the approach the School has adopted. It will discuss some of the challenges and pitfalls in the testing process, and how to interpret the results. Finally, it will present data and insights from the first semester of full-scale evaluation. The session should be of interest to anyone interested in evaluating CT skills, or more generally in how business school education can enhance CT.
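
For the mechanically minded, here is a minimal sketch (with invented scores, not the Business School’s data) of the simplest analysis a pre/post evaluation rests on – mean gains and a standardised effect size:

```python
# Hypothetical pre/post scores for the same students on an objective CT test.
import numpy as np

pre  = np.array([62, 55, 70, 48, 66, 59])
post = np.array([68, 57, 75, 55, 66, 64])

gains = post - pre
d = gains.mean() / gains.std(ddof=1)  # standardised effect size for paired scores

print(f"mean gain: {gains.mean():.1f} points")
print(f"effect size (d): {d:.2f}")
```

Note that a raw gain score alone cannot answer the second question in the list above – whether skills improved more than they would have anyway – which requires some kind of comparison group.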

There’s an obvious complementarity between these two topics.


A common decision making trap is thinking that more data = better decisions – and so, to make a better decision, you should go out and get more data.

Let’s call this the datacentric fallacy.  

Of course there are times when you don’t have enough information, when having more information (of the right kind) would improve the decision, and when having some key piece of information would make all the difference.  

Victims of datacentrism, however, reflexively embark on an obsessive search for ever more information.  They amass mountains of material in the hope that they’ll stumble across some critical piece, or critical mass, that will suddenly make clear what the right choice is.  But they are usually chasing a mirage.

In their addiction to information, what they’re neglecting is the thinking that makes use of all the information they’re gathering.  

As a general rule, quality of thinking is more important than quantity of data.  Which means that you’ll usually be better rewarded by putting any time or energy you have available for decision making into quality control of your thinking rather than searching for more/better/different information.

Richards Heuer made this point in his classic Psychology of Intelligence Analysis.  Indeed he has a chapter on it, called “Do You Really Need More Information?” (Answer: often, no. In fact it may hurt you.)

A similar theme plays out strongly in Phil Rosenzweig’s The Halo Effect… and the Eight Other Business Delusions That Deceive Managers. Rosenzweig provides a scathing critique of business “classics” such as In Search of Excellence, Good to Great and Built to Last, which purport to tell you the magic ingredients for success.

He points out that in such books the authors devote much time and effort to boasting about the enormous amount of research they’ve done, and the vast quantities of data they’ve utilised, as if the sheer weight of this information will somehow put their conclusions beyond question.

Rosenzweig points out that it doesn’t matter how much data you’ve got if you think about it the wrong way.  And think about it the wrong way they did, all being victims of the “halo effect” (among other problems).  In these cases, they failed to realise that the information they were gathering so diligently had been irretrievably corrupted even before they got to it.  

Another place you can find datacentrism running rampant is in the BI or “business intelligence” industry.  These are the folks who sell software systems for organising, finding, massaging and displaying data in support of business decision making.  BI people tend to think that decisions fall automatically out of data, and so that presenting more and more data in ever-prettier ways is the path to better decision making.

Stephen Few, in his excellent blog Visual Business Intelligence, has written a number of posts taking the industry to task for this obsession with data at the expense of insightful analysis.

The latest instance of datacentrism to come my way is courtesy of the Harvard Business Review.  I’ve been perusing this august journal in pursuit of the received wisdom about decision making in the business world.  In a recent post, I complained that the 2006 HBR article How Do Well-Run Boards Make Decisions? told us nothing very useful about how well-run boards make decisions.

I was hoping to be more impressed by the 2006 article The Seasoned Executive’s Decision Making Style.  The basic story here is that decision making styles change as you go up the corporate ladder, and if you want to continue climbing that ladder you’d better make sure your style evolves in the right way.  (Hint: become more “flexible.”) 

In a sidebar, the authors make a datacentric dash to establish the irrefutability of their conclusions:

For this study, we tapped Korn/Ferry International’s database of detailed information on  more than 200,000 predominantly North American executives, managers, and business professionals in a huge array of industries and in companies ranging from the Fortune 100 to startups. We examined educational backgrounds, career histories, and income, as well as standardized behavioral assessment profiles for each individual. We whittled the database down to just over 120,000 individuals currently employed in one of five levels of management from entry level to the top.  We then looked at the profiles of people at those five levels of management. This put us in an excellent position to draw conclusions about the behavioral qualities needed for success at each level and to see how those qualities change from one management level to another.

120,000.  Wow. 

They continue:

These patterns are not flukes. When we computed standard analyses of variance to determine whether these differences occurred by chance, the computer spit out nothing but zeroes, even when the probability numbers were worked out to ten decimal points.  That means that the probability of the patterns occurring by chance is less than one in 10 billion. Our conclusion: The observed patterns come as close to statistical fact (as opposed to inference) as we have ever seen.

This seems too good to be true.   Maybe their thinking is going a bit off track here?  

I ran the passage past a psychologist colleague who happens to be a world leader in statistical reform in the social sciences, Professor Geoff Cumming of La Trobe University.  I asked for his “statistician’s horse sense” concerning these impressive claims.  He replied [quoted here with permission]:

P-value purple prose! I love it!

Several aspects to consider. As you know, a p value is Prob(the observed result, or one even more extreme, will occur|there is no true effect). In other words, the conditional prob of our result (or more extreme), assuming the null hypoth is true.

It’s one of the commonest errors (often made, shamefully, in stats textbooks) to equate that conditional prob with the prob that the effect ‘is due to chance’. The ‘inverse probability fallacy’. The second last sentence is a flamboyant statement of that fallacy. (Because it does not state the essential assumption ‘if the null is true’.)

An extremely low p value, as the purple prose is claiming, often in practice (with the typical small samples used in most research) accompanies a result that is large and, maybe, important. But it in no way guarantees it. A tiny, trivial effect can give a tiny p value if our sample is large enough. A ‘sample’ of 120,000 is so large that even the very tiniest real effect will give a tiny p. With such large datasets it’s crazy even to think of calculating a p value. Any difference in the descriptive statistics will be massively statistically significant. (‘statistical fact’)

Whether such differences are large, or important, are two totally different issues, and p values can’t say anything about that. They are matters for informed judgment, not the statistician. Stating, and interpreting, any differences is way more important than p-p-purple prose! 

So their interpretation of their data – at least, of its statistical reliability – amounts to a “flamboyant statement” of “one of the commonest errors.” Indeed, according to Geoff, it was “crazy even to think of” treating their data this way.
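
Geoff’s point about sample size is easy to demonstrate. Here is a minimal sketch (simulated data, nothing to do with the Korn/Ferry database) showing that with around 120,000 cases, even a trivially small true difference produces a p value that is “nothing but zeroes”:

```python
# Two groups whose true means differ by a negligible 0.05 standard deviations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60_000  # per group; ~120,000 individuals in total

group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.05, scale=1.0, size=n)

t, p = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_b.mean() - group_a.mean()) / pooled_sd  # standardised effect size

print(f"p value:   {p:.1e}")   # astronomically small – "nothing but zeroes"
print(f"Cohen's d: {d:.3f}")   # ~0.05 – a trivial effect
```

The p value is effectively zero, yet the effect is negligible – exactly the gap between “statistical fact” and anything large or important.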

The bulk of their article talks about the kinds of patterns they found, and maybe their main conclusions hold up despite the mauling of the statistics.  Maybe.   Actually I suspect their inferences have even more serious problems than committing the inverse probability fallacy – but that’s a topic for another time.  

In sum, beyond a certain point, the sheer volume of your data or information matters much less than thinking about it soundly and insightfully.  Datacentrism, illustrated here, is a kind of intellectual illness which privileges information gathering – which is generally relatively easy to do – over thinking, which is often much harder.


Now available – the final version of my paper prepared in connection with the conference Graphic and Visual Representations of Evidence and Inference in Legal Settings in January this year.  The paper is now called The Rationale for Rationale™.


A new Rationale user working on a PhD thesis emailed the following:

I finished my comps in March and have been working to nail down my dissertation topic since. I have too many interests and little discipline so it’s been daunting. Notably, I sat down last week with rationale and decided to map out what I was thinking and feeling. I used the reasoning tools to nail down my main argument, the assertions I am inclined to make in support of that argument, and then what I know (or believe) supports those. Trying to not get bogged down, I next skipped to basis statements that helped me sort out which of these things I know are supported in the literature, which I need to do original logic on, which I need to test using a game model, and which I need to support using case studies. And finally – after months of circling, I went to the text panel and got the skeleton of a précis. Spent three more days cleaning up and thinking, and then as of this morning I sent those 4 pages off to a prospective adviser to start a conversation.

Rationale might also be useful later in the process, for articulating and evaluating what you take to be your core arguments.

If you’re writing a thesis, or some other elaborate piece of argumentative prose, then it’s a good idea to try mapping your arguments just to test whether you really know what they are.

If you actually have any substantial arguments, and if you are truly clear about what they are, mapping them should be a trivial exercise – just whacking claims into boxes and putting those boxes where they belong in the logical hierarchy.

However, it almost never is a trivial exercise.  We are, in fact, often quite deluded about the extent to which we really understand our own arguments.   Of course often we’re aware that we’re not fully on top of the arguments.  The more interesting point here is that, most of the time, when we think we know exactly what they are, we’re laboring under a kind of illusion of clarity.  There’s nothing like the demand to lay out the arguments in a map (well, a map observing the core principles of good argument mapping) to puncture the illusion.

The amount of effort you find you need to put in to get a tolerably good map of your arguments is a measure of the lack of clarity you have about those arguments.

(This assumes that you’re using a tool, like Rationale, which reduces to almost nothing the mechanics of producing an argument map diagram.)


At a number of universities around the world, people are now setting up studies to help determine the extent to which argument maps, or Rationale use, can help build skills or improve performance on difficult tasks.

One such person asked in an email:  “What is the average time for an adult learner to complete the building of an argument in Rationale?”

Unfortunately there is no simple answer to that question.  The time needed can vary enormously, depending on factors such as:

  • how complex is the argument?
  • are they coming up with their own argument (easier) or trying to map out an argument from a text written by somebody else (generally quite hard)?
  • how exhaustive and correct should the maps be?  Should all principles of argument mapping be properly observed?  Or is “anything which looks good enough to them” acceptable?

At one extreme, a simple argument of their own, done sloppily, should take only a minute or two.  But mapping an argument from, say, a journal article or opinion piece, and doing it properly, can take hours, even for somebody highly skilled.  And at the other extreme, mapping a truly complex body of argumentation can take months, even years.  Austhink is just completing an argument mapping assignment for a government department, looking at all the arguments surrounding a controversial major equipment purchase.  This has taken about four months with two people working on it.  Consider also the “mother of all maps,” Robert Horn’s Can Computers Think? series of maps, which took a team of people a number of years.

In practice, most non-specialists have only a finite appetite for mapping arguments, and limited capacity to apprehend deficiencies in their own work, and so are unlikely to spend more than, say, half an hour on any given task.  So, returning to the original question, here’s a very rough guess:

  • Simple tasks (e.g., coming up with a single-reason argument for a claim) – allow a few minutes per task
  • Complex tasks – allow around half an hour, or more


As mentioned a few posts ago, I’ve been resisting the temptation to write in this space, due to an academic paper demanding completion.

The paper is about Rationale, for a legal journal; here is the “table of contents”:

Rationale: A Generic Argument Mapping Tool
Introduction
1. Rationale Overview
2. Making Humans Smarter
2.1 Educational
2.2 Professional
3. Why Does It Work?
3.1 Usability
3.2 Complementarity
3.3 Semi-Formality
4. Conclusion: Rationale and Legal Reasoning

Here is the current draft of a section. Comments welcome.

3. Why Does It Work?

Assuming that Rationale really does (or at least can) make humans smarter, it is interesting to ask why this is so. What is it about Rationale, or argument mapping software more generally, that helps us reason more effectively? Because argument mapping is a new phenomenon, there has so far been little serious research in this area. We are only gradually developing an understanding of the relevant issues. At least three main themes are emerging: usability, complementarity, and semi-formality. These are not three independent explanations; they are better thought of as overlapping “takes” on how or why Rationale achieves its intended effect.

3.1 Usability

In a nutshell, the first claim is this: a tool like Rationale improves reasoning because it is highly usable for reasoning activities, or at least more usable than relevant alternatives. This is not simply the assertion that the software is “user friendly,” which usually means that the software is attractive to, and easily used by, naive users. Rather, the technical notion of usability concerns the degree to which a tool or system enables standard users to conduct their activities or achieve their goals effectively and efficiently, and perhaps also with some measure of pleasure or satisfaction. A usable tool may not be very user-friendly to naive users. A good analogy here is windsurfing. There are basically two kinds of windsurfing boards. Beginner boards are large, stable, and float with a person standing on them even when not moving. They are very “friendly” to windsurfing naifs. Regular or advanced boards are smaller, less stable and more nimble; they are very difficult for beginners to use but support a far better windsurfing experience for those who are competent. The assertion that Rationale is highly usable means primarily that, like the advanced windsurfing board, it enables people who are competent in the use of the tool to engage in reasoning activities more effectively, efficiently and satisfyingly.

The claim that Rationale improves reasoning because it is highly usable plays out differently in each of the two main contexts of use. In the educational context, Rationale’s usability for reasoning helps improve reasoning skills by enabling a student to do more practice, and practice of a better kind, than they can do using traditional techniques. Just as you can become a much more skilful windsurfer through lots of practice on an advanced board, so you can become a better reasoner through lots of use of a tool like Rationale, even if it takes some training to get “up to speed.” There is an important disanalogy, however. In windsurfing, the skill you acquire can only be deployed on a suitable board; whereas in reasoning, the skills you acquire are more generic and transferable, and can be deployed even without the software tool which enabled the development of those skills.

In the professional context, the claim that Rationale improves reasoning performance because it is highly usable is a tautology, since to be usable is, by definition, to enable better performance. However, invoking the notion of usability still helps because it points in a certain direction. Exploring the issue from a usability perspective can help us better understand why and how a software package like Rationale improves performance.

When we claim that a tool like Rationale is highly usable, we are not measuring Rationale’s usability on some independent, objective scale. Rather, we are saying that it is significantly more usable than relevant alternatives. Thus to make sense of the claim, we need some understanding of what the relevant alternatives are. What tool or tools do we standardly use to help us engage in informal reasoning or argumentation? If engaged in a complex debate over, say, carbon trading, or the war in Iraq, what do we use to help us organise and evaluate the arguments?

The answer is that, overwhelmingly, we use prose. By this I mean that we articulate our arguments using sentences organised sequentially on a page or pages, and using various strategies such as indicator words (“therefore” etc.), paragraphs, indentation and dot points to help illuminate how the parts of the arguments hang together. We use prose as a tool to help us develop arguments, as when we figure out what our argument is through producing and editing drafts; and to present arguments to others and even ourselves.

Argumentative prose can be considered a very generic or abstract kind of tool. Our use of prose is supported in turn by various more concrete technologies. For example we can use pen and paper, or we can use a word processor running on standard personal computing hardware. The support technologies have changed substantially over time, but the way we use those technologies to support reasoning and argumentation has remained essentially constant from the time of the ancient Greek philosophers through to the present day, which is why the works of Plato and Aristotle can still be part of the standard canon studied even by undergraduate philosophy students.

A tool can be very widely used even if it is not particularly usable. There are innumerable examples; thus, not so long ago, fountain pens were widely used for writing on paper, but they are less usable than ball-point pens, even if the result is sometimes more aesthetically pleasing. In the case of prose, people are generally so accustomed to using this tool, so ignorant of any alternative, and so blind even to the idea that prose is the tool they are using, that they fail to realise what its usability problems are or even that it has usability problems. Any deficiencies in a person’s reasoning, or presentation of reasoning, are attributed to deficiencies in their education or their intellectual capacities rather than being traced, at least in part, to inadequacies in the tools.

An “argument processor” such as Rationale, based not around prose but around custom diagrams, is an alternative to prose. Importantly, it is an alternative that was developed with the deliberate goal of being more usable than prose, so it would not be that surprising if it turned out in fact to be more usable. The interesting part is how it manages to be so.

First, any contemporary software tool is able to take advantage of the wisdom accumulated in decades of research into how interactive tools should be designed so as to best support our activities, particularly our cognitive activities. The lessons learned from this research are increasingly encapsulated in authoritative sources (textbooks, etc.) and embedded into the tools and conventions used to develop contemporary applications. They concern diverse issues in the design of a tool meant to maximise performance and an experience of “flow”: when and how to use (or avoid) “modes” or dialog boxes; how to use size and colour; how to align behaviour with users’ mental models; and so on. Rationale is in the fortunate position of being able to exploit this accumulated general wisdom and apply it, almost “off the shelf,” to the case of a tool to support reasoning activities.

Second, a tool like Rationale is adapted or tuned to the unique demands of reasoning and argumentation activities. Prose, and supporting technologies such as word-processors, are generic; they can be used for reasoning but are not designed specifically for such use. A purpose-built tool provides distinctive ways of representing reasoning structures and “affords” appropriate kinds of operations on those representations. Its design is constrained by the nature of reasoning activities, and at the same time not distorted or diluted by the need to support activities other than reasoning.

Third, an argument mapping package can take advantage of a wider range of basic representational resources. Consider some typical argumentative prose – for example, an opinion piece in a major newspaper. What means does the author use to help convey to you, the reader, how the various key claims hang together as an argument? Most obviously, the author might use explicit verbal indicators – phrases such as “This is because…” and “Hence…”. Other tactics include word and sentence ordering; paragraph breaks; and subtle cues based in the meaning of terms and the context in which the piece is written. That’s it. Upon reflection this is a remarkably meagre set; it almost wilfully ignores a range of other resources available not just to the contemporary computer user but even to a child with a pad and a bag of coloured pencils. These resources include:
  • symbolic argument structure markers, such as the philosophers’ standard “P1, P2, … C”;
  • colour – for example, using colour to represent the “polarity” of one proposition in relation to another, i.e., whether it is supporting or opposing;
  • shape;
  • lines or arrows;
  • position in space.
If your goal is to produce displays of reasoning which maximise comprehension, manipulation, evaluation and communication, why wouldn’t you take advantage of such cheap but powerful visual aids? If we can break the shackles of convention and habit, we are free to exploit any resources which in practice can aid the process of reasoning. Argument mapping software helps itself to these resources, thereby gaining an “unfair” advantage over traditional argumentative prose.
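
To make this concrete, here is a minimal sketch (invented structure and names, not Rationale’s actual data model) of how an argument map can encode polarity and position directly, rather than leaving them to indicator words:

```python
# A toy argument map: each node carries a claim, a polarity relative to
# its parent, and a place in a tree (standing in for position in space).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArgumentNode:
    claim: str
    polarity: Optional[str] = None   # "supports" or "opposes" its parent
    children: List["ArgumentNode"] = field(default_factory=list)

    def render(self, depth: int = 0) -> str:
        # Indentation stands in for spatial position; in a real package
        # the +/- markers would be rendered as colours.
        marker = {"supports": "+", "opposes": "-", None: "*"}[self.polarity]
        lines = [f"{'  ' * depth}{marker} {self.claim}"]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)

root = ArgumentNode("We should adopt carbon trading")
root.children.append(
    ArgumentNode("It reduces emissions at least cost", polarity="supports"))
root.children.append(
    ArgumentNode("Permit markets are easily gamed", polarity="opposes"))
print(root.render())
```

Even in this crude text rendering, the structure is carried by shape and position rather than by “therefore” and “because”; a package like Rationale renders the same structure with boxes, lines and colour.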

Current packages such as Rationale are just stages in an ongoing process of redesign, the ultimate goal of which is a tool so usable that it becomes like an invisible extension of our cognitive apparatus. Just as we are not aware of our own brains when we are thinking, but are aware of what our brains are helping us think about, so an ideal argument mapping package would be like the blind man’s cane, something “through which” our minds engage in complex deliberation, conscious only of the reasoning itself, not of any issue or difficulty in dealing with the tool. No package has reached that goal yet, and perhaps no package ever will; but the best of the current generation are at least, by design, getting significantly closer to that goal.


Anyone likely to be in Melbourne on Feb 19 is welcome to join the Victorian Skeptics for an informal talk:

[Talk poster: tim_van_gelder_poster.jpg]

The abstract is:

Academic philosophers, like most professionals, think they’re pretty good at what they do. I’ll present some general reasons for scepticism on this score. Then I’ll focus on one particular respect in which philosophers think they’re pretty good – teaching critical thinking. I’ll show detailed empirical evidence on critical thinking skills gains, which suggests that if you want students to get better at critical thinking, you should teach them critical thinking (not philosophy) and if you want them to get even better, you should teach them using argument mapping.

The talk is a blend of two things. First, a talk I gave about five years ago to various philosophy departments in Australia, in which I challenged the audience to come up with positive reasons to think that they are, in their core professional activities, any better than investment professionals such as stockbrokers, fund managers and their ilk, who have been shown by mountains of evidence to be useless at choosing superior investments, even if they are quite good at skimming vast sums of money from the savings of others. In response, philosophers generally came up with, at best, the kind of lame arguments they’d instantly ridicule others for making; the main outcome of all this, as far as I could tell, was resentment towards me for even raising the topic, which may partly explain why I haven’t been invited to talk at any philosophy department ever since.

The second thing is the work of a Masters student at the University of Melbourne, Claudia Alvarez, who has written on whether studying philosophy is, as philosophers claim, especially effective in developing critical thinking skills. Claudia did (or at least, carried through to completion) a meta-analysis which gives us the best available fix on whether this claim is true. In fact, if you make reasonable comparisons, it is hard to make a strong case that philosophy is especially effective, and it is markedly less effective than certain other strategies, such as… teaching critical thinking. The thesis will be completed and available very soon. (I’m happy to give a talk on this material at philosophy departments, but I don’t expect to be swamped with offers.)

It should be fun…

